The Spark that Lit the Hybrid Fire, Part I
Unthinkable thoughts, cognitive revolutions, natural born cyborgs, extended mind theory, and building better worlds to think in
Richard Hamming, a mathematician, computer scientist, and Manhattan Project member, once suggested that just like there are sounds we cannot hear, waves of light we cannot see, and flavours we cannot taste, there could be thoughts we cannot think:
“Evolution, so far, may possibly have blocked us from being able to think in some directions, there could be unthinkable thoughts.” - Richard Hamming
How do we know waves of light, flavours, particles, and other invisible units of reality exist?
We build tools.
We build tools that reach beyond our immediate senses and adapt them to our bodies. Telescopes, microscopes, and radiography all extend our technological capacities, helping us better understand the world around us.
So, what about our thoughts? How do we tap into “unthinkable thoughts”?
Cognitive Revolutions
Let’s start with history. The “Axial Age” refers to the mid-first millennium BCE, during which a cluster of changes in cultural traditions occurred in different places around the world. These changes most notably included the emergence of moralizing religions and philosophies like Platonism, Buddhism, Confucianism, Hinduism, and Judaism. One strand of Axial Age scholarship considers the period a “cognitive revolution”: the invention of writing and the codification of knowledge in written texts propelled what is known as “second-order thinking”—thinking about thinking itself.
Of course, the Axial Age is not without dispute, and other accounts place cognitive revolutions well before the advent of writing. One of the most popularized is outlined in Yuval Noah Harari’s Sapiens, which frames human history around three major revolutions: the Cognitive Revolution, the Agricultural Revolution, and the Scientific Revolution. He argues that these revolutions set us apart from other mammals because we were able to create and connect ideas about things that do not physically exist, and yet have come to define us—religion, capitalism, money, and politics. Harari argues that our most persistent cognitive technology, the myth, enabled us to coordinate around common aims, allowing us to overcome natural forces as no other mammal has.
I’ve simplified, but this is Harari’s argument for a Cognitive Revolution occurring between 70,000 and 30,000 years ago (well before the Axial Age). Myth-making not only scaled coordination between humans; it also scaled imagination. We became storytellers. Narration became the default mode for understanding the world—if you could speak it, persuade with it, sell it, it became reality. This is the world of myth, legend, and fantasy upon which collective beliefs were built, and it defines how we shape reality to this day.
Natural Born Cyborgs
While this formalized “Cognitive Revolution” occurred tens of thousands of years ago, the malleability of the human mind that powered it provides a medium for constant adaptation.
First, the very structure of the brain reveals something more like a system of brains, layered over evolutionary time and viewable from different perspectives: right and left hemispheres with different modes of function, cortex and neocortex, “lizard brain” and developed brain, cerebellum versus frontal lobe, Systems 1 and 2, and so on.
Second, the brain is a “constructive learning system”, one whose own basic computational and representational resources alter and expand (or contract!) as the system learns. These systems use early learning to build new basic structures upon which to base later learning.
Third, there’s sufficient evidence to suggest that parallel information processing occurs within us and that nonconscious processing happens throughout the body via interoception. This lets us handle larger and more complex pieces of information through unconscious channels in the body, conserving conscious capacity for other tasks.
Cognition, thus, is highly malleable and adaptive. This gets to the question inherent in defining cognition and shifts within it—if the mind is constantly adapting in response to our environment, where does the mind stop and the rest of the world begin?
In 1998, two cognitive scientists, Andy Clark and David Chalmers, published a paper opening with exactly this question, suggesting that cognition isn’t just a process that happens in our heads but one increasingly entangled with technology. Extended Mind Theory (EMT) proposes that cognition is not confined to a person’s physical brain but also includes external elements in their environment. Clark’s later book, Natural-Born Cyborgs, paints the very essence of humanity as defined by the evolving relationship between mind and tool. Where the two are traditionally viewed as opposed forces, he argues that technology is essentially human.
“With speech, text, and the tradition of using them as critical tools under our belts humankind entered the first phase of its cyborg existence.”
“As a result, it would invite us to systematically and repeatedly build better worlds to think in… Public language was the spark that lit the hybrid fire.”
—Natural-Born Cyborgs
Language was our first cognitive tool that extended our minds. It was the first spark that gave us unthinkable thoughts.
While the thought of Homo sapiens as “natural born cyborgs” might make some uncomfortable, there’s ample evidence that our psychology has transformed over time, from shared storytelling toward something far more individual and internal, because of our technology. In a sweeping overview of the development of “WEIRD” psychology (Western, Educated, Industrialized, Rich, and Democratic), Joseph Henrich argues in his book that Westerners are uniquely individualistic and analytical and think in terms of individual responsibility, in contrast to most other cultures, which identify with family first, think more “holistically”, and take responsibility as a group. This difference is one of psychology and culture, driven by the printing press and the spread of literacy. With writing came private reading through books, a new mode of thought that emphasized internal over shared cognition—a change frowned upon at the time.
This kind of individualized, WEIRD thinking has also motivated the use of contained analogies and metaphors in an effort to understand how our own black box of a brain works. From “brains as computers” to “brains as muscles”, we make the mistake of confining an expansive computational system to a box at all. Most of our early history as oral storytellers suggests it isn’t one.
More than two decades after “the extended mind” was proposed, author Annie Murphy Paul gives an updated view of the research in her recent book, The Extended Mind, arguing that the evidence for Clark and Chalmers’s theory has only grown stronger:
“... thought happens not only inside the skull but out in the world, too; it’s an act of continuous assembly and reassembly that draws on resources external to the brain… the kinds of materials available to “think with” affect the nature and quality of the thought that can be produced. The capacity to think well—that is, to be intelligent—is not a fixed property of the individual, but rather a shifting state that is dependent on access to extra-neural resources and the knowledge of how to use them.”—The Extended Mind
The capacity to think well is not a fixed state, but a dynamic one dependent on 1) tools to extend the mind and 2) knowledge of how to use them.
With this understanding, EMT gives us a practical lens on the transformative benefits of the evolving human-machine symbiosis emerging before us today—a symbiosis we were well designed for as natural-born cyborgs.
Mirror Mirror
But, there is also a loud and emerging view that says we’re not.
Since I published my excitement about the Human AI Hybrid last November, the release of Microsoft’s Bing AI, its emergent personas (Sydney, Riley, etc.), and its subsequent rollback have caused a collective hysteria worth confronting. In less than a year, the collective attitude has swung from poking fun at engineers who claim our AI shows sentience to seriously engaging with tech journalists who cite “unsettling”, “disturbing”, and “frightening” conversations they’re having with their chatbots. (Mike Solana’s piece digs into this.)
So, what’s going on?
We should take the real safety and ethical concerns being raised seriously, but they should be balanced with a deeper understanding of how these systems and our own minds work. These systems thrive on “context” relative to how they were trained, which is both their strength and their weakness as generalized tools. They’re also early in their development, which means the prompts, techniques, and intent really matter. If you’re looking for clicks and wild stories, you’ll be able to find them—just as you can on the internet, in books, with word processors, and with any other medium. Mind extension holds promise, but also the peril that we might use it for nefarious and naïve aims. This has always been the case.
The idea that people could use computers to amplify thought and communication, as tools for intellectual work and social activity, was not an invention of the mainstream computer industry or orthodox computer science, nor even homebrew computerists; their work was rooted in older, equally eccentric, equally visionary, work. You can't really guess where mind-amplifying technology is going unless you understand where it came from.—Howard Rheingold, Tools for Thought
November 2022’s ChatGPT moment made a new mode of intelligence accessible to the world. With the release of the ChatGPT API earlier this week, OpenAI is extending even more powerful and cheaper capabilities to developers (who can then build them into their own products) at incredible speed, marking another massive moment and a clearer sign of what’s ahead. Large-scale commercialization raises new questions and concerns. But the most practical takeaway, in my mind, is that we are now looking at a world that will require not only digital literacy to thrive but a new kind of AI literacy—the logical next step from writing to calculators to word processors to cognitive assistance.
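To make that concrete, here is a minimal sketch of what the new API looks like from the developer’s side, using the openai Python package as it shipped in early 2023 (the prompt contents below are hypothetical examples, not drawn from any real product). Note how the system message supplies the “context” these systems thrive on: change that one line, and the assistant’s whole persona changes.

```python
# A minimal sketch of calling the ChatGPT API with the openai Python
# package as released in early 2023 (pre-1.0 interface). The prompts
# below are hypothetical examples.
import openai

openai.api_key = "sk-..."  # your API key here

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message sets the context the model works within;
        # swapping it out changes the character of every reply.
        {"role": "system", "content": "You are a patient physics tutor."},
        {"role": "user", "content": "Explain quantum entanglement simply."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

A one-line change to the system message yields a different assistant entirely, which is part of why prompts, techniques, and intent matter so much.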
So while we’re fixated on the chatbot hysteria prompted by our journalists, we should remember the fears that came before: pen and paper, the typewriter, and the calculator each introduced a new medium that met old fears. Plato, writing in the Axial Age, argued that writing itself was dangerous; typewriters in the late 19th century were viewed as “an invader of privacy”; and calculators in the 1970s took 15 years to be integrated into education programs for fear of what they might do to “real learning”.
Old fears and claims of religious experiences with new technology are about as human as it gets—sometimes even among the greatest of minds. In 1957, MIT researcher and professor Dr. J.C.R. Licklider admitted to having “a kind of religious conversion” when he first used a computer, prompting him to dedicate much of his professional life to thinking about a truly “universal” interface as a new kind of “thinking center”. He would go on to propel the field of human-computer interaction, laying much of the foundation for the interactive systems we use today.
Perhaps some of our fear stems from a new push to redefine how we externalize our thoughts and knowledge—one where we again question who we are, what knowledge is, and what could be. First, we need to confront a darker image of ourselves than we’d like: an extended mind forced to question the messages it’s been sending itself (it’s not just Kevin). Second, having moved past that uncomfortable confrontation, we’ll be better positioned to build better worlds to think in.
Just another step into the future of our past.
Building Better Worlds To Think In
In our history, myth was used to construct coherent narratives to explain otherwise unexplainable phenomena. In a mostly oral civilization, this method worked well, allowing language, beliefs, and social cohesion to deepen and grow. But myths can also mislead when new forms of thought appear.
Writing, however, shifts attention from building a collective narrative to criticizing thought once it is externalized. In response, cognition becomes more internal and analytical; it starts to break down old narratives while leaving remnants of their structure. While writing has its roots in ideographic imagery like cuneiform and hieroglyphics, with precursors dating back to 5,500 BCE, it transformed human cognition everywhere it evolved.
In his analysis of the Axial Age, cognitive scientist John Vervaeke argues that the evolution from ideographic to alphabetic literacy was the crucial step in our cognition that spurred cultural transformation. As an information store, ideograms are information dense but not easily learned—so much so that managing them became its own occupation: the scribe. This was the early seed of knowledge work, one that would later spawn “computers” (humans who did computing), then “programmers” (humans who programmed the machine computers), and now perhaps “prompters” (humans who prompt the programs).
The emergence of alphabetic literacy not only enabled wider access to knowledge; it slowly allowed anyone to externalize their own thought processes. This is what distinguishes a cognitive technology under the “extended mind” lens: it allows new actions, but it also enables new cognitive processes. By storing both thought and knowledge outside of working memory, you enhance your metacognition—your ability to reflect on how you’re thinking. You also allow others to engage with you in critique, analysis, and revision, a cognitive feedback loop that can be shared.
Language was the spark, and writing was the hybrid fire, allowing for:
Faster correction of our own thinking
Greater awareness of our self-deception
Freed-up cognitive capacity for other tasks
Clark agrees, going so far as to say that second-order cognitive dynamics are “the deepest contribution of speech and language to human thought”. Clark also cites the work of Merlin Donald in his book Origins of the Modern Mind, which argues for two basic ways we’ve used language as a society: the mythic and the theoretic. Myth favors fluid storytelling, while the theoretic builds theories meant to be tested.
Vervaeke argues that this changed both the sense of the self and the sense of the world during the Axial Age. Most importantly, people changed where they placed responsibility, which ultimately spurred new concepts of morality and religion. Rather than chaos, warfare, and brutality being immutable states of the world, as told by myth and legend, people tapped into a newfound sense of agency and codified it through institutions. Clark likewise argues that “it is this capacity, in turn, that may have been the crucial foot-in-the-door for the culturally transmitted process of designer-environment construction: the process of deliberately building better worlds to think in.”
Better tools enable better cognition, which enhances our sense of agency, which in turn builds better worlds. As a collective, this is a cognitive shift from describing what is to analyzing how it is: from narrative to analytical thought.
The Harari view recognizes humanity as storytellers, with myth deeply embedded in human psychology—emphasizing a storytelling nature that remains static over time. The extended mind view says that human psychology is transformed over time in relationship with the tools we build, transforming collective modes of thought. Taken together, it’s worth going beyond the popularized narratives of AI as a force outside ourselves—and as personas with agency—and asking how it might be used to advance our cognition yet again.
More specifically, if writing revolutionized our cognition in antiquity, spawning philosophical and religious thought we still adhere to today, what happens when the writing talks back?
When Text Talks Back
Harari is also well known for drawing attention to the dangers of AI and the narratives it might tell us—that AI will eventually “know us better than we know ourselves”. While his concerns might strike new nerves today, our philosophizing will only be as good as its practical use. Through the EMT lens, we already have subconscious processing mechanisms in our minds, bodies, and social groups that know us better than our conscious minds do—and we use them to our advantage.
So, what might cognitive assistants look like that extend this power and shed light on ourselves, expanding the frontier of thoughts we can think?
There is already evidence that this is happening. For example, Dan Shipper, the CEO of Every, built a personalized chatbot with GPT-3 and his own journal entries from the past 10 years. Asking questions like “When in his life has the author been the happiest?” or “Why was the author so hurt by this?”, Dan describes his experience as one of delight and “aha” moments, admitting that “in some strange way, it felt like the AI knew me better than I knew myself.” More than just a “Ctrl + F” across journal entries, a new mode of dynamic language redefined the insight he could gain from his own written words.
It enabled new thoughts.
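For the technically curious, here is a hedged sketch of one way such a journal chatbot could be wired up: embed each entry, retrieve the entries most relevant to a question, and let the model answer from them. This is an illustration under assumed OpenAI endpoints from early 2023, not Shipper’s actual implementation; the file layout and prompt wording are hypothetical.

```python
# A sketch of a "journal chatbot": rank journal entries by semantic
# similarity to a question, then answer from the best matches.
# Not Dan Shipper's actual implementation; paths are hypothetical.
import glob
import numpy as np
import openai

openai.api_key = "sk-..."  # your API key here

def embed(text: str) -> np.ndarray:
    """Return the embedding vector for a piece of text."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

# Index every journal entry once (hypothetical file layout).
entries = [open(path).read() for path in glob.glob("journal/*.txt")]
index = [(entry, embed(entry)) for entry in entries]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def ask(question: str) -> str:
    """Answer a question using the three most relevant journal entries."""
    q = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    context = "\n---\n".join(entry for entry, _ in ranked[:3])
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer using only these journal entries:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp["choices"][0]["message"]["content"]

print(ask("When in his life has the author been the happiest?"))
```

The embedding index does the remembering; the chat model does the reading. The point is less the specific stack than that a thin layer of retrieval turns ten years of static text into something you can interrogate.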
As LLMs explode onto the scene, we’re already finding ways to use the emerging technology to learn more about how we work and learn best. (Very) early work (not yet peer reviewed) looking at ChatGPT in the workplace shows positive signs on productivity, job satisfaction, and self-efficacy, with the largest gains going to lower-ability workers. Beyond personal chatbots, there is new research on the potential for student learning when humans and machines work together. A recent paper on using technology to enable deliberate practice concludes:
“Our evidence suggests that given favorable learning conditions for deliberate practice and given the learner invests effort in sufficient learning opportunities, indeed, anyone can learn anything they want.”
As with writing, typewriters, calculators, and word processors, how might we build a better world to think in with our new cognitive assistants?
Could more people engage with research and PhD programs if they had access to their own research assistant?
Will more high school students be able to understand a world of quantum physics and complex engineering with personalized tutors?
Are more people able to grow into the best versions of themselves with free access to a personalized cognitive coach?
What new occupations and hobbies are created when everyone has access to “Universal Basic Intelligence”? (great post here)
What is the new realm of unthinkable thoughts?
In summary, we asked how we might enable “unthinkable thoughts” through our tools and, against the backdrop of AI hype and hysteria, used the frame of extended mind theory to draw the following conclusions:
The defining feature of humanity can be understood and distinguished through various lenses, one of which is storytelling. Language, as the externalization of symbolic reasoning, goes a level deeper: it is the atomic unit of human cognitive power, tapping us into shared information processing.
Regardless of your view or definition of a “cognitive revolution”, a brief history of human cognition shows that (a) there have been consequential shifts in how we think over time, and (b) these shifts share a close relationship with our tools, a relationship bound to deepen in a technological future.
While not perfectly applicable, “extended mind theory” is a useful framework for understanding the potential of dynamic AI systems to aid, rather than distract, limit, or replace human cognition.
Over time, we’ve shifted from a narrative to an analytical mode of thought. While much recent attention has fixated on AI risks and blunders, carefully crafted and intentional use of dynamic AI tools shows promise in extending our analytical mode into one that is more reflective and imaginative. As with cognitive shifts before, new modes of thought can reinvent the notion of “the self” and the realms of thought the self can extend into.
In the next post, we’ll explore questions raised here in more detail by diving deep into:
the history of human-computer interaction
the power of the “interface”, untangled
the false dichotomy between artificial intelligence and intelligence augmentation
practical ways people are already creating their own human AI hybrid systems