Heideggerian AI

Dreydegger meets ChatGPT

Published August 28, 2023

Emerging from the philosophical reflections of Hubert Dreyfus, a renowned Heidegger scholar and foremost critic of early AI endeavors, the notion of "Heideggerian artificial intelligence" took shape as a practical application of existential phenomenology to technological developments. During his early tenure teaching phenomenology at MIT in the 1960s, Dreyfus was tasked by the RAND Corporation with evaluating the groundbreaking AI work of Herbert A. Simon and Allen Newell. This engagement ignited a prolonged and spirited debate between Dreyfus and the AI research community that endured until his passing in 2017.


Dreyfus consistently expressed skepticism regarding the field's ambitious forecasts of AI progression, particularly its claims of imminent parity between machine and human intelligence. In 1957, for instance, Simon predicted that a chess program would outplay humans within a decade; the technology only reached that milestone some forty years later. Dreyfus's critiques were vindicated, exposing just how nascent the field actually was at the time. Drawing on his expertise in Heidegger and Merleau-Ponty, Dreyfus was instrumental in shifting the paradigm from classical Good Old Fashioned AI (GOFAI, or symbolic AI) towards today's embrace of neuro-symbolic and deep learning methodologies.


Dreyfus's seminal 1972 manifesto, What Computers Can't Do, wherein he doubled down on the impossibility of disembodied machines mimicking human cognitive function


Though Dreyfus did not live to witness the emergence of OpenAI's ChatGPT, the integration of AI into our quotidian experiences renders his insights more pertinent than ever. His foundational critique remains crucial in contemporary AI discourse, which emphasizes the necessity for AI to achieve an embedded-embodiment to circumvent "the frame problem"—the issue of finding sufficient structures of axioms for a viable description of the environment in which an AI operates. In other words, the frame problem is the challenge of specifying which parts of a system's knowledge or environment need to be updated in response to a particular change, and which parts remain unchanged. This is something that humans are intuitively good at; it's a feature of our common sense, which stems from our natural, embodied situatedness within our environment.
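The difficulty is easier to see with a toy example. The Python sketch below is only an illustration, with invented facts and an invented "move" action rather than anything drawn from a real system; it shows why a purely symbolic model has to be told, fact by fact, what does not change when something else does.

```python
# A toy illustration of the frame problem in a hand-built symbolic world model.
# The facts, the rooms, and the "move" action are invented for this example.

world = {
    "robot_in_room_A": True,
    "robot_in_room_B": False,
    "cup_on_table": True,
    "door_open": True,
    # ... a realistic model would hold thousands of such facts
}

def move_robot(state, source, target):
    """Apply a 'move' action to the symbolic state.

    The two effect axioms below say what changes. Everything else is assumed
    to stay the same, but that assumption has to be encoded somewhere: for
    every action, the system needs frame axioms covering every fact it is
    NOT supposed to touch.
    """
    new_state = dict(state)                       # carry every unrelated fact forward
    new_state[f"robot_in_room_{source}"] = False  # effect: robot leaves the source room
    new_state[f"robot_in_room_{target}"] = True   # effect: robot arrives in the target room
    # Does moving the robot tip the cup? Shut the door? A human never asks;
    # a symbolic system must settle the question for every fact/action pair
    # or risk an inconsistent picture of the world.
    return new_state

after = move_robot(world, "A", "B")
print(after["cup_on_table"])  # True, but only because we copied it forward unchanged
```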

ChatGPT, a manifestation of state-of-the-art language processing and symbol manipulation, is engineered not to replicate human sentience, but to analyze and reproduce patterns discerned from extensive human-generated data. This endeavor, while monumental, serves as a testament to Dreyfus's core assertion: genuine understanding may require more than mere pattern recognition. When asked how it addresses the problems posed by the concept of Heideggerian AI, ChatGPT-4 replied:

“ChatGPT, while sophisticated in its language understanding and generation capabilities, does not align with the concept of Heideggerian AI. It lacks the embodied experience and the ability to interact with the physical world. It doesn't have personal experiences or emotions, it doesn't have a body, and it doesn't perceive or interact with the world in the same way a human would. ChatGPT operates purely on symbolic manipulation, based on patterns in the data it was trained on. It does not have a direct understanding of the world, a self, or subjective experiences. So, while it can generate impressive and seemingly understanding responses, this is largely a product of the statistical patterns it has learned rather than an understanding born out of a lived, embodied experience.”


In AI's nascent phase, the aspiration was to engineer technology that mirrored or even surpassed the cognitive capacities of humans. This vision was grounded in a then-contemporary hypothesis on human cognition: proponents of this AI future posited that our conscious processes could be likened to a system of mental representations. Ironically, this viewpoint, what we now call the Representational Theory of Mind, had long been central to Western philosophy, traceable to thinkers as far back as Aristotle and popularized by Descartes. However, the long-upheld theory faced criticism and re-evaluation in the philosophical realm, particularly in the 20th century. The term "artificial intelligence," introduced by John McCarthy in 1955, came into prominence just as Ludwig Wittgenstein's posthumously released "Philosophical Investigations" (1953) provided substantial arguments against the traditional theory of mental representations. Wittgenstein's assertions were reminiscent of Heidegger's seminal work "Being and Time" (1927). Thus, as philosophers began to pivot away from this age-old framework, AI pioneers were unknowingly building upon it.

Descartes’ illustration of mind-body dualism 



During the era of AI's inception, many computer scientists believed their theory of conscious thought to be groundbreaking. At MIT, where Dreyfus lectured in the early 1960s, students from the Artificial Intelligence Lab who attended his course on Heidegger were taken aback: philosophers had been grappling with the metaphysics of consciousness for centuries, yet the AI Lab seemed to have settled on a unified framework for understanding it with unprecedented speed.


Unbeknownst to the AI faculty, their conclusions inadvertently echoed philosophical tenets that had been the bedrock of intellectual discourse since the 17th century. Their hypotheses either directly or indirectly drew from the rationalist traditions of philosophers like Hobbes, Descartes, Leibniz, and Kant—Hobbes postulated that thinking was akin to computation; Descartes introduced the representational theory of the mind; Leibniz believed that every piece of knowledge could be distilled into a set of fundamental elements.


AI pioneer Herbert A. Simon proposed that both humans and computers function as physical symbol systems. We absorb information, craft mental models of phenomena within our cognitive architecture, and derive rules to make sense of our environment. Central to this viewpoint is the belief that human mental processes can be emulated in symbolic forms, paving the way for computers to operate on similar informational structures.
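A minimal sketch can make the symbol-system picture concrete. The Python below uses invented facts and rules, not anything drawn from Simon's own programs; it simply treats knowledge as discrete tokens and "thinking" as rule-governed manipulation of those tokens.

```python
# A minimal sketch of the "physical symbol system" idea: knowledge as symbolic
# tokens, cognition as rule-governed manipulation of those tokens.
# The facts and if-then rules below are invented for illustration.

facts = {"raining", "outside"}            # symbolic tokens describing a situation
rules = [                                 # if-then productions over those tokens
    ({"raining", "outside"}, "getting_wet"),
    ({"getting_wet"}, "seek_shelter"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all present,
    adding its conclusion, until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'raining', 'outside', 'getting_wet', 'seek_shelter'}
```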


However, the terrain gets thorny when delving into the realm of meaning and significance. The Cartesian model of mental representation aimed to decode the world as a tapestry of neutral facts, upon which we layer meanings or functionalities (e.g., a hammer exists primarily for the act of hammering). Our mental content has an intentional directedness—it is about, or refers to, something. Upon encountering an object of experience, we create and store a representation of it in our minds. These mental constructs come bundled with semantic attributes—they bear the weight of truth, accuracy, relevance, or congruency. The mental representation of a blue banana is inaccurate; the idea that Queen Elizabeth is dead is true; the desire to drink a chair is inappropriate. Furthermore, our intentional mental states (what we might think of as feelings or emotions) are directly related to these mental representations. I miss my husband, I desire a snack, I fear snakes. These representations and their accompanying intentional states are all made possible through our perceptual sensory experience with the material referent. Even when we use our imagination, we draw on data we've accumulated through experience.


The Intentional Arc


Heidegger's philosophical framework contends that conventional theories often bypass a fundamental concept he called "readiness-to-hand." This concept delves into our innate engagement with the world, our organic interaction with phenomena, and an inherent comprehension of their context, relationships, and potentialities. Heidegger denotes these experiential entities as "pragmata." This term transcends the ordinary connotation of "thing" to encapsulate deeds, acts, encounters, obligations, and matters.


In reference to the representational theory of mind, Heidegger posits that the semantic values we assign to objects become an added layer of abstraction. By linking a specific function like "hammering" to an object like a hammer, we inadvertently disengage it from its broader web of relationships—its association with nails, its role in construction. When we wield a tool such as a hammer, we seldom dwell on its essence or its intended purpose. These attributes are ingrained and pre-reflective; the object stands at our disposal. Our relationship with the world is one of tacit familiarity. Objects and phenomena often exist in the backdrop of our conscious attention, manifesting with a sense of passive acknowledgment. Pragmata present themselves in an unobtrusive, unreflective manner, largely guided by their utility. It is typically when an object deviates from its functional norm—for instance, when a hammer breaks—that we are jolted into an active consciousness of its essence. Similarly, as we navigate familiar terrains while driving, elements like the steering wheel, the traffic, or the indicators remain in the peripheries of our focus. It is only when the routine is disrupted—say, by a sudden brake from the car ahead—that these pragmata snap into sharp focus.


This perspective illuminates that pragmata, whether tangible or conceptual, do not always command our direct attention. At times, they linger implicitly in the periphery of our intentional directedness. Their emergence within our consciousness varies in intensity and centrality; this holds true even when they are conjured within our mental theater, a notion further buttressed by Wittgenstein’s assertion that linguistic constructs allow us to visualize scenarios within our inner sanctum.


In his 1966 essay "Phenomenology and Artificial Intelligence," Dreyfus delineates the distinctions between human conscious thought and AI's symbolic operations. Consider a chess-playing AI: it determines optimal moves by methodically scanning all possibilities and projecting their outcomes. In contrast, its human counterpart, while still strategizing, harnesses an innate sense of awareness. Rather than examining every potential move, the human player first hones in on a particularly promising segment of the board, and from this vantage point, evaluates the succeeding moves. The AI's systematic approach contrasts starkly with the human's intuitive, non-digital form of processing, rooted in our unique consciousness. This differential processing is accentuated by what Dreyfus terms "horizonal awareness" – borrowing from Husserl’s concept of perceptual horizon. It signifies the spectrum of one's perception, inclusive of all that is visible from a given perspective. This means that even elements at the periphery of our focus contribute to our cognitive understanding. Dreyfus's fundamental critique underscores that symbolic AI's attempts to mimic consciousness fall short, primarily because they cannot replicate the nuances of embodied interactions: encompassing awareness, commonsense, relativity, and peri-phenomena, collectively referred to as the "frame problem."
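The "methodical scanning" Dreyfus describes is essentially game-tree search. The sketch below is a bare minimax procedure in Python; the Game interface is hypothetical, and real chess engines add pruning and heuristics, but the underlying principle of exhaustively generating and scoring continuations is the same.

```python
# Plain minimax over an abstract game tree: the machine's "methodical scanning"
# that Dreyfus contrasts with the human player's selective, intuitive focus.
# The Game protocol below is a hypothetical interface, not a real engine's API.

from typing import Protocol, Iterable

class Game(Protocol):
    def legal_moves(self, state) -> Iterable: ...
    def apply(self, state, move): ...
    def is_terminal(self, state) -> bool: ...
    def score(self, state) -> float: ...  # evaluation from the maximizer's viewpoint

def minimax(game: Game, state, depth: int, maximizing: bool) -> float:
    """Exhaustively evaluate every branch down to `depth` plies.

    Unlike the human player, who first narrows attention to a promising region
    of the board, this procedure treats every legal move as equally worthy of
    consideration until the arithmetic says otherwise.
    """
    if depth == 0 or game.is_terminal(state):
        return game.score(state)
    values = (
        minimax(game, game.apply(state, move), depth - 1, not maximizing)
        for move in game.legal_moves(state)
    )
    return max(values) if maximizing else min(values)
```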



Husserl's concept of intentional directedness, horizons, and noema/noesis



Several AI scholars at MIT imbibed this critique, frequently via dialogues with Dreyfus. He emphasized that the environments and objects humans encounter aren't mere models; they are the world in its entirety, replete with context, relationships, dynamism, and ephemerality. These mutable values, augmented by our horizonal awareness, contribute to the conundrum termed the “commonsense knowledge problem.” Our existence, interwoven with myriad direct and indirect acts, intentions, connections, and inferences, is so instinctive that its complexity is often taken for granted. However, translating this intricate web into an intangible information system is a formidable challenge.


AI pioneers initially sought to construct vast databases containing millions of facts about objects and their functions, erroneously assuming this would address the commonsense knowledge issue. Persistent critique from Dreyfus, coupled with numerous experimental setbacks, compelled a rethinking of strategies. Rodney Brooks introduced Cog, a humanoid robot rooted in behaviorism. Phil Agre conceptualized "interactionism," utilizing it to craft a virtual agent, Pengi. Walter Freeman pioneered an artificial rabbit-brain based on a neurodynamic model. These groundbreaking endeavors recognized the significance of environment and embodiment, advancing AI by underscoring the body's role in constructing meaning and understanding relativity. However, these Heideggerian AI attempts, while innovative, merely sidestep the frame problem rather than resolving it. For AI to match the capacity of human cognitive function, it requires a stream of tangible, dynamic, interconnected data in order to develop an accurate structure of relevance and significance. To speak about the world and its phenomena requires a framing of the interrelatedness and symbiosis of reality.
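A toy version of such a fact database (the triples below are invented for illustration) shows why accumulation alone sidesteps the problem: the system can enumerate everything it stores about an object, but nothing in it says which facts matter in a given situation.

```python
# A toy "commonsense database" of the kind the early projects envisioned.
# The triples are invented; the real efforts imagined millions of them.

facts = [
    ("hammer", "used_for", "driving nails"),
    ("hammer", "made_of", "steel and wood"),
    ("hammer", "found_in", "toolboxes"),
    ("nail", "used_with", "hammer"),
    ("water", "boils_at", "100 C at sea level"),
]

def lookup(subject: str):
    """Return every stored fact about a subject: all of them, flatly,
    with no sense of which one is relevant to what the asker is doing."""
    return [fact for fact in facts if fact[0] == subject]

# The database answers "what do we know about hammers?" but not
# "which of these matters right now, in this situation, for this task?"
print(lookup("hammer"))
```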


The limitations of current AI technology, particularly with models like ChatGPT, become evident in our daily interactions. When querying ChatGPT for current information, users are reminded of its knowledge cutoff in September 2021. Even if the data were refreshed in real time, the intricate web of cause-and-effect relationships and the comprehensive implications of any single change would remain daunting challenges. As a result, the responses generated are fundamentally based on the statistical patterns present in its training data, not an authentic grasp of or sensitivity to context and intention. At present, ChatGPT's inability to adjust dynamically to shifts in relevance and significance stems from the fact that its foundational data remains static.


While ChatGPT's training corpus is vast and provides more context than previous AI models, it's still insufficient to address this fundamental challenge. A broader spectrum of training data doesn't inherently confer upon the AI a deeper sense of significance, context, truth, accuracy, or congruency. Instead, it facilitates a more refined mimicry of human speech, patterns, and reasoning, creating an illusion that the AI is deliberating based on user input. However, users frequently face challenges that expose ChatGPT's lack of genuine awareness and commonsense. If prompts aren't meticulously detailed, the model often struggles to intuit the user's intention. And even with a well-framed question, ChatGPT can occasionally produce answers that seem nonsensical, a phenomenon often described as “hallucinations.” Interacting with ChatGPT can occasionally feel akin to dialoguing with a naive child, lacking a mutual foundational understanding. While ChatGPT's primary objective is to identify, anticipate, and emulate human behavioral patterns, this is a far cry from genuine comprehension of context and relevance. The AI's responses are often reflections of its training, not a deep understanding of the problem at hand.

For AI to transcend these limitations, there's a compelling need for some form of "embedded-embodiment" within the world, enabling it to genuinely interface with and understand our dynamic environment.





“Given a dynamically changing world, how is a nonmagical system … to retrieve and (if necessary) to revise, out of all the representations that it possesses, just those representations that are relevant in some particular context of action?”


- Michael Wheeler in Reconstructing the Cognitive World





