To build a machine one must know what its parts are and how they fit together. To understand the machine one needs to know what each part does and how it contributes to its function. In other words, one should be able to explain the “mechanics” of how it works.
According to a philosophical approach called mechanism, humans are arguably a kind of machine – and our ability to think, speak, and understand the world is the result of a mechanical process we don’t yet understand.
To better understand ourselves, we can try to build machines that mimic our abilities. In doing so, we would have a mechanistic understanding of those machines. And the more of our behavior the machine shows, the closer we may get to a mechanistic explanation of our own mind.
This is what makes AI philosophically interesting. Advanced models like GPT-4 and Midjourney can now mimic human conversation, pass professional exams, and generate beautiful images with just a few words of prompting.
But despite all the progress, questions remain unanswered. How can we make something that is aware of itself, or aware of others? What is identity? What is meaning?
While there are many competing philosophical accounts of these things, all have so far resisted mechanistic explanation.
In a series of papers accepted for the 16th Annual Artificial General Intelligence Conference in Stockholm, I give a mechanistic explanation of these phenomena. The papers describe how we could build a machine that is aware of itself, of others, of itself as perceived by others, and so on.
Intelligence and design
Much of what we call intelligence comes down to making predictions about the world with incomplete information. The less information a machine needs to make accurate predictions, the more “intelligent” it is.
For any job, there is a limit to how much intelligence is actually useful. For example, most adults are smart enough to learn to drive, but more intelligence is unlikely to make them a better driver.
My papers describe the upper limit of intelligence for a given task, and what is required to build a machine that attains it.
I call the idea Bennett’s Razor, which in non-technical terms means that “statements should not be more specific than necessary”. This differs from the popular interpretation of Ockham’s Razor (and mathematical descriptions thereof), which favors simpler explanations.
The difference is subtle, but significant. In an experiment comparing how much data AI systems need to learn simple maths, an AI that preferred less specific explanations performed as much as 500% better than one that preferred simpler explanations.
Exploring the implications of this discovery led me to a mechanistic explanation of meaning – something called “Gricean pragmatics”. This is a concept in the philosophy of language that looks at how meaning is related to intention.
To survive, an animal must predict how its environment, including other animals, will act and react. You wouldn’t hesitate to leave a car unattended near a dog, but the same can’t be said for your steak lunch.
Being intelligent in a community means being able to deduce others’ intent, which stems from their feelings and preferences. If a machine were to reach the upper limit of intelligence for a task that depends on interactions with a human, then it should also correctly infer intent.
And if a machine can attribute intent to the events and experiences that happen to it, this raises the question of identity and what it means to be aware of yourself and others.
Causality and identity
Suppose I see John wearing a raincoat whenever it rains. If I force John to wear a raincoat on a sunny day, will it rain?
Of course not! For a human being, this is self-evident. But the subtleties of cause and effect are more difficult to teach a machine (interested readers can check out The Book of Why by Judea Pearl and Dana Mackenzie).
In order to reason about these things, a machine has to learn that “I made it happen” is different from “I saw it happen”. Normally, we would have to program this understanding into it.
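The distinction can be sketched in a toy simulation (my own illustration, not taken from the papers): rain causes the raincoat, so *seeing* the raincoat is strong evidence of rain, while *forcing* the raincoat on leaves the weather untouched.

```python
import random

random.seed(0)

def day(force_raincoat=None):
    """One simulated day: rain causes John to wear a raincoat."""
    rain = random.random() < 0.3
    # Intervening ("I made it happen") overrides the causal mechanism,
    # but does not change the weather upstream of it.
    raincoat = rain if force_raincoat is None else force_raincoat
    return rain, raincoat

# Observing ("I saw it happen"): on days John wears a raincoat, it rained.
observed = [day() for _ in range(10_000)]
p_rain_seen = sum(rain for rain, coat in observed if coat) / sum(
    coat for _, coat in observed
)

# Intervening: force the raincoat on; rain keeps its base rate (~0.3).
intervened = [day(force_raincoat=True) for _ in range(10_000)]
p_rain_forced = sum(rain for rain, _ in intervened) / len(intervened)

print(f"P(rain | saw raincoat)    = {p_rain_seen:.2f}")     # 1.00
print(f"P(rain | forced raincoat) = {p_rain_forced:.2f}")   # ~0.30
```

A machine that only tracks correlations conflates the two probabilities; one that represents its own actions separately from its observations does not.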
However, my work explains how we can build a machine that performs at the upper limit of intelligence for a task. By definition, such a machine must correctly identify cause and effect – and therefore also deduce causal relationships. My papers investigate exactly how.
The implications of this are profound. If a machine learns “I made it happen”, then it has to construct concepts of “I” (an identity for itself) and “it”.
The ability to infer intent, learn cause and effect, and construct abstract identities are all interrelated. A machine that reaches the upper limit of intelligence for a task must exhibit all of these capabilities.
This machine constructs an identity not only for itself, but for every aspect of every object that helps or hinders the completion of the task. It can then use its own preferences as a baseline to predict what others may do. This is similar to how humans tend to ascribe intent to non-human animals.
So what does it mean for AI?
Of course, the human mind is much more than the simple program used to conduct experiments in my research. My work provides a mathematical description of a possible causal pathway to creating a machine that is demonstrably self-aware. However, the details of designing such a thing are far from settled.
For example, human-like intent would require human-like experiences and feelings, which are difficult to construct. Moreover, we cannot easily test for the full richness of human consciousness. Consciousness is a broad and ambiguous concept that encompasses the narrower claims above, but should be distinguished from them.
I have given a mechanistic explanation of aspects of consciousness – but this alone does not capture the full richness of consciousness as humans experience it. This is just the beginning, and future research will need to expand on these arguments.