Original article can be found here (source): Artificial Intelligence on Medium
What is the nature of your reality? What if your consciousness, your subjective reality, does not bring you closer to the objective reality of our world? This is the question that Professor Donald Hoffman’s book “The Case Against Reality” tries to answer.
Through his journey in the fields of evolutionary theory, philosophy, and cognitive science, Professor Hoffman pieces together a compelling narrative that poses many poignant questions to the next generation of researchers. As we develop increasingly intelligent AI systems, perhaps we should take a moment to look at the world from the different perspectives of a cognitive scientist.
At the beginning of this year, after watching Professor Donald Hoffman’s interviews and reading his book, I had the great pleasure of sitting down with him to have a conversation about his research.
It’s a conversation that opened up my mind to the interconnected world of philosophy, psychology, evolution, neuroscience, and artificial intelligence. At the intersection of artificial general intelligence and cognitive psychology, consciousness research will remain both a great inspiration as well as a great challenge for researchers in the years to come.
In Professor Hoffman’s book, “The Case Against Reality”, what we think of as the nature of our reality is simply our subjective reality, our current interface. Beneath this interface lies a world of objective consciousness, a place where our current notions of space and time will need to be re-evaluated. In this world, conscious agents interact with one another according to a dynamics that may explain many of our world’s current mysteries. One simple example of such a mystery is our conscious experience of the “taste of chocolate”: we have no scientific theory that describes how brain activity might create that conscious experience. This new paradigm of research may lead to new theories that not only work in our current world but also have the potential to unleash other conscious experiences.
Dr. Zubin Damania’s interview with Professor Hoffman is an in-depth look at the world described in his book and sets the background for this interview.
What is the distinction between intelligence and consciousness? Is consciousness essential for intelligence?
Typically in the cognitive sciences, we have a functionalist view of intelligence. In some sense, by definition, an agent behaves intelligently if it acts with certain functional properties. For example, if it has goals, it acts to accomplish these goals. If you define intelligence that way then it’s utterly distinct from consciousness, at least on the face of it. You could have intelligent, unconscious machines that act to achieve goals. That is the standard way of viewing intelligence — as a purely functional notion.
Some of my colleagues think that intelligence necessarily involves consciousness. This is an intuitive claim without rigorous research to back it up. As a scientist, until I have a theory that is mathematically precise I don’t know how to test such a claim. So it’s not clear to me what, precisely, the non-functional view of conscious intelligence might be.
When you think about some of the agents used by current AI algorithms to model the world, do you think that consciousness can be developed from unconscious agents?
Most of my colleagues are physicalists. They have a functional view of intelligence. They think that consciousness is not fundamental. When they talk about intelligence, it’s about agents without consciousness interacting and developing a kind of swarm intelligence.
These agents can develop memory and increasingly intelligent behavior. Given these premises, no one has proposed a scientific theory that explains how consciousness emerges from unconscious agents.
If you don’t start with consciousness in the basic agent, no one has a scientific theory that can explain, with scientific precision, how the collective intelligence of these agents can give rise to consciousness.
In your book, you talk about conscious agents, why are these conscious agents important?
Even without consciousness, from a functional point of view, you can get really high levels of intelligence. My concern with conscious agents is not so much in relation to intelligence. I am examining simple issues such as our conscious experience of the smell of chocolate, or the taste of garlic.
How can we as scientists develop an agent-based system that actually accounts for simple conscious experiences that even a rat might have? I’m not talking about advanced human intelligence or self-awareness. I’m just concerned with very low-level sensory experience. Agent-based models with unconscious agents may create unlimited intelligence, but they have so far failed to explain the genesis of something as simple as the conscious experience of the taste of chocolate. This is a big problem.
In my theory, I’m looking at simple conscious agents. I’m looking at how these conscious agents can interact functionally. Within this framework, specific interesting functional interactions can lead to specific new kinds of conscious experiences: there’s an infinite range of possible functional interactions that may lead to new kinds of higher levels of consciousness. So, this is a huge field to explore for the next generation of scientists. Human consciousness is just a tiny aspect of it. We are not the be-all and end-all.
We may need sophisticated AI to help us go where our imagination may otherwise fail to go. They may even develop concepts that we can’t understand. A good theory of consciousness that defines exactly what we mean by conscious agents will tell us when new conscious agents emerge. For instance, a dynamical system of conscious agents that satisfies the definition of a conscious agent would constitute a new conscious agent.
How do you think current AI research will progress?
Judea Pearl has a good diagnosis of the current problem with AI. Current AI systems can only learn correlations. They do not build causal models of the world. Even though humans are causal modelers, we didn’t have a science of causal modeling until the last twenty years. Now that we have a mathematically precise theory of causal modeling and causal inference, the next generation of AI, perhaps within the next ten or fifteen years, will also incorporate causal modeling. If this happens, then AIs will no longer be brittle: they will be flexible intelligent machines.
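Pearl’s distinction between learning correlations and building causal models can be sketched with a toy simulation. Everything below (the variables, the coefficients, the use of NumPy) is my own illustrative construction, not anything stated in the interview:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed structural causal model: a hidden confounder Z drives both
# X and Y, while X itself has NO causal effect on Y.
z = rng.normal(size=n)
x = z + rng.normal(size=n)           # X is caused by Z
y = 2 * z + rng.normal(size=n)       # Y is caused by Z only

# A correlation-only learner regresses Y on X and finds a strong,
# but spurious, association.
naive_slope = np.cov(x, y)[0, 1] / np.var(x)

# Simulating the intervention do(X := x0) severs the Z -> X link;
# the interventional slope reveals X's true (null) causal effect.
x_int = rng.normal(size=n)           # X now set independently of Z
y_int = 2 * z + rng.normal(size=n)   # Y still depends only on Z
causal_slope = np.cov(x_int, y_int)[0, 1] / np.var(x_int)

print(round(naive_slope, 2))   # close to 1.0: spurious association
print(round(causal_slope, 2))  # close to 0.0: no causal effect
```

A purely correlational system would act on the first number; a causal modeler, asking what happens under intervention, would act on the second.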
Can Artificial General Intelligence in the functional sense have the kind of deeper moral-ethical code without consciousness?
It comes down to a couple of interesting questions: what do we mean by ethics and morality in the very deep sense? Is consciousness required for what we mean by ethics and morality? Both are difficult questions. I’ve looked at different aspects of this from the religious sense as well as from the academic sense. I’ve never found any theory of morality that is deep enough to answer these questions.
Most of the theories of ethics are human-centric. They deal with our human notions of what is right and wrong. But species differ widely in their behaviors; some routinely commit siblicide. What is the norm for one species to ensure its survival might not be the norm for another. So, what theory of right and wrong is deep enough to explain these differences across species? Until we have such a theory, I’m afraid that my notion of right and wrong is simply too human-centric. My guess is that whatever theory of morality we come up with will have to be grounded in the notion of consciousness. I don’t know what to do with that notion yet. I don’t see how morality is just a functional thing.
As a scientist, for me, again it starts with a mathematically precise theory of consciousness and a theory of morality. Until we have that, your guess is as good as mine.
In this human world, many of us have brains that don’t seem to fit the norm. For instance, we have not been able to find the functional causes of many mental illnesses even though we’ve studied them for a long time now. Do you think that in the objective conscious world that you describe in your book, we may find spaces for these conditions?
Absolutely yes. The potential variety of the kinds of consciousness and styles of consciousness is infinite. What we call normal human consciousness is a probability-zero subset of the possible varieties of consciousness. So yes, we may find additional spaces for them.
In your book, you gave the example of certain people’s enhanced sensory experiences when taking psychedelics. People who took them often feel as if they just unleashed a “portal” into their own consciousness. They sometimes describe their experiences as a deeper level of imagination. Do you think that the effect of this “portal” being left open is related to a higher level of imagination?
It’s conceivable that some people may have experiences beyond our space-time interface into the realm of conscious agents in ways that the rest of us do not. What we want is a deeper scientific framework, a mathematical model of consciousness, that informs empirical tests of the claim that a person has experience beyond our current space-time interface.
In your book, you talk about the theory of evolution by natural selection and the concept of “fitness”. If in the objective consciousness world, there are no resource limitations, then there’s no competition. Does that mean that our notion of “fitness” won’t apply?
Quite possibly. In evolution by natural selection, the fundamental issue is limited resources. If resources are unlimited, then there would be no need to compete. Without competition, there’s no natural selection. Natural selection is a brutal form of learning.
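The point that selection requires competition over limited resources can be illustrated with a toy replicator simulation. The model below (two types, the 10% fitness advantage, the Moran-style update) is my own illustrative construction, not something from the interview:

```python
import random

random.seed(1)

# Assumed toy model: type "B" reproduces 10% faster than type "A".
FITNESS = {"A": 1.0, "B": 1.1}

def evolve_limited(pop, steps):
    """Limited resources: population size is capped, so every
    fitness-weighted birth replaces a random death (a Moran-style
    process). The cap creates competition, letting selection act."""
    pop = list(pop)
    for _ in range(steps):
        parent = random.choices(pop, weights=[FITNESS[t] for t in pop])[0]
        pop[random.randrange(len(pop))] = parent
    return pop

# With competition, the fitter type tends toward fixation.
capped = evolve_limited(["A"] * 50 + ["B"] * 50, 20_000)

# Without a resource cap, both types simply grow exponentially:
# "A" is never driven out, so there is nothing to select against.
a, b = 50.0, 50.0
for _ in range(100):
    a, b = a * 1.05, b * 1.10

print(capped.count("B"), round(a))
```

In the capped population the slower replicator is squeezed out; in the uncapped one both lineages persist and grow, which is the sense in which no limitation means no selection.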
Do you think that in the realm of the objective consciousness world you will need a new theory of evolution?
Yes. We need a new theory of evolution, in the broad sense of a new dynamics; it may look nothing like evolution by natural selection. We need dynamics of consciousness. One of the constraints on this dynamics is that if we project it back to our current space-time, then it must look like evolution by natural selection in that space-time interface.
In a sense, evolution by natural selection is correct in our current interface. But it’s not deep enough to be the correct dynamics of the conscious agents outside of this interface. If we come up with a new dynamics of consciousness, it had better look like evolution by natural selection (and general relativity and quantum field theory) when projected back into our current interface.
What do you think is the real payoff when we study consciousness and spend time to come up with a fundamental theory of consciousness?
When Michael Faraday studied electricity and magnetism in the 19th century, he did many experiments and recorded a variety of weird outcomes. Then the physicist James Clerk Maxwell looked at Faraday’s work and summarized those results in equations, now known as Maxwell’s equations. These equations became the foundation of today’s electronics industry. They were critical to the development of relativity and quantum theory in the 20th century. Once we have a mathematical theory of the dynamics of conscious agents behind our space-time interface, we will be able to alter the parameters of space-time itself. That will be unprecedented power.