Consciousness as Functional Information


Congratulations to Anil Seth for winning the Berggruen essay prize on consciousness! I didn’t learn the outcome until I emerged from one of the rare cellular blackout zones in modern America. My wife and I were whale watching south of Yachats (“ya-hots”) on the Oregon coast during this week of remarkable weather. We came up bupkis, nada, nil for the great migratory grays, but saw seals and sea lions bobbing in the surf, red-shouldered hawks, and one bald eagle glowering like a luminescent gargoyle atop a Sitka spruce near Highway 101. We turned around at the dunes by Florence (weirdly enough, Frank Herbert’s inspiration for Dune, where thinking machines are banned) and headed north again, the intestinal windings of the road causing us to swap our sunglasses in and out in synchrony with the center console of the car as it tried to make sense of the intermittent shadows.

Seth is always reliable, and his essay continues themes he has written about recently. There is a broad distrust of computational functionalism, along with hints of alternative models for how consciousness might arise in uniquely biological ways, like his example of neurons that might fire purely for regulatory reasons. There are unanswered questions about whether LLMs can become conscious, hinting at the challenges such ideas face, and about the moral consequences that manifold conscious machines would entail. He even dives briefly into the Simulation Hypothesis and its consequences for the possibility of consciousness.

I’ve included my own entry, below. It is both boldly radical and fairly mundane. I argue that functionalism has a deeper meaning in biological systems than as a mere analog of computation. A missing component of philosophical arguments about function and consciousness is found in the way evolution operates in exquisite detail, from the role of parasitism to hidden estrus, and from parental investment to ethical consequentialism.… Read the rest

Time, Consciousness, and Joy in 2025

A glorious 2025 comes roaring in despite the nastiness of contemporary American and (some) worldwide politics. Everyone’s angry, despite the ingenious control of murderous pathogens, the brilliant performance of the post-COVID economic recovery in the United States, dropping crime rates, and the continued progress on reducing and eliminating worldwide poverty. But these are aggregate measures of social and scientific success, and far too many individuals remain discontented with their own status and fear that social forces beyond their control are limiting their success and happiness.

In this, one must be circumspect: reading out stats that contradict the mood is not reading the room. So, instead, I try to focus on unexpected innovations that lead us to defocus on our own situational context and instead find a larger reimagining. This is a modern therapy that isn’t dismissive of the effectiveness of our highly successful institutions of scientific achievement, peace-preserving world orders, and liberal democracies that effectively balance individual freedoms against order. It’s a celebration of them, instead.

I give you two new joys as the new year starts to build. First, we have the novel realization that dark energy and matter might be better explained by relativistic distortions of space-time based on the quantities of matter in denser versus void-like regions of the universe. Here’s Anton Petrov with a primer:

This certainly simplifies things if true, but it needs to be observationally verified and reconsidered if it doesn’t pan out. There’s that underlying joy in science: everything is tentative because we are all flawed.

The second development changes from an external focus on the monumental scale of the universe to something much more human. I’ve previously covered the curious theory of quantum consciousness proposed by Roger Penrose and Stuart Hameroff, but there have been some recent developments.… Read the rest

Sentience is Physical

Sentience is all the rage these days. With large language models (LLMs) based on deep learning neural networks, the question-answering behavior of these systems takes on a curious approximation of talking with a smart person. Recently a member of Google’s AI team was fired after declaring one of their systems sentient. His offense? Violating public disclosure rules. I and many others who have a firm understanding of how these systems work (by predicting next words from their own previous productions combined with the question’s token stream) are quick to dismiss the claims of sentience. But what does sentience really amount to, and how can we determine whether a machine becomes sentient?
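The prediction mechanism described above can be sketched with a toy stand-in. The tiny corpus, bigram counts, and greedy decoding below are illustrative assumptions on my part, not how any production LLM works; real models use learned transformer weights over vast corpora, but the autoregressive loop has the same shape: predict a next token, append it, repeat.

```python
# Toy sketch of autoregressive next-word prediction: the model predicts
# the next token from the tokens seen so far, feeding each prediction
# back in as input. Illustrative only; real LLMs learn this mapping
# with neural networks rather than bigram counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(tokens):
    """Return the most frequent continuation of the last token seen."""
    candidates = follows.get(tokens[-1])
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

def generate(prompt, steps=3):
    """Append predicted tokens one at a time, feeding each back in."""
    tokens = prompt.split()
    for _ in range(steps):
        nxt = predict_next(tokens)
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the cat"))  # → "the cat sat on the"
```

The point of the sketch is the loop structure: nothing in it consults meaning or experience, only statistics over prior text, which is why knowing the mechanism tempers claims of sentience.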

Note that there are those who differentiate sentience (able to have feelings), from sapience (able to have thoughts), and consciousness (some private, subjective phenomenal sense of self). I am willing to blend them together a bit since the topic here isn’t narrowly trying to address the ethics of animal treatment, for example, where the distinction can be useful.

First we have the “imitation game” Turing test-style approach to the question of how we might ever determine whether a machine has become sentient. If a remote machine can fool a human into believing it is a person, it must be as intelligent as a person and therefore sentient, as we presume of people. But this is a limited goal line. If the interaction covers only a limited domain, like solving your cable internet installation problems, we don’t think of that as a sentient machine. Even across a larger domain of open-ended question answering, if the human doesn’t hit upon a revealing kind of error that a machine would make but a human would not, we remain unconvinced that the target is sentient.… Read the rest

Two Points on Penrose, and One On Motivated Reasoning

Sir Roger Penrose is, without doubt, one of the most interesting polymaths of recent history. Even where I find his ideas fantastical, they are most definitely worth reading and understanding. Sean Carroll’s Mindscape podcast interview with Penrose from early January of this year is a treat.

I’ve previously discussed the Penrose-Hameroff conjectures concerning wave function collapse and their implication of quantum operations in the microtubule structure of the brain. I also used the conjecture in a short story. But the core driver for Penrose’s original conjecture, namely that algorithmic processes can’t explain human consciousness, has always been a claim in search of support. Equally difficult is pushing consciousness into the sphere of quantum phenomena, which tend to show random, rather than directed, behavior. Randomness doesn’t clearly relate to the “hard problem” of consciousness, which concerns the experience of being conscious.

But take the idea that since mathematicians can see the truth of statements that Gödel incompleteness blocks any consistent formal system from proving, our brains must be different from Turing machines or collections of them. Our brains are likely messy and not theorem-proving machines per se, despite operating according to logico-causal processes. Indeed, throw in an active analog of biological evolution, based on variation-and-retention of ideas and insights that might actually carry a bit of pseudo-randomness, and there is no reason to doubt that we are capable of the kind of system transcendence that Penrose is looking for.
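The variation-and-retention dynamic invoked here can be sketched as a toy search, in the spirit of Richard Dawkins’s “weasel” program: random variation generates candidates, and selective retention keeps whichever scores best. The target string, mutation rate, and brood size below are illustrative assumptions, not anything from the post.

```python
# Minimal variation-and-retention sketch: mutate a candidate string at
# random (variation), keep the best-scoring variant (retention), and
# repeat until the target is reached. Illustrative parameters only.
import random

random.seed(42)
TARGET = "SYSTEM TRANSCENDENCE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(candidate):
    """Count positions that match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    """Randomly vary each character with small probability (variation)."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )

current = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while current != TARGET:
    generation += 1
    # Produce a brood of variants, then retain the fittest one.
    brood = [mutate(current) for _ in range(100)]
    best = max(brood, key=score)
    if score(best) >= score(current):
        current = best

print(generation, current)
```

Accepting ties (`score(best) >= score(current)`) lets the search drift across equally fit variants, a crude analog of the pseudo-randomness mentioned above; the cumulative retention step is what lets blind variation climb toward a distant target.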

Note that this doesn’t in any way impact the other horn of Penrose-Hameroff concerning the measurement problem in quantum theory, but there is no reason to suspect that quantum collapse is necessary for consciousness. It might flow the other way, though, and Penrose has created the Penrose Institute to look experimentally for evidence about these effects.… Read the rest