Consciousness as Functional Information

 

Congratulations to Anil Seth for winning the Berggruen essay prize on consciousness!  I didn’t learn the outcome until I emerged from one of the rare cellular blackout zones in modern America. My wife and I were whale watching south of Yachats (“ya-hots”) on the Oregon coast in this week of remarkable weather. We came up bupkis, nada, nil for the great migratory grays, but saw seals and sea lions bobbing in the surf, red-shouldered hawks, and one bald eagle glowering like a luminescent gargoyle atop a Sitka spruce near Highway 101. We turned around at the dunes by Florence (Frank Herbert’s inspiration for Dune where, weirdly enough, thinking machines have been banned) and headed north again, the intestinal windings of the roads causing us to swap our sunglasses in and out in synchrony with the center console of the car as it tried to understand the intermittent shadows.

Seth is always reliable, and his essay continues themes he has written about recently: a broad distrust of computational functionalism and hints at alternative models for how consciousness might arise in uniquely biological ways, like his example of neurons that fire purely for regulatory reasons. He leaves open the question of whether LLMs can become conscious, which hints at the challenges such ideas face, and he considers the moral consequences that a manifold of conscious machines would entail. He even briefly dives into the Simulation Hypothesis and its consequences for the possibility of consciousness.

I’ve included my own entry, below. It is both boldly radical and fairly mundane. I argue that functionalism has a deeper meaning in biological systems than as a mere analog of computation. A missing component of philosophical arguments about function and consciousness is found in the way evolution operates in exquisite detail, from the role of parasitism to hidden estrus, and from parental investment to ethical consequentialism.… Read the rest

Functional Information Analysis and the Chinese Room

 

I’ve been considering the implications of a new scientific law, the law of increasing functional information, in terms of how it can be applied to our thinking about various ideas. At first glance, the law says little new about the physical world. We already know about most of the levels of function described in the paper, from star formation up through the evolution of human behavior. But there may be another way of thinking about it. A quote from Margaret Boden on Searle’s famous Chinese Room argument shows how it might help:

The inherent procedural consequences of any computer program give it a toehold in semantics, where the semantics in question is not denotational, but causal.

So here we have an attack on the underlying assumption that human understanding involves semantics and meaning that a robot or computational procedure can never have. If we expand Boden’s claim about how meaning comes about to include some of Searle’s other claims, such as that the room can never know what a hamburger is in Chinese just by processing the relevant symbols, we can enlarge that toehold by including all the functional engagements that are part of the experience of coexisting with and consuming hamburgers in a Chinese-language environment.

Semantics, intentionality, and meaning—all these folk concepts we use to express how we are aware and conscious—collapse into function, with the impetus supplied by this new law. Meaning is an inherent feature of function; we just mystify it a great deal. In fact, part of the semantics associated with the Chinese Room is embedded in the transfer rules used to manipulate the symbols. Whoever developed those rules understood Chinese well enough to code them up accurately, and that understanding represents an increase in functional information.… Read the rest
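As a toy illustration (my own sketch, not from either essay): the “understanding lives in the rules” point can be made concrete by treating the Chinese Room as a bare lookup table, and then computing a Szostak-style functional information score, I = −log₂(fraction of candidate rule tables that function correctly), for this tiny case. The phrases and the two-question setup here are hypothetical, chosen only for illustration:

```python
import math
import itertools

# Toy "Chinese Room": the operator blindly applies a lookup table.
# All of the understanding lives in the table, put there by whoever wrote it.
RULES = {
    "汉堡包是什么?": "一种夹肉的面包。",  # "What is a hamburger?" -> "A bread roll with meat."
    "你好吗?": "我很好。",               # "How are you?" -> "I am fine."
}

def room(symbols: str) -> str:
    # The operator only matches shapes; no understanding is needed at this step.
    return RULES.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

# Szostak-style functional information for the rule table itself: among all
# ways of pairing these two questions with these two replies, what fraction
# "function" (answer both questions correctly)?
replies = list(RULES.values())
tables = list(itertools.product(replies, repeat=len(RULES)))  # 4 candidate tables
working = sum(all(t[i] == replies[i] for i in range(len(replies))) for t in tables)
fi_bits = -math.log2(working / len(tables))  # 1 working table out of 4 -> 2.0 bits
```

Trivial numbers, but the shape of the argument survives scaling: a rule table that handles real conversation is a vanishingly rare configuration among all possible tables, so writing one down is exactly an increase in functional information.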