Sentience is Physical, Part 3: Now with Flaming Birds

Moving to Portland brings all the positives and negatives of urban living. A notable positive is access to the arts, and I’m looking forward to catching Stravinsky’s The Firebird this weekend with the Oregon Symphony. Also on the program is a new work by composer Vijay Iyer, who has a history of incorporating concepts derived from African rhythms, hip hop, and jazz into his compositions. I took the opportunity this morning to read his 1998 dissertation from Berkeley, which capped off his interdisciplinary program in the cognitive science of music. I’ll say up front that I’m not sure it rises to the level of a dissertation, since it does not really provide any significant new results. He describes a microtiming programming environment coded in MAX, but offers no significant results or novel experimental testing of the system or of human perception of microtiming. What the dissertation does do, however, is give a lucid overview of, and some new insights into, how cognition and music interact, as well as point toward ways to test the theories that Iyer develops over the course of the work. A too-long master’s thesis might be a better category for it, but I’ve never been exposed to musicology dissertations, so perhaps this level of work is normal.

Iyer’s core thesis is that musical cognition and expression arise from physical engagement with our environments combined with cultural situatedness. That is, rhythm is tied to a basic “tactus,” a spontaneously perceived regular pulse or beat that is physically associated with walking, heartbeats, tapping, chewing, and so forth. Similarly, the culture of musical production, as well as the history that informs a given piece, combines to influence how music is produced and experienced.…

Sentience is Physical

Sentience is all the rage these days. With large language models (LLMs) based on deep neural networks, the question-answering behavior of these systems curiously approximates talking with a smart person. Recently a member of Google’s AI team was fired after declaring one of its systems sentient. His offense? Violating public disclosure rules. I, and many others who have a firm understanding of how these systems work (predicting the next word from the preceding text, crossed with the question’s token stream), am quick to dismiss the claims of sentience. But what does sentience really amount to, and how could we determine whether a machine has become sentient?
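That “predicting next words” mechanism can be illustrated with a toy sketch. This is not how an LLM is implemented (real models use deep networks over billions of parameters, and this corpus is made up), but it shows the same autoregressive principle of producing each word from the words before it:

```python
# Toy illustration only: a bigram counter that "predicts the next word
# from previous productions." LLMs do this with deep networks instead
# of raw counts, but the autoregressive loop is the same idea.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Greedy choice: the most frequent continuation seen in training.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (seen twice, versus "mat" once)
```

Scaled up enough, with enough context, this kind of conditional prediction produces strikingly person-like answers without anything we would call feeling.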

Note that there are those who differentiate sentience (the capacity for feelings) from sapience (the capacity for thought) and consciousness (a private, subjective, phenomenal sense of self). I am willing to blend them together a bit, since the topic here isn’t narrowly the ethics of animal treatment, for example, where the distinction can be useful.

First we have the “imitation game,” the Turing-test-style approach to determining whether a machine has become sentient. If a remote machine can fool a human into believing it is a person, it must be as intelligent as a person and therefore sentient, as we presume of people. But this is a limited goal line. If the interaction covers only a narrow domain, like troubleshooting your cable internet installation, we don’t think of the machine as sentient. Even over the larger domain of open-ended question answering, if the human never hits upon the kind of revealing error that a machine would make but a human would not, we remain unconvinced that the target is sentient.…

Intelligent Borrowing

There has been a continuous bleed of biological, philosophical, linguistic, and psychological concepts into computer science since the 1950s. Artificial neural networks were inspired by real ones. Simulated evolution was designed around metaphorical patterns of natural evolution. Philosophical, linguistic, and psychological ideas transferred into knowledge representation schemes and grammars, both natural and formal.

Since computer science is a uniquely synthetic kind of science, and not quite a natural one, borrowing and applying metaphors seems to be part of its normal mode of advancement. There is a purely mathematical component in the fundamental questions around classes of algorithms and what is computable, but there are also highly synthetic issues that arise from architectures contingent on physical realizations. Finally, the application to simulating intelligent behavior relies largely on three separate modes of operation:

  1. Hypothesize about how intelligent beings perform such tasks
  2. Import metaphors based on those hypotheses
  3. Given initial success, use considerations of statistical features and their mappings to improve on the imported metaphors (and, rarely, improve with additional biological insights)

So, for instance, we import a simplified model of neural networks: connected sets of weights representing variable activation or inhibition potentials, combined with sudden synaptic firing. Abstractly, we already have an interesting kind of transfer function, one that takes a set of input variables and maps them nonlinearly to output variables. It’s interesting because nonlinearity means it can potentially compute very difficult relationships between inputs and outputs.
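The imported metaphor is compact enough to sketch in a few lines. This is a minimal, illustrative version (the weights and inputs are arbitrary values I chose, not anything from a trained network): a layer of simulated neurons is just a weight matrix, a bias vector, and a nonlinear “firing” function.

```python
# A minimal sketch of the neural metaphor: weighted sums passed
# through a nonlinear transfer function. Weights, biases, and inputs
# are arbitrary illustrative values.
import math

def sigmoid(x):
    # Smooth stand-in for sudden synaptic firing: squashes any real
    # input into (0, 1). This is what makes the mapping nonlinear.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each output neuron: nonlinear transfer of a weighted sum of the
    # inputs (positive weights excite, negative weights inhibit).
    return [
        sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

out = layer([1.0, 0.5], weights=[[0.8, -0.4], [0.3, 0.9]], biases=[0.0, -0.2])
print(out)  # two activations, each strictly between 0 and 1
```

Without the sigmoid this would be ordinary linear algebra; the nonlinearity is the whole reason stacking such layers buys additional computational power.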

But we see limitations immediately, and these are borne out in the history of the field. For instance, a single layer of these simulated neurons isn’t expressive enough to compute functions that aren’t linearly separable, so we add a few layers, and then more and more.…
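The classic illustration of that single-layer limitation is XOR, which no single threshold unit can compute because its outputs aren’t linearly separable. One hidden layer fixes it; here is a sketch with hand-picked weights (my own illustrative values) using simple threshold units:

```python
# XOR with one hidden layer of threshold units. No single-layer
# weight setting can compute XOR (Minsky and Papert's classic point),
# but two hand-picked hidden neurons suffice. Weights are illustrative.
def step(x):
    # Hard threshold "firing": 1 if the weighted sum clears the bias.
    return 1 if x > 0 else 0

def xor_two_layer(a, b):
    h1 = step(a + b - 0.5)   # hidden neuron 1 fires for (a OR b)
    h2 = step(a + b - 1.5)   # hidden neuron 2 fires for (a AND b)
    # Output neuron: fires for (OR and not AND), which is XOR.
    return step(h1 - h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_two_layer(a, b))  # prints the XOR truth table
```

The hidden layer re-represents the inputs so that the final unit sees a linearly separable problem, which is the basic reason depth matters.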