Death, Healing, and Language Games

A phone call came in the early afternoon in late August: she was reclined on her day bed and she was dead. She had lain down for a nap and didn’t wake up. In the subsequent weeks there has been a rallying of the families, grief, tremendous effort, flights before dawn, and scripted expressions of condolence. In my youth I had necessarily been a rules deconstructor, going barefoot to a wedding, challenging expectations, trying to find novel ways to intervene—sometimes boorishly, I’m certain. But now I prize cheerful clarity and simply volunteer to do whatever is needed to reach our collective goals. Remember: freedom and coordination.

In moments like this there are somewhat scripted conventions for discussing the hard matters of duties and feelings. These language games have arisen organically from contending forces, from Anglo-American sentimentality to the influence of organized religion, and they serve to facilitate life transitions. And now they have been absorbed and summarized by large language models (LLMs) like ChatGPT, which have been trained on masses of written content from the web to the point that they reproduce these conventions with reliable consistency. An emergency room doctor reports in the New York Times that ChatGPT does a better job than he does at the hard work of conveying bad news according to best practices for bedside manner. He also notes that LLMs are remarkably reliable for refining the scripted discussion of symptoms and medical diagnoses.
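Purely as an illustration of the kind of use the doctor describes—my own sketch, not his workflow—here is how one might ask an LLM to draft the opening of a hard conversation. It assumes the OpenAI Python client with an API key in the environment; the model name and the prompts are illustrative assumptions, not anything from the report.

```python
# Hedged sketch: ask an LLM to draft conventionally phrased language for
# delivering difficult news. Assumes the OpenAI Python client (openai>=1.0)
# and OPENAI_API_KEY set in the environment; model and prompts are assumptions.
from openai import OpenAI

client = OpenAI()

def draft_bad_news(diagnosis: str, audience: str) -> str:
    """Return a draft script for conveying hard news in plain, compassionate terms."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; substitute whatever is available
        messages=[
            {"role": "system",
             "content": "You are a clinician following best practices for "
                        "compassionate, clear delivery of difficult news."},
            {"role": "user",
             "content": f"Draft what I might say to {audience} about {diagnosis}, "
                        "briefly, in plain language, leaving room for questions."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_bad_news("a new diagnosis of heart failure", "an anxious family"))
```

The draft would still need a human edit, but it starts from the scripted conventions rather than from a blank page, which is the point being made above.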

So there is at least one counter to the “slop economy”: some guidance for harried professionals trying to do a good job at the delicate threshold of personal pain and fear. The stochastic parroting is suddenly desirable insofar as it is parroting best practices and conventions.

Sentience is Physical

Sentience is all the rage these days. With large language models (LLMs) based on deep-learning neural networks, the question-answering behavior of these systems takes on a curious approximation of talking with a smart person. Recently a member of Google’s AI team was fired after declaring one of their systems sentient. His offense? Violating public disclosure rules. I and many others who have a firm understanding of how these systems work—by predicting next words from previous productions crossed with the question token stream—are quick to dismiss the claims of sentience. But what does sentience really amount to, and how can we determine whether a machine has become sentient?
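To make that one-sentence description of the mechanism concrete, here is a minimal sketch of the autoregressive loop: the prompt (the “question token stream”) plus everything produced so far is fed back in, and the model’s only job is to score candidate next tokens. The bigram table below is a toy stand-in of my own—a real LLM conditions on the entire context, not just the most recent token—but the loop structure is the same.

```python
# Toy sketch of next-token prediction and greedy decoding.
# A real LLM conditions on the whole prompt plus everything generated so far;
# this stand-in "model" conditions only on the most recent token for brevity.
from collections import Counter, defaultdict

def train_bigram(corpus_tokens):
    """Count next-token frequencies for each token (a toy stand-in for a model)."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(model, prompt_tokens, max_new_tokens=10):
    """Greedy decoding: repeatedly append the most likely next token."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        candidates = model.get(tokens[-1])
        if not candidates:          # nothing ever followed this token in the corpus
            break
        next_token, _ = candidates.most_common(1)[0]
        tokens.append(next_token)   # each prediction is appended and fed back in
    return tokens

corpus = "we are sorry for your loss and we are here for you".split()
model = train_bigram(corpus)
print(" ".join(generate(model, ["we"])))
```

Nothing in the loop knows anything about meaning; it only extends the token stream in statistically plausible ways, which is why fluent output alone settles nothing about sentience.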

Note that there are those who differentiate sentience (able to have feelings) from sapience (able to have thoughts) and consciousness (some private, subjective, phenomenal sense of self). I am willing to blend them together a bit, since the topic here isn’t narrowly trying to address the ethics of animal treatment, for example, where the distinction can be useful.

First we have the “imitation game,” Turing-test-style approach to the question of how we might ever determine whether a machine has become sentient. If a remote machine can fool a human into believing it is a person, it must be as intelligent as a person and therefore sentient, as we presume people are. But this is a limited goal line. If the interaction covers only a limited domain, like solving your cable internet installation problems, we don’t think of that as a sentient machine. And even over the larger domain of open-ended question answering, if the human simply never hits upon the kind of revealing error that a machine would make and a person would not, we may still remain unconvinced that the target is sentient.