Making Everything Awesome Again

Yeah, everything is boring. Streaming video, books, art—everything. It is the opposite of “everything is awesome” and, once again, it came about as a result of the internet attention economy. Or at least that is what Michelle Goldberg of the New York Times tells us, rounding up some thoughts from a literary critic as a first step and then jumping into some new social criticism that suggests the internet has ruined snobbery.

I was thinking back to the 1990s after I read the piece. I was a working computational linguist who dabbled in simulated evolution and spent time at the Santa Fe Institute studying dreamy artificial life concepts. In my downtime I was in an experimental performance art group that detonated televisions and projected the explosions on dozens of screens in a theater. I did algorithmic music composition using edge-of-chaos self-assembling systems. I read transgressive fiction and Behavioral and Brain Sciences for pleasure. I listened to Brian Eno and Jane Siberry and Hole while reading Mondo 2000. My girlfriend and I danced until our necks ached at industrial/pop-crossover clubs and house parties. An early “tech nomad” visited us at one of our desert parties. Both during my Peace Corps service in Fiji and while traveling in Europe and Japan, I had no cell phone or tablet, and I was only occasionally able to touch email at academic conferences where the hosts had kindly considered our unique culture. There was little on the internet—just a few pre-memes struggling for viability on USENET.

Everything was awesome.

But there was always a lingering doubt about the other cultural worlds we were missing, from the rise of grunge to its plateau into industrial, to the cultural behemoth cities on the coasts.…

Sentience is Physical

Sentience is all the rage these days. With large language models (LLMs) based on deep learning neural networks, the question-answering behavior of these systems curiously approximates talking with a smart person. Recently a member of Google’s AI team was fired after declaring one of its systems sentient. His offense? Violating public disclosure rules. I and many others who have a firm understanding of how these systems work—by predicting the next word from what has been produced so far, conditioned on the question’s token stream—are quick to dismiss the claims of sentience. But what does sentience really amount to, and how could we determine whether a machine becomes sentient?
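The next-word-prediction idea can be made concrete with a deliberately tiny sketch—this is not an LLM, just the same autoregressive loop reduced to bigram counting over a made-up corpus, so the mechanism (“predict the next word from previous productions”) is visible in a few lines:

```python
from collections import Counter, defaultdict

# A hypothetical toy corpus; any word sequence works here.
corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`."""
    return follows[word].most_common(1)[0][0]

def generate(seed, n=5):
    """Greedily extend `seed` by repeatedly predicting the next word."""
    out = [seed]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(predict_next("cat"))   # → "sat"
print(generate("the", 4))    # → "the cat sat on the"
```

An LLM replaces the count table with a learned distribution over an enormous context window, but the generation loop—condition on everything so far, emit the likeliest continuation, repeat—is the same, which is why fluent output alone says little about sentience.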

Note that there are those who differentiate sentience (the ability to have feelings) from sapience (the ability to have thoughts) and consciousness (some private, subjective, phenomenal sense of self). I am willing to blend them together a bit, since the topic here isn’t narrowly the ethics of animal treatment, for example, where the distinctions can be useful.

First we have the “imitation game” Turing-test-style approach to the question of how we might ever determine whether a machine has become sentient. If a remote machine can fool a human into believing it is a person, it must be as intelligent as a person and therefore sentient, as we presume of people. But this is a limited goal. If the interaction covers only a narrow domain like solving your cable internet installation problems, we don’t think of that as a sentient machine. Even in the larger domain of open-ended question answering, if the human doesn’t hit upon a revealing kind of error that a machine would make but a human would not, we remain unconvinced that the target is sentient.…