Entanglements: Collected Short Works

Now available in Kindle, softcover, and hardcover versions, Entanglements assembles a decade of short works by author, scientist, entrepreneur, and inventor Mark William Davis.

The fiction includes an intimate experimental triptych on the evolution of sexual identities. A genre-defying poetic meditation on creativity and environmental holocaust competes with conventional science fiction about quantum consciousness and virtual worlds. A postmodern interrogation of the intersection of storytelling and film rounds out the collected works as a counterpoint to an introductory dive into the ethics of altruism.

The nonfiction is divided into topics ranging from literary theory to philosophical concerns of religion, science, and artificial intelligence. Legal theories are magnified to examine the meaning of liberty and autonomy. A qualitative mathematics of free will is developed over the course of two essays and contextualized as part of the algorithm of evolution. What meaning really amounts to is always a central concern, whether discussing politics, culture, or ideas.

The works show the author’s own evolution in his thinking about our entanglement with reality as driven by underlying metaphors that transect science, reason, and society. For Davis, metaphors and the constellations of words that help frame them are the raw materials of thought, and their evolution and refinement are the central narrative of our growth as individuals in a webwork of societies and systems.

Entanglements is for readers who are in love with ideas and the networks of language that support and innervate them. It is a metalinguistic swim along a polychromatic reef of thought where fiction and nonfictional analysis coexist like coral and fish in a greater ecosystem.

Mark William Davis is the author of three dozen scientific papers and patents in cognitive science, search, machine translation, and even the structure of art.… Read the rest

Sentience is Physical, Part 2

Having recently moved to downtown Portland within spitting distance of Powell’s Books, I had to wander through the bookstore despite my preference for digital books these days. Digital books are easily transported, can be instantly purchased, and can be effortlessly carried in bulk. Moreover, apps like Kindle Reader synchronize across platforms, allowing me to read wherever and whenever I want without interruption. But is there a discovery feature to the shopping experience that is missing in the digital universe? I had to find out and hit the poetry and Western Philosophy sections at Powell’s as an experiment. And I did end up with new discoveries that I took home in physical form (I see it as rude to shop brick-and-mortar and then order via Amazon/Kindle), including a Borges poetry compilation and an unexpected little volume from 1987, The Body in the Mind, by Mark Johnson, then head of the University of Oregon’s philosophy department.

A physical book seemed apropos given the topic of that second find, which focuses on the role of our physical bodies and experiences as central to the construction of meaning. Did our physical evolution and the associated requirements for survival also shape how our minds work? Psychologists and biologists would be surprised that there is any puzzlement over this likelihood, but Johnson is working against the backdrop of analytical philosophy, which treats propositional structure as the backbone of linguistic productions and of the reasoning that drives them. Mind is disconnected from body in this tradition, and subjects like metaphor are often considered “noncognitive,” which amounts to the negation of something like “reasoned through propositional logic.”

But how do we convert these varied metaphorical concepts derived from physicality into something structured that we can reason about using effective procedures?… Read the rest

Sentience is Physical

Sentience is all the rage these days. With large language models (LLMs) based on deep learning neural networks, the question-answering behavior of these systems curiously approximates talking with a smart person. Recently a member of Google’s AI team was fired after declaring one of their systems sentient. His offense? Violating public disclosure rules. I and many others who have a firm understanding of how these systems work (by predicting next words from previous productions crossed with the question token stream) are quick to dismiss the claims of sentience. But what does sentience really amount to, and how can we determine whether a machine becomes sentient?
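
To make that mechanism concrete, here is a toy sketch (mine, for illustration only, and nothing like the scale of a real LLM) of what predicting next words from previous productions looks like; the corpus is invented:

    import random
    from collections import Counter, defaultdict

    # Toy corpus standing in for the web-scale text an LLM trains on.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # Count how often each word follows each other word (a bigram model).
    transitions = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev][nxt] += 1

    def generate(prompt, length=6):
        """Repeatedly sample a likely next word given the last word produced."""
        tokens = prompt.split()
        for _ in range(length):
            counts = transitions.get(tokens[-1])
            if not counts:
                break
            words, weights = zip(*counts.items())
            tokens.append(random.choices(words, weights=weights)[0])
        return " ".join(tokens)

    print(generate("the"))  # e.g., "the cat sat on the mat and"

An LLM replaces the bigram table with billions of learned weights and a much longer conditioning window, but the generation loop has the same shape.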

Note that there are those who differentiate sentience (able to have feelings) from sapience (able to have thoughts) and consciousness (some private, subjective, phenomenal sense of self). I am willing to blend them together a bit, since the topic here isn’t narrowly trying to address the ethics of animal treatment, for example, where the distinctions can be useful.

First we have the “imitation game” Turing-test-style approach to the question of how we might ever determine whether a machine becomes sentient. If a remote machine can fool a human into believing it is a person, it must be as intelligent as a person and therefore sentient, as we presume people are. But this is a limited goal line. If the interaction covers only a narrow domain, like solving your cable internet installation problems, we don’t think of that as a sentient machine. Even across the larger domain of open-ended question answering, if the human doesn’t hit upon a revealing kind of error that a machine would make and a human would not, we remain unconvinced that the target is sentient.… Read the rest

Notes on Pumps: Sensibilities and Framing with Algorithmic Feedback

“A sensibility is one of the hardest things to talk about.” So begins Sontag’s Notes on “Camp” in the 1964 Partisan Review. And what of the political anger and disillusionment across the United States and in the developed world? What of the gnawing desire for superiority and control that accompanies authoritarian urges? What of the fear of losing power to minority ethnic and religious groups? These may be the most discussed sociopolitical aspects of our modern political sensibility since Trump’s election in 2016, when a bitter, vindictive, hostile, crude, fat thug briefly took the reins of America, then pushed and conspired to oppose the election of his successor.

What attracted his followers to him? I never encountered a George W. Bush fanatic during his presidency. Though not physically small, he talked about “compassionate conservatism” with a voice that hung in the upper register of middle pitches for men. He was neither sonorous nor mean. His eyebrows often had a look of surprise and self-doubt that was hinted at in claims he was a very reluctant candidate for president. I met people who voted for him but they seemed to accept him as an acceptable alternative to Gore or, later, to Kerry—not as a figure of passionate intrigue. Bush Jr. did receive a rally-around-the-flag effect that was based on circumstances that would later bring rebuke over the casus belli of the Iraq War. Similar sensibilities were true of the Obama years—there was a low positivity for him on the Left combined with a mildly deranged antagonism towards him on the Right.

Was the lack of Trump-like animating fanaticism due to the feeling that Bush Jr. was a compromise made to the electorate while Trump was, finally, a man who expressed the real hostility of those who vote Republican?… Read the rest

Wordle and the Hard Problem of Philosophy

I occasionally do Wordles at the New York Times. If you are not familiar, the game is very simple. You have six chances to guess a five-letter word. When you make a guess, letters that are in the correct position turn green. Letters that are in the word but in the wrong position turn yellow. The mental process for solving them is best optimized by initially choosing a word with high-frequency English letters, like “notes,” and then proceeding from there. At some point in the guessing process, one is confronted with anchoring known letters and trying to remember words that might fit the sequence. A handy virtual keyboard displayed below the word matrix color-codes the letters: black for letters you have tried, yellow for letters that are required, green for letters that fit their position, and gray for letters that remain untested. After a bit, you start to apply little algorithms and exclusionary rules to the process: What if I anchor an S at the beginning? There are no five-letter words that end in “yi” in English, etc. There is a feeling of working through these mental strategies and even a feeling of green and yellow as signposts along the way.
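
The coloring rule itself fits in a few lines. Here is a sketch of my understanding of it (greens claim their letters first, then yellows claim what remains):

    def score(guess: str, answer: str) -> str:
        """Return Wordle-style feedback: G (green), Y (yellow), - (gray)."""
        feedback = ["-"] * 5
        remaining = list(answer)
        # First pass: greens consume their matching letters.
        for i, (g, a) in enumerate(zip(guess, answer)):
            if g == a:
                feedback[i] = "G"
                remaining.remove(g)
        # Second pass: yellows for letters present elsewhere and unclaimed.
        for i, g in enumerate(guess):
            if feedback[i] == "-" and g in remaining:
                feedback[i] = "Y"
                remaining.remove(g)
        return "".join(feedback)

    print(score("notes", "tones"))  # YGYGG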

I decided this morning to write the simplest one-line Wordle helper I could and solved the puzzle in two guesses:

Sorry for the spoiler if you haven’t gotten to it yet! Here’s what I needed to do the job: a five-letter word list for English and a word frequency list for English. I could have derived the first from the second but found the first first, here. The second required me to log into Kaggle to get a good, searchable CSV list.… Read the rest
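
As a sketch of the same idea (not my actual line), assuming a hypothetical words.txt with one five-letter word per line, sorted by frequency, and inventing some constraints from a first guess:

    # Invented constraints: 't' and 'e' are in the word but not at
    # positions 3 and 4, and 'n', 'o', 's' are absent entirely.
    print([w for w in open("words.txt").read().split()
           if set("te") <= set(w) and not set("nos") & set(w)
           and w[2] != "t" and w[3] != "e"][:10])

With a frequency-sorted list, the first few survivors are usually strong next guesses.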

We Are Weak Chaos

Recent work in deep learning networks has been largely driven by the capacity of modern computing systems to compute gradient descent over very large networks. We use gaming cards with GPUs, great for parallel processing, to perform the matrix multiplications and summations that are the primitive operations central to artificial neural network formalisms. Conceptually, another primary advance is the pre-training of networks as autocorrelators, which helps smooth out later “fine-tuning” training programs over other data. There are additional contributions notable in impact that reintroduce the rather old idea of recurrent neural networks: networks with outputs attached back to inputs, creating resonant kinds of running states within the network. The original motivation for such architectures was to emulate the vast interconnectivity of real neural systems and to capture a more temporal appreciation of data, where past states affect ongoing processing, rather than a pure feed-through architecture. Neural networks are already nonlinear systems, so adding recurrence just ups the complexity of figuring out how to train them. Treating them as black boxes and using evolutionary algorithms was fashionable for me in the 90s, though the computing capabilities just weren’t up to anything other than small systems, as I found out when chastised for overusing a Cray at Los Alamos.
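
For the unfamiliar, the recurrent core is tiny. This sketch shows only the state loop, with sizes and random weights that are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    W_in = rng.normal(size=(8, 3))    # input-to-hidden weights
    W_rec = rng.normal(size=(8, 8))   # hidden-to-hidden: the recurrent loop

    h = np.zeros(8)                   # running internal state
    for x in rng.normal(size=(5, 3)):      # a short sequence of inputs
        h = np.tanh(W_in @ x + W_rec @ h)  # new state depends on old state
    print(np.round(h, 3))

Because each state folds in the previous one, the network carries a trace of the whole sequence, which is exactly what makes training it harder.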

But does any of this have anything to do with real brain systems? Perhaps. Here’s Toker et al., “Consciousness is supported by near-critical slow cortical electrodynamics,” in Proceedings of the National Academy of Sciences (with the unenviable acronym PNAS). The researchers and clinicians studied the electrical activity of macaque and human brains in a wide variety of states: epileptics undergoing seizures, macaque monkeys sleeping, people on LSD, people under the effects of anesthesia, and people with disorders of consciousness.… Read the rest

Triangulation Machinery, Poetry, and Politics

I was reading Muriel Rukeyser’s poetry and marveling at some of the lucid yet novel constructions she employs. I was trying to avoid the grueling work of comparing and contrasting Biden’s speech on the anniversary of January 6th, 2021 with the responses from various Republican defenders of Trump. Both pulled into focus the effect of semantic and pragmatic framing as part of the poetic and political processes, respectively. Sorry, Muriel, I just compared your work to the slow boil of democracy.

Reaching in interlaced gods, animals, and men.
There is no background. The figures hold their peace
In a web of movement. There is no frustration,
Every gesture is taken, everything yields connections.

There is a theory about how language works that I’ve discussed here before. In this theory, from Donald Davidson primarily, the meanings of words and phrases are tied directly to a shared interrogation of what each person is trying to convey. Imagine a child observing a dog while a parent says “dog,” fairly consistently across the several different breeds presented to the child. The child may overuse the word, calling a cat a dog at some point, at which point the parent corrects the child with “cat,” and the child proceeds along through this interrogatory process, triangulating in on the meaning of dog versus cat. Triangulation is Davidson’s term, reflecting the three parties involved: two people and the thing or idea they are discussing. In the case of human children, we also know that there are some innate preferences the child will apply during the triangulation process, like preferring “whole object” semantics to atomized ones, and assuming different words mean different things even when applied to the same object: so “canine” and “dog” must refer to the same object in slightly different ways since they are differing words, and indeed they do: dog IS-A canine but not vice versa.… Read the rest
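
As a cartoon of that loop (and only a cartoon, since Davidson’s account is not an algorithm), the correction process looks something like this, with the examples invented:

    # The child's first guesses over-extend "dog"; corrections narrow it.
    observations = [("poodle", "dog"), ("beagle", "dog"), ("tabby", "dog")]
    corrections = {"tabby": "cat"}  # the parent's interventions

    lexicon = {}
    for thing, guess in observations:
        label = corrections.get(thing, guess)  # accept a correction if given
        lexicon.setdefault(label, set()).add(thing)

    print(lexicon)  # e.g., {'dog': {'poodle', 'beagle'}, 'cat': {'tabby'}}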

A Learning Smorgasbord

Compliments of a discovery by Futurism, the paper The Autodidactic Universe, by a smorgasbord of contemporary science and technology thinkers, caught my attention for several reasons. First was Jaron Lanier as a co-author. I knew Jaron’s dad, Ellery, when I was a researcher at NMSU’s now-defunct Computing Research Laboratory. Ellery had returned to school to get his psychology PhD during retirement. In an odd coincidence, after becoming emancipated in his teens, my brother had rented a trailer next to the geodesic dome that Jaron helped design and that Ellery lived in. Ellery may have been his landlord, but I am not certain of that.

The paper is an odd piece of kit that I read over two days in fits and starts, with intervening power-lifting interludes (I recently maxed out my Bowflex and am considering next steps!). It initially has the feel of physicists reaching into machine learning as if the domain specialists had clearly missed something that the hardcore physical scientists knew all along. But that concern dissipated fairly quickly, and the paper settled into showing isomorphisms between various physical theories and the state evolution of neural networks. OK, no big deal. Perhaps they were taken by the realization that the mathematics of tensors is a useful way to describe network matrices and gradient descent learning. They then riffed on that and looked at the broader similarities between the temporal evolution of learning and quantum field theory, approaches to quantum gravity, and cosmological ideas.
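
In the plainest sense, the temporal evolution of learning is just a trajectory like the one below, sketched here with a made-up quadratic loss rather than anything from the paper:

    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.normal(size=(3, 3))   # the network's state
    target = np.eye(3)            # an arbitrary goal
    lr = 0.1                      # learning rate

    for _ in range(50):
        grad = 2 * (W - target)   # gradient of ||W - target||^2
        W -= lr * grad            # one tick of the state's time evolution

    print(np.round(W, 3))         # W has flowed toward the target

The isomorphism game is then to ask which physical systems evolve under update rules of this same form.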

The paper, being a smorgasbord, then investigates the time evolution of graphs using the lens of graph theory. The core realization, as I gleaned it, is that some graphs are more complex (visually as well as in the diversity of their internal connectivity) while others are pointlessly uniform or empty.… Read the rest
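
One crude way to make that contrast concrete (my own illustration, not the paper’s measure) is the entropy of a graph’s degree distribution, which bottoms out at zero for a perfectly uniform graph:

    import math
    from collections import Counter

    def degree_entropy(edges, n):
        """Shannon entropy of the degree distribution of an n-node graph."""
        degrees = Counter()
        for u, v in edges:
            degrees[u] += 1
            degrees[v] += 1
        dist = Counter(degrees[i] for i in range(n))
        return -sum((c / n) * math.log2(c / n) for c in dist.values())

    ring = [(i, (i + 1) % 6) for i in range(6)]  # uniform: every degree is 2
    star = [(0, i) for i in range(1, 6)]         # mixed: one hub, five leaves
    print(degree_entropy(ring, 6))  # 0.0
    print(degree_entropy(star, 6))  # ~0.65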

Distributed Contexts in the Language Game

The meaning of words and phrases can be a bit hard to pin down. Indeed, the meaning of meaning itself is problematical. I can point to a dictionary and say, well, there is where we keep the meanings of things, but that is just a record of the way in which we use the language. I’m personally fond of a kind of philosophical perspective on this matter of meaning that relies on a form of holism. That is, words and their meanings are defined by our usages of them, our historical interactions with them in different contexts, and subtle distinctive cues that illuminate how words differ and compare. Often, but not always, the words are tied to things in the world, as well, and therefore have a fastness that resists distortions and distinctions.

This is, of course, a critical area of inquiry when trying to create intelligent machines that deal with language. How do we imbue the system with meaning, represent it within the machine, and apply it to novel problems in ways that show intelligent behavior? In approaching the problem we are forced to make intelligence rigorous in some measure, since we must simulate it with logical steps.

The history of philosophical and linguistic interest in these topics is fascinating, ranging from Wittgenstein’s notion of a language game that builds up rules of use to Firth’s formalization of word collocation as critical to meaning. In artificial intelligence, the concept of collocation has been expanded further to include the interchangeability of contexts. Thus, boat and ship occur in more similar contexts than boat and bank.
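
That claim about boat, ship, and bank can be checked mechanically. Here is a toy sketch over an invented three-sentence corpus:

    import math
    from collections import Counter

    sentences = [
        "the boat sailed across the water",
        "the ship sailed across the sea",
        "the bank approved the loan",
    ]

    def context_vector(word):
        """Count words co-occurring with `word` in the same sentence."""
        ctx = Counter()
        for s in sentences:
            tokens = s.split()
            if word in tokens:
                ctx.update(t for t in tokens if t != word)
        return ctx

    def cosine(a, b):
        dot = sum(a[k] * b[k] for k in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb)

    print(cosine(context_vector("boat"), context_vector("ship")))  # ~0.86
    print(cosine(context_vector("boat"), context_vector("bank")))  # ~0.62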

A general approach to acquiring these contexts is based on the idea of dimensionality reduction in various forms.… Read the rest
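
The canonical form of that move is a truncated singular value decomposition over a word-by-context count matrix, the trick behind latent semantic analysis. The counts below are made up just to show the mechanics:

    import numpy as np

    words = ["boat", "ship", "bank"]
    # Rows: words; columns: contexts (e.g., "sailed", "sea", "loan", "teller").
    counts = np.array([
        [4, 3, 0, 0],
        [5, 4, 0, 0],
        [0, 0, 3, 4],
    ], dtype=float)

    U, S, Vt = np.linalg.svd(counts, full_matrices=False)
    reduced = U[:, :2] * S[:2]   # each word compressed to a 2-d vector

    for w, v in zip(words, reduced):
        print(w, np.round(v, 2))  # boat and ship land near each other

The reduced dimensions act as latent contexts, so words that never co-occur directly can still end up neighbors.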

Intelligent Borrowing

There has been a continuous bleed of biological, philosophical, linguistic, and psychological concepts into computer science since the 1950s. Artificial neural networks were inspired by real ones. Simulated evolution was designed around metaphorical patterns of natural evolution. Philosophical, linguistic, and psychological ideas were transferred as knowledge representations and grammars, both natural and formal.

Since computer science is a uniquely synthetic kind of science and not quite a natural one, borrowing and applying metaphors seems to be part of the normal mode of advancement in this field. There is a purely mathematical component to the field in the fundamental questions around classes of algorithms and what is computable, but there are also highly synthetic issues that arise from architectures that are contingent on physical realizations. Finally, the application to simulating intelligent behavior relies largely on three separate modes of operation:

  1. Hypothesize about how intelligent beings perform such tasks
  2. Import metaphors based on those hypotheses
  3. Given initial success, use considerations of statistical features and their mappings to improve on the imported metaphors (and, rarely, improve with additional biological insights)

So, for instance, we import a simplified model of neural networks as connected sets of weights representing variable activation or inhibition potentials, combined with sudden synaptic firing. Abstractly, we already have an interesting kind of transfer function that takes a set of input variables and maps them nonlinearly to the output variables. It’s interesting because being nonlinear means it can potentially compute very difficult relationships between the inputs and outputs.
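
That simplified model takes only a few lines to write down. Here is a sketch with illustrative weights:

    import math

    def neuron(inputs, weights, bias):
        """Weighted sum pushed through a sigmoid squashing nonlinearity."""
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-total))  # nonlinear mapping to (0, 1)

    print(neuron([1.0, 0.5], [0.8, -1.2], 0.1))  # ~0.57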

But we see limitations immediately, and they are observed in the history of the field. For instance, if you just have a single layer of these simulated neurons, the system isn’t expressive enough to compute functions that aren’t linearly separable (XOR is the classic example), so we add a few layers and then more and more.… Read the rest
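
The classic demonstration: XOR defeats a single layer, but two hidden units crack it. The weights below are set by hand, not learned:

    def step(x):
        return 1 if x > 0 else 0

    def xor(a, b):
        h1 = step(a + b - 0.5)          # hidden unit: fires on "a OR b"
        h2 = step(a + b - 1.5)          # hidden unit: fires on "a AND b"
        return step(h1 - 2 * h2 - 0.5)  # output: OR but not AND

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))  # 0, 1, 1, 0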