Causally Emergent vs. Divine Spark Murder Otherwises

One might claim that a metaphysical commitment to strong determinism is only porous to quantum indeterminacy or atomic indeterminacy (decay behavior, for instance). Those two can be lumped together and simply called subatomic indeterminacy or something. Everything else is conceptually derivative of state evolution and therefore deterministic. So does that mean that my model for R fails unless I can invoke these two candidates? My suggestion of amplifying thermodynamic noise doesn’t really cut the mustard (an amusing semantic drift from pass muster, perhaps) because the noise only appears random, being characterizable solely by macroscopic variables like pressure and temperature, while the underlying molecular swirl is not actually random at all.

But I can substitute an atomic decay counter for my thermodynamic amplifier, or use a quantum random number generator based on laser measurements of vacuum fluctuations. There, I’ve righted the ship, though I’ve jettisoned my previous claim that randomness is not necessary for R’s otherwises. Now it is necessary, but it is still not sufficient: we also need a device like the generative subsystem that uses randomness in a non-arbitrary way to revise decisions. We do encounter a difficulty in porting subatomic indeterminacy into a human analog, of course, though some have given it a try.
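As a minimal sketch of what I mean, with a hypothetical true-random source standing in for the decay counter or QRNG (the names and the scoring function here are mine, purely for illustration): the randomness only proposes alternatives, while a separate evaluation keeps or rejects them.

```python
import secrets  # stand-in for a hardware decay-counter or vacuum-fluctuation QRNG

def true_random_bit():
    # Placeholder: in the architecture above this would read a physical source,
    # not the operating system's pseudo-random pool.
    return secrets.randbits(1)

def revise(decision, candidates, score):
    """Non-arbitrary use of randomness: a random draw only proposes an
    alternative; the proposal is kept only if it scores at least as well."""
    index = sum(true_random_bit() << i for i in range(4)) % len(candidates)
    proposal = candidates[index]
    return proposal if score(proposal) >= score(decision) else decision

# Toy usage: 'score' encodes the agent's reasons, so chance proposes but does not dispose.
options = ["stay", "leave", "wait", "ask"]
choice = "stay"
choice = revise(choice, options, score=lambda d: {"stay": 1, "leave": 3, "wait": 2, "ask": 3}[d])
print(choice)
```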

But there is some new mathematics for causal emergence that fits well with my model. In causal emergence, ideas like necessity and sufficiency for causal explanations can be shown to have properties in macroscale explanations that are not present at microscales. The model used is a simple Markov chain that flips between two states, and information theory is applied to examine a range of conceptual structures for causation, running from David Hume’s train of repeating objects (when one damn thing comes after another and then again and again, we may have a cause) up through David Lewis’s notion of counterfactuals in alternative probabilistic universes (could it have happened that way in all possible worlds?),… Read the rest
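A rough sketch of the core calculation, hedged accordingly: this uses the uniform-intervention form of effective information from the causal-emergence literature, and the transition matrices are my own illustrative toys rather than anything from that work. The point is only that a coarse-grained, two-state description can carry more causal information than the noisy microscale beneath it.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def effective_information(T):
    # Mutual information between a uniform (maximum-entropy) intervention on
    # the current state and the distribution over next states it induces.
    T = np.asarray(T, dtype=float)
    return entropy(T.mean(axis=0)) - np.mean([entropy(row) for row in T])

# Illustrative microscale: states 0..6 all map to state 7, which hops back
# to one of 0..6 uniformly at random.
micro = np.zeros((8, 8))
micro[:7, 7] = 1.0
micro[7, :7] = 1.0 / 7.0

# Macroscale after grouping {0..6} -> A and {7} -> B: a chain that flips
# deterministically between two states.
macro = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

print(effective_information(micro))  # ~0.54 bits
print(effective_information(macro))  # 1.0 bit: the coarse story is causally "stronger"
```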

Indeterminacy and the Ethics of Emergence

Continuing on with this theme of an ethics of emergence, can we formulate something interesting that does better than just assert that freedom and coordination are inherent virtues in this new scheme? And what does that mean anyway in the dirty details? We certainly see natural, emergent systems that exhibit tight regulatory control where stability, equilibrium, and homeostasis prevent dissipation, like those hoped-for fascist organismic states. There is not much that is free about these lower-level systems, but we think that, though they are necessary, they are insufficient for the higher-order challenges of a statistically uncertain world. And that uncertainty is what drives the emergence of control systems in the first place. The control breaks out at some level, though, in a kind of teleomatic inspiration, and applies stochastic exploration to the adaptive landscape. Freedom then arises as an additional control level, itself emergent.
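To make the contrast concrete, here is a toy sketch (the landscape, step sizes, and noise level are all invented for illustration): a purely homeostatic hill-climber stays pinned to its stable local optimum, while adding stochastic exploration lets the same controller find higher ground on an uncertain adaptive landscape.

```python
import random

def fitness(x):
    # Toy adaptive landscape with a local peak near 2 and a higher peak near 8.
    return max(0.0, 5 - (x - 2) ** 2) + max(0.0, 9 - 0.5 * (x - 8) ** 2)

def climb(x, steps, noise=0.0):
    """Greedy homeostatic control when noise=0; adding noise gives the
    stochastic exploration that can leave a merely stable local optimum."""
    for _ in range(steps):
        candidate = x + random.choice([-0.1, 0.1]) + random.gauss(0, noise)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

random.seed(1)
print(round(climb(2.0, 2000, noise=0.0), 2))  # stays pinned near the local peak at 2
print(round(climb(2.0, 2000, noise=2.0), 2))  # exploration tends to find the higher peak near 8
```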

We also have this lurking possibility that emergent systems may not be explainable in the manner that we have come to expect scientific theories to work. Being highly contingent, they can only be explained by specifying the particulars of their contingent emergence, not by the elegant little explanatory theories that we now have in fields like physics. Stephen Wolfram, and the Santa Fe Institute folks as well, have investigated this idea, but it has remained inconclusive in its predictive power so far, though that may be changing.

There is an interesting alternative use for deep learning models and, more generally, for enormous simulation systems: when emergent complexity is daunting, use simulation to uncover the spectrum of relationships that govern complex system behavior.

Can we apply that to this ethics or virtue system and gain insights from it?… Read the rest

When the Cranes Cry

The crane has a symbolic resonance in Celtic mythology. A magician, assuming an elaborate pose—one eye open and one leg drawn up—was said to see into the otherworld, just as the crane itself moved from sky to land to water. But there is the other meaning of the word crane: the ancient lifting contraption that helped build Greece and likely had a role in Egypt and Sumeria before that. And now they protrude into the urban sky, raising up our buildings and even other cranes as we densify our cities. It was this mechanical meaning that Dan Dennett at Tufts chose to contrast with conceptual skyhooks, the unsupported contrivances that save protagonists in plays by dangling gods above the stage. For Dennett, the building crane is the metaphor we should apply to the mindless, simple algorithm of evolution. The algorithm raises up species and thus creates our mysterious ideas about meaning and purpose. No skyhooks or Deus ex Machina are needed.

Dennett passed away at 82 in Maine, leaving a legacy as a public intellectual who engaged in the pursuit of reason throughout his adult career. He was committed to the idea that this world—this teeming ensemble of living matter—is intrinsically miraculous, built up by something dead simple into all the convolutions and perilous ideas that we now use to parse its mysteries. He was one of the Four Horsemen of the Apocalypse during the so-called New Atheism craze of 2008-2010, along with Richard Dawkins, Christopher Hitchens, and Sam Harris, but even then he was committed to the crane metaphor to displace these ancient skyhooks of belief rather than, say, a satirical impact analysis of religion a la Hitchens.

There is another phrase that Dennett championed in Darwin’s Dangerous Idea: Evolution and the Meanings of Life: universal acid.… Read the rest

Be Persistent and Evolve

If we think about the evolution of living things we generally start from the idea that evolution requires replicators, variation, and selection. But what if we loosened that up to the more everyday semantics of the word “evolution” that we use when we talk about the evolution of galaxies or of societies or of crystals? Each changes, grows, contracts, and has some kind of persistence that is mediated by a range of internal and external forces. For crystals, the availability of heat and access to the necessary chemicals are key. For galaxies, elements and gravity and nuclear forces are paramount. In societies, technological invention and social revolution overlay the human replicators and their biological evolution. Should we make a leap and just declare that there is some kind of impetus or law to the universe such that when there are composable subsystems and composition constraints, there will be an exploration of the allowed state space for composition? Does this add to our understanding of the universe?

Wong et al. say exactly that in “On the roles of function and selection in evolving systems” in PNAS. The paper reminds me of the various efforts to explain genetic information growth given raw conceptions of entropy and, indeed, some of those papers appear in the citations. It was once considered an intriguing problem how organisms become increasingly complex in the face of, well, the grinding dissolution of entropy. It wasn’t really that hard for most scientists: Earth receives an enormous load of solar energy that supports the push of informational systems towards negentropy. But, to the earlier point about composability and constraints, the energy arrives in a proportion that supports the persistence of systems that are complex.… Read the rest
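A toy sketch of the composability-plus-constraints intuition, and emphatically not the formalism in Wong et al.: objects are built by joining things already in the pool, a constraint limits which joins are allowed, and only compositions that "persist" are retained as parts for further composition.

```python
import random

# Toy version of "composable subsystems + composition constraints": objects are
# strings built by joining things already in the pool, and a join is allowed
# only if the two parts share no characters (the constraint).
def allowed(a, b):
    return not set(a) & set(b)

def persists(obj):
    # Stand-in for selection for function/persistence: in this toy world,
    # compositions that end in a vowel happen to survive.
    return obj[-1] in "aeiou"

random.seed(0)
pool = list("stone")          # primitive building blocks
for _ in range(200):
    a, b = random.sample(pool, 2)
    new = a + b
    if allowed(a, b) and persists(new) and new not in pool:
        pool.append(new)      # surviving compositions become parts for further composition

print(sorted(pool, key=len)[-5:])  # the most complex persisting compositions found
```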

Find the Alien

Assembly Theory (AT) (original paper) is some new theoretical chemistry that tries to assess the relative complexity of the molecular underpinnings of life, even when the chemistry might be completely alien. For instance, if we send a probe to a Jovian moon and there are new microscopic creatures in the ocean, how will we figure that out? In AT, it is assumed that all living organisms require a certain molecular complexity in order to function, since that is a minimal requirement for life on Earth. The chemists experimentally confirmed that mass spectrometry is a fairly reliable way of differentiating the complexity of living things and their byproducts from other substances. Of course, they only have Earthly living things to test, but they had no false positives in their comparison set of samples, though some substances like beer scored unusually high in the spectral analysis. The theory is that when a mass spec ionizes a sample and routes it through magnetic and electric fields, the complexity of the original molecules is represented in the complexity of the spray of molecular masses recorded by the detectors.

But what is “complexity” exactly? There are a great number of candidates, as Seth Lloyd notes in this little round-up paper that I linked to previously. Complexity intuitively involves something like a trade-off between randomness and uniformity, but it also reflects internal repetition with variety. There is a mathematical formalism that in full attribution is “Solomonoff-Chaitin-Kolmogorov Complexity”—but we can just call it algorithmic complexity (AC) for short—that has always been an idealized way to think about complexity: take the smallest algorithm that can produce a pattern, and its length in bits is the complexity.… Read the rest
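AC is uncomputable in general, but a standard practical proxy is the length a general-purpose compressor can squeeze a pattern down to. A minimal sketch, using zlib purely as that stand-in, shows the intuitive ordering: uniform strings compress to almost nothing, random ones barely at all, and repetition-with-variety sits in between.

```python
import random, zlib

def compressed_length(data: bytes) -> int:
    # Crude, computable stand-in for algorithmic complexity: the length of the
    # pattern after a general-purpose compressor has done its best.
    return len(zlib.compress(data, 9))

random.seed(42)
uniform = b"A" * 1000                                      # all the same symbol
noisy = bytes(random.randrange(256) for _ in range(1000))  # close to incompressible
motif = (b"ABRACADABRA" * 91)[:1000]                       # repetition with internal variety

for name, data in [("uniform", uniform), ("random", noisy), ("repeated motif", motif)]:
    print(name, compressed_length(data))
```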

Follow the Paths

There is a little corner of philosophical inquiry that asks whether knowledge is justified based on all our other knowledge. This epistemological foundationalism rests on the concept that if we keep finding justifications for things we can literally get to the bottom of it all. So, for instance, if we ask why we think there is a planet called Earth, we can find reasons for that belief that go beyond just “’cause I know!” like “I sense the ground beneath my feet” and “I’ve learned empirically-verified facts about the planet during my education that have been validated by space missions.” Then, in turn, we need to justify the idea that empiricism is a valid way of attaining knowledge with something like, “It’s shown itself to be reliable over time.” This idea of reliability is itself changing and variable, however, since scientific insights and theories have varied depending on the domain in question and the timeframe. And why should we in fact value our senses as being reliable (or mostly reliable) given what we know about hallucinations, apophenia, and optical illusions?

There is also a curious argument in philosophy, the Evolutionary Argument Against Naturalism (EAAN), that parallels this skepticism about the reliability of our perceptions, our reason, and the “warrants” for our beliefs. I’ve previously discussed some aspects of EAAN, but it is, amazingly, still discussed in academic circles. In a nutshell, it asserts that reliable reasoning can’t have evolved because evolution selects for adaptive behavior, not for good, truthful ways of thinking about the world.

While it may seem obvious that the evolutionary algorithm does not deliver or guarantee completely reliable faculties for discerning true things from false things, the notion of epistemological pragmatism is a direct parallel to evolutionary search (as Fitelson and Sober hint).… Read the rest
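A minimal sketch of that parallel, with everything about it invented for illustration: a candidate "belief" is varied, tested against noisy experience, and retained only when it keeps working, which is pragmatism run as a simple (1+1) evolutionary search.

```python
import random

random.seed(2)
truth = 0.73                                       # the hidden fact about the world
observe = lambda: truth + random.gauss(0, 0.05)    # noisy experience

def error(belief, trials=50):
    # Pragmatic test: how badly does the belief predict what we keep observing?
    return sum(abs(belief - observe()) for _ in range(trials)) / trials

belief = random.random()                           # start with an arbitrary guess
for _ in range(300):
    mutant = belief + random.gauss(0, 0.1)         # variation
    if error(mutant) < error(belief):              # selection by "what works"
        belief = mutant                            # retention
print(round(belief, 2))  # settles near 0.73 without ever being "justified" from the ground up
```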

Sentience is Physical, Part 3: Now with Flaming Birds

Moving to Portland brings all the positives and negatives of urban living. A notable positive is access to the arts and I’m looking forward to catching Stravinsky’s The Firebird this weekend with the Oregon Symphony. Part of the program is a new work by composer Vijay Iyer who has a history of incorporating concepts derived from African rhythms, hip hop, and jazz into his compositional efforts. I took the opportunity this morning to read his 1998 dissertation from Berkeley that capped off his interdisciplinary program in the cognitive science of music. I’ll just say up front that I’m not sure it rises to the level of a dissertation since it does not really provide any significant new results. He notes the development of a microtiming programming environment coded in MAX but doesn’t give significant results or novel experimental testing of the system or of human perceptions of microtiming. What the dissertation does do, however, is give a lucid overview and some new insights about how cognition and music interact, as well as point towards ways to test the theories that Iyer develops during the course of his work. A too-long master’s thesis might be a better category for it, but I’ve never been exposed to musicology dissertations so perhaps this level of work is normal.

Iyer’s core thesis is that musical cognition and expression arise from a physical engagement with our environments combined with cultural situatedness. That is, rhythm is tied to a basic “tactus” or spontaneously perceived regular pulse or beat of music that is physically associated with walking, heartbeats, tapping, chewing, and so forth. Similarly, the culture of musical production as well as the history that informs a given piece all combine to influence how music is produced and experienced.… Read the rest

Sentience is Physical

Sentience is all the rage these days. With large language models (LLMs) based on deep learning neural networks, the question-answering behavior of these systems takes on a curious approximation of talking with a smart person. Recently a member of Google’s AI team was fired after declaring one of their systems sentient. His offense? Violating public disclosure rules. I and many others who have a firm understanding of how these systems work—by predicting next words from their own previous productions combined with the question’s token stream—are quick to dismiss the claims of sentience. But what does sentience really amount to, and how can we determine if a machine becomes sentient?
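For what that prediction loop amounts to, here is a schematic sketch. The next_token_probs stand-in just emits arbitrary probabilities; a real LLM would condition them on the full context, but the generation loop itself, predict, sample, append, repeat, is the whole autoregressive trick.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def next_token_probs(context):
    # Stand-in for the real model: an LLM would map the context (the question
    # plus everything generated so far) to probabilities over its vocabulary.
    logits = rng.normal(size=len(VOCAB)) + np.array([0, 1, 1, 0.5, 0.5, 0.2])
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def generate(prompt, max_tokens=10):
    context = list(prompt)
    for _ in range(max_tokens):          # the whole trick: predict, sample, append, repeat
        token = rng.choice(VOCAB, p=next_token_probs(context))
        if token == "<eos>":
            break
        context.append(token)
    return " ".join(context)

print(generate(["where", "is", "the", "cat", "?"]))
```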

Note that there are those who differentiate sentience (being able to have feelings) from sapience (being able to have thoughts) and consciousness (some private, subjective, phenomenal sense of self). I am willing to blend them together a bit since the topic here isn’t narrowly trying to address the ethics of animal treatment, for example, where the distinction can be useful.

First we have the “imitation game” Turing-test-style approach to the question of how we might ever determine whether a machine has become sentient. If a remote machine can fool a human into believing it is a person, it must be as intelligent as a person and therefore sentient, as we presume people are. But this is a limited goal line. If the interaction is only over a limited domain like solving your cable internet installation problems, we don’t think of that as a sentient machine. Even against a larger domain of open-ended question answering, if the human doesn’t hit upon a revealing kind of error that a machine might make but a human would not, we remain unconvinced that the target is sentient.… Read the rest

We Are Weak Chaos

Recent work in deep learning networks has been largely driven by the capacity of modern computing systems to compute gradient descent over very large networks. We use gaming cards with GPUs that are great for parallel processing to perform the matrix multiplications and summations that are the primitive operations central to artificial neural network formalisms. Conceptually, another primary advance is the pre-training of networks as autocorrelators, which helps with smoothing out later “fine-tuning” training programs over other data. There are some additional contributions that are notable in impact and that reintroduce the rather old idea of recurrent neural networks: networks with outputs attached back to inputs that create resonant kinds of running states within the network. The original motivation of such architectures was to emulate the vast interconnectivity of real neural systems and to capture a more temporal appreciation of data, where past states affect ongoing processing, rather than a pure feed-through architecture. Neural networks are already nonlinear systems, so adding recurrence just ups the complexity of trying to figure out how to train them. Treating them as black boxes and using evolutionary algorithms was fashionable for me in the 90s, though the computing capabilities just weren’t up to anything other than small systems, as I found out when chastised for overusing a Cray at Los Alamos.
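A minimal sketch of the recurrence being described, with small random, untrained weights (an Elman-style step, not any particular published architecture): the previous hidden state is fed back alongside the new input, so the network keeps producing changing outputs even after the input goes silent.

```python
import numpy as np

rng = np.random.default_rng(3)

# Untrained toy recurrent step: the hidden state from the previous time step
# is fed back in alongside the new input, so the past shapes the present.
W_in = rng.normal(scale=0.5, size=(4, 2))    # input -> hidden
W_rec = rng.normal(scale=0.5, size=(4, 4))   # hidden -> hidden (the recurrence)
W_out = rng.normal(scale=0.5, size=(1, 4))   # hidden -> output

def step(x, h):
    h_new = np.tanh(W_in @ x + W_rec @ h)    # nonlinear mix of new input and old state
    return W_out @ h_new, h_new

h = np.zeros(4)
inputs = [np.array([1.0, 0.0]), np.array([0.0, 0.0]), np.array([0.0, 0.0])]
for t, x in enumerate(inputs):
    y, h = step(x, h)
    print(t, y)  # the output keeps evolving after the input goes silent: a running internal state
```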

But does any of this have anything to do with real brain systems? Perhaps. Here’s Toker et al., “Consciousness is supported by near-critical slow cortical electrodynamics,” in Proceedings of the National Academy of Sciences (with the unenviable acronym PNAS). The researchers and clinicians studied the electrical activity of macaque and human brains in a wide variety of states: epileptics undergoing seizures, macaque monkeys sleeping, people on LSD, people under the effects of anesthesia, and people with disorders of consciousness.… Read the rest
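The paper's analysis is far more sophisticated, but as a toy proxy for "distance from criticality" one can fit a linear autoregressive model to a signal and ask how close its poles sit to the unit circle; everything below, including the synthetic signals, is illustrative only and not the authors' method.

```python
import numpy as np

def pole_radius(signal, order=2):
    """Fit an AR(order) model by least squares and return the largest pole
    modulus: values near 1.0 mean linearized dynamics near the edge of
    instability. A toy proxy only."""
    x = np.asarray(signal, dtype=float)
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    coeffs, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return np.abs(np.roots(np.concatenate(([1.0], -coeffs)))).max()

def ar2_oscillator(r, w=0.3, n=20000, seed=7):
    # Noisy damped oscillator whose true pole radius is r (r -> 1 is critical).
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = 2 * r * np.cos(w) * x[t - 1] - r * r * x[t - 2] + rng.normal()
    return x

print(pole_radius(ar2_oscillator(r=0.90)))   # ~0.90: dissipative, far from critical
print(pole_radius(ar2_oscillator(r=0.995)))  # ~0.995: hovering near the critical boundary
```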

Intelligent Borrowing

There has been a continuous bleed of biological, philosophical, linguistic, and psychological concepts into computer science since the 1950s. Artificial neural networks were inspired by real ones. Simulated evolution was designed around metaphorical patterns of natural evolution. Philosophical, linguistic, and psychological ideas transferred as knowledge representation and grammars, both natural and formal.

Since computer science is a uniquely synthetic kind of science and not quite a natural one, borrowing and applying metaphors seems to be part of the normal mode of advancement in this field. There is a purely mathematical component to the field in the fundamental questions around classes of algorithms and what is computable, but there are also highly synthetic issues that arise from architectures that are contingent on physical realizations. Finally, the application to simulating intelligent behavior relies largely on three separate modes of operation:

  1. Hypothesize about how intelligent beings perform such tasks
  2. Import metaphors based on those hypotheses
  3. Given initial success, use considerations of statistical features and their mappings to improve on the imported metaphors (and, rarely, improve with additional biological insights)

So, for instance, we import a simplified model of neural networks as connected sets of weights representing some kind of variable activation or inhibition potential, combined with sudden synaptic firing. Abstractly we already have an interesting kind of transfer function that takes a set of input variables and maps them nonlinearly to the output variables. It’s interesting because being nonlinear means it can potentially compute very difficult relationships between the inputs and outputs.

But we see limitations, immediately, and these are observed in the history of the field. For instance, if you just have a single layer of these simulated neurons, the system isn’t expressive enough to compute anything beyond linearly separable functions (the classic counterexample being XOR), so we add a few layers and then more and more.… Read the rest
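The textbook illustration of that limitation is XOR: no single layer of threshold units can compute it, but one hidden layer of two units can. The weights below are hand-picked rather than trained, just to show the structure.

```python
import numpy as np

def layer(x, W, b):
    # One "layer": weighted sum followed by a hard nonlinear threshold.
    return (W @ x + b > 0).astype(float)

# Hand-picked weights (no training needed for the illustration).
W_hidden = np.array([[1.0, 1.0],     # fires on "at least one input on"  (OR)
                     [-1.0, -1.0]])  # fires on "not both inputs on"     (NAND)
b_hidden = np.array([-0.5, 1.5])
W_out = np.array([[1.0, 1.0]])       # AND of the two hidden features = XOR
b_out = np.array([-1.5])

for x in [np.array([0.0, 0.0]), np.array([0.0, 1.0]),
          np.array([1.0, 0.0]), np.array([1.0, 1.0])]:
    hidden = layer(x, W_hidden, b_hidden)
    print(x, "->", layer(hidden, W_out, b_out))  # 0, 1, 1, 0: XOR, unreachable with a single layer
```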