Find the Alien

Assembly Theory (AT) (original paper) is some new theoretical chemistry that tries to assess the relative complexity of the molecular underpinnings of life, even when the chemistry might be completely alien. For instance, if we send a probe to a Jovian moon and there are new microscopic creatures in the ocean, how will we figure that out? AT assumes that all living organisms require a certain minimal molecular complexity in order to function, since that is what we observe in life on Earth. The chemists experimentally confirmed that mass spectrometry is a fairly reliable way of differentiating living things and their byproducts from other substances by complexity. Of course, they only have Earthly living things to test, but they had no false positives in their comparison set of samples, though some substances like beer tended to score unusually high in their spectral analysis. The theory is that when a mass spec ionizes a sample and routes it through magnetic and electric fields, the complexity of the original molecules is represented in the complexity of the spray of molecular masses recorded by the detectors.
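
To make the core idea concrete, here is a toy sketch of an assembly-index-style measure on strings rather than molecules: the minimum number of join operations needed to build a target, where anything already built can be reused. This is only an illustration of the concept under my own simplifying assumptions; the function name and the brute-force search are mine, not the authors' algorithm for molecules or spectra.

```python
def assembly_index(target: str) -> int:
    """Toy assembly-index-style measure: the minimum number of join
    operations needed to build `target` from single characters, where
    anything already constructed can be reused."""
    pool0 = frozenset(target)   # single characters are the free building blocks
    best = [len(target)]        # loose upper bound on the number of joins

    def search(pool: frozenset, steps: int) -> None:
        if steps >= best[0]:
            return              # can't beat the best pathway found so far
        if target in pool:
            best[0] = steps
            return
        # try joining any two objects already built, keeping only results
        # that still appear somewhere in the target
        for a in pool:
            for b in pool:
                joined = a + b
                if joined not in pool and joined in target:
                    search(pool | {joined}, steps + 1)

    search(pool0, 0)
    return best[0]

# A string with reusable substructure assembles in fewer steps than a
# string with no repetition of the same length:
print(assembly_index("abcabc"))  # 3 joins: ab, abc, abcabc
print(assembly_index("abcdef"))  # 5 joins: one per additional character
```

The reuse of previously built pieces is what lets structured objects score low relative to their size, which is the same intuition AT applies to molecules.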

But what is “complexity” exactly? There are a great number of candidates, as Seth Lloyd notes in this little round-up paper that I linked to previously. Complexity intuitively involves something like a trade-off between randomness and uniformity, but also reflects internal repetition with variety. There is a mathematical formalism that in full attribution is “Solomonoff-Chaitin-Kolmogorov Complexity”—but we can just call it algorithmic complexity (AC) for short—that has always been an idealized way to think about complexity: take the smallest algorithm that can produce a pattern; the length of that algorithm, in bits, is the complexity of the pattern.… Read the rest
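
AC itself is uncomputable, so in practice a compressor is often used as a rough upper-bound proxy. Here is a minimal sketch of that idea; the function name and the choice of samples are mine, purely for illustration.

```python
import os
import zlib

def compressed_ratio(data: bytes) -> float:
    """Crude stand-in for algorithmic complexity: the size of a compressed
    representation relative to the original. Lower means more compressible,
    i.e. a shorter 'program' suffices to reproduce the data."""
    return len(zlib.compress(data, 9)) / len(data)

samples = {
    "uniform":  b"A" * 1000,                # pure uniformity: tiny description
    "random":   os.urandom(1000),           # pure randomness: incompressible
    "repeated": (b"GATTACA" * 143)[:1000],  # internal repetition with variety
}

for name, data in samples.items():
    print(f"{name:9s} {compressed_ratio(data):.3f}")
```

Note that the uniform and random samples sit at the two extremes, while the intuitive notion of complexity in the paragraph above lives in the structured middle, which is part of why raw AC alone doesn't settle the question.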

We Are Weak Chaos

Recent work in deep learning networks has been largely driven by the capacity of modern computing systems to compute gradient descent over very large networks. We use gaming cards with GPUs that are great for parallel processing to perform the matrix multiplications and summations that are the primitive operations central to artificial neural network formalisms. Conceptually, another primary advance is pre-training networks as autocorrelators, which helps smooth out later “fine tuning” over other data. There are additional contributions, notable in impact, that reintroduce the rather old idea of recurrent neural networks: networks with outputs attached back to inputs, creating resonant running states within the network. The original motivation for such architectures was to emulate the vast interconnectivity of real neural systems and to capture a more temporal appreciation of data, where past states affect ongoing processing, rather than a pure feed-forward architecture. Neural networks are already nonlinear systems, so adding recurrence just ups the complexity of figuring out how to train them. Treating them as black boxes and using evolutionary algorithms was fashionable for me in the 90s, though the computing capabilities just weren't up to anything other than small systems, as I found out when chastised for overusing a Cray at Los Alamos.
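
For readers who haven't seen one, here is a minimal sketch of the recurrence being described: an Elman-style cell whose hidden state feeds back into the next step, so past inputs shape how current inputs are processed. The sizes, weight names, and random values are arbitrary illustrations, not any particular published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# A minimal recurrent cell: the hidden state h feeds back into the next
# step, so the network carries a running memory of its input history.
n_in, n_hidden = 4, 8
W_in  = rng.normal(scale=0.5, size=(n_hidden, n_in))      # input -> hidden
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # hidden -> hidden (the recurrence)

def step(h, x):
    # nonlinearity applied to the sum of the driven and fed-back signals
    return np.tanh(W_in @ x + W_rec @ h)

h = np.zeros(n_hidden)
for t, x in enumerate(rng.normal(size=(10, n_in))):  # a short input sequence
    h = step(h, x)
    print(t, np.round(h[:3], 3))  # the running state evolves with history
```

Training that loop is where the difficulty mentioned above comes from: gradients have to be carried back through every time step of an already nonlinear system, which is part of what once made black-box evolutionary search an attractive alternative.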

But does any of this have anything to do with real brain systems? Perhaps. Here's Toker et al., “Consciousness is supported by near-critical slow cortical electrodynamics,” in the Proceedings of the National Academy of Sciences (with the unenviable acronym PNAS). The researchers and clinicians studied the electrical activity of macaque and human brains in a wide variety of states: epileptics undergoing seizures, macaque monkeys sleeping, people on LSD, those under the effects of anesthesia, and people with disorders of consciousness.… Read the rest
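
“Near-critical” here is about where the recorded dynamics sit between regular and chaotic regimes. As a generic illustration of how chaoticity can be scored from a time series (not the paper's actual pipeline, which has its own preprocessing and estimator), here is a sketch of the Gottwald-Melbourne 0-1 test, where K near 0 suggests regular dynamics and K near 1 suggests chaos; the function name and test signals are mine.

```python
import numpy as np

def zero_one_chaos_K(x, n_freqs=50, seed=0):
    """Gottwald-Melbourne 0-1 test: K ~ 0 for regular dynamics, K ~ 1 for chaos."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    N = len(x)
    j = np.arange(1, N + 1)
    n = np.arange(1, N // 10 + 1)        # lags over which displacement growth is measured
    Ks = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_freqs):
        # project the series onto a rotation with angle c
        p = np.cumsum(x * np.cos(j * c))
        q = np.cumsum(x * np.sin(j * c))
        # mean squared displacement of the (p, q) trajectory at each lag
        M = np.array([np.mean((p[k:] - p[:-k]) ** 2 + (q[k:] - q[:-k]) ** 2) for k in n])
        # subtract the oscillatory term (the "modified" displacement of the test)
        D = M - np.mean(x) ** 2 * (1 - np.cos(n * c)) / (1 - np.cos(c))
        # K is the correlation of displacement with lag: linear growth => chaos
        Ks.append(np.corrcoef(n, D)[0, 1])
    return float(np.median(Ks))

# Sanity check on known signals: a sine (regular) vs. the logistic map (chaotic)
t = np.arange(2000)
sine = np.sin(0.05 * t)
logistic = np.empty(2000)
logistic[0] = 0.4
for i in range(1999):
    logistic[i + 1] = 3.99 * logistic[i] * (1 - logistic[i])

print("sine     K ~", round(zero_one_chaos_K(sine), 2))      # near 0
print("logistic K ~", round(zero_one_chaos_K(logistic), 2))  # near 1
```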

Incompressibility and the Mathematics of Ethical Magnetism

One of the most intriguing aspects of the current U.S. border crisis is the way that human rights and American decency get articulated in the public sphere of discourse. An initial pull is raw emotion and empathy; then there are counterweights, where the long-term consequences of existing policies are weighed against their exigent effects; and then there are crackpot theories of “crisis actors” and whatnot as bizarro-world distractions. But, if we accept the general thesis of our enlightenment values carrying us ever forward into increasing rights for all, reduced violence and war, and the closing of the curtain on the long human history of despair, poverty, and hunger, we must also ask more generally how this comes to be. Steven Pinker has certainly rounded up some social theories, but what kind of meta-ethics might be at work that seems to push human civilization towards these positive outcomes?

Per the last post, I take the position that we can potentially formulate meaningful sentences about what “ought” to be done, and that those sentences are, in fact, meaningful precisely because they are grounded in the semantics we derive from real-world interactions. How does this work? Well, we can invoke the so-called Cornell Realist argument that the semantics of a word like “ought” is not as flexible as Moore's Open Question argument suggests. Indeed, if we instead look at the natural world and the theories that we have built up about it (generally “scientific theories” but also, perhaps, “folk scientific ideas” or “developing scientific theories”), certain concepts take on the character of being so-called “joints of reality.” That is, they are less changeable than other concepts and become referential magnets that have an elite status among the concepts we use for the world.… Read the rest