We Are Weak Chaos

Recent work in deep learning has been largely driven by the capacity of modern computing systems to perform gradient descent over very large networks. We use gaming cards with GPUs that are great for parallel processing to perform the matrix multiplications and summations that are the primitive operations central to artificial neural network formalisms. Conceptually, another primary advance is the pre-training of networks as autocorrelators, which helps smooth out later “fine-tuning” training programs over other data. There are some additional contributions that are notable in impact and that reintroduce the rather old idea of recurrent neural networks: networks with outputs attached back to inputs that create resonant kinds of running states within the network. The original motivation of such architectures was to emulate the vast interconnectivity of real neural systems and to capture a more temporal appreciation of data, where past states affect ongoing processing, rather than a pure feed-through architecture. Neural networks are already nonlinear systems, so adding recurrence just ups the complexity of trying to figure out how to train them. Treating them as black boxes and using evolutionary algorithms was fashionable for me in the 90s, though the computing capabilities just weren’t up to anything other than small systems, as I found out when chastised for overusing a Cray at Los Alamos.
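The recurrent idea can be sketched in a few lines of Python: each new hidden state mixes the current input with the previous hidden state, so past states shape ongoing processing rather than flowing straight through. A minimal sketch — the weight shapes, random values, and names are purely illustrative, not from any particular framework:

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 3))   # input-to-hidden weights
W_rec = rng.normal(size=(4, 4))  # hidden-to-hidden (recurrent) weights

def rnn_step(x, h_prev):
    # The new state depends on the current input AND the previous state;
    # that feedback is what distinguishes this from a feed-through network.
    return np.tanh(W_in @ x + W_rec @ h_prev)

h = np.zeros(4)
for x in rng.normal(size=(5, 3)):  # a short input sequence
    h = rnn_step(x, h)
```

Unrolled over a sequence like this, the loop is also what makes training hard: gradients have to flow back through every past step.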

But does any of this have anything to do with real brain systems? Perhaps. Here’s Toker et al., “Consciousness is supported by near-critical slow cortical electrodynamics,” in Proceedings of the National Academy of Sciences (with the unenviable acronym PNAS). The researchers and clinicians studied the electrical activity of macaque and human brains in a wide variety of states: epileptics undergoing seizures, macaque monkeys sleeping, people on LSD, those under the effects of anesthesia, and people with disorders of consciousness.… Read the rest

Intelligent Borrowing

There has been a continuous bleed of biological, philosophical, linguistic, and psychological concepts into computer science since the 1950s. Artificial neural networks were inspired by real ones. Simulated evolution was designed around metaphorical patterns of natural evolution. Philosophical, linguistic, and psychological ideas transferred as knowledge representation and grammars, both natural and formal.

Since computer science is a uniquely synthetic kind of science and not quite a natural one, borrowing and applying metaphors seems to be part of the normal mode of advancement in this field. There is a purely mathematical component to the field in the fundamental questions around classes of algorithms and what is computable, but there are also highly synthetic issues that arise from architectures that are contingent on physical realizations. Finally, the application to simulating intelligent behavior relies largely on three separate modes of operation:

  1. Hypothesize about how intelligent beings perform such tasks
  2. Import metaphors based on those hypotheses
  3. Given initial success, use considerations of statistical features and their mappings to improve on the imported metaphors (and, rarely, improve with additional biological insights)

So, for instance, we import a simplified model of neural networks as connected sets of weights representing some kind of variable activation or inhibition potentials combined with sudden synaptic firing. Abstractly we already have an interesting kind of transfer function that takes a set of input variables and has a nonlinear mapping to the output variables. It’s interesting because being nonlinear means it can potentially compute very difficult relationships between the input and output.
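That transfer function can be made concrete: a weighted sum of inputs (excitation or inhibition by the sign of each weight) pushed through a nonlinear squashing function. A toy sketch, with a logistic sigmoid standing in for the sudden-firing nonlinearity (all weights and values are illustrative):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs: positive weights activate, negative inhibit.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Nonlinear squashing function standing in for synaptic firing.
    return 1.0 / (1.0 + math.exp(-total))

out = neuron([0.5, -1.0], weights=[2.0, 1.0], bias=0.0)  # → 0.5
```

The nonlinearity is the whole point: compose enough of these units and the input-to-output mapping can encode relationships no weighted sum alone could.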

But we see limitations immediately, and these are observed in the history of the field. For instance, if you just have a single layer of these simulated neurons, the system can’t compute functions that aren’t linearly separable (XOR being the classic example), so we add a few layers and then more and more.… Read the rest
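The single-layer limitation and its fix can be shown in a few lines: no single weighted threshold computes XOR, but a hidden layer of two units does. A sketch with hand-picked weights — one well-known solution, not a trained network:

```python
def step(x, threshold):
    # Hard threshold unit: fires (1) when input reaches the threshold.
    return 1 if x >= threshold else 0

def xor(a, b):
    # Hidden layer: an OR unit and an AND unit over the same two inputs.
    h_or = step(a + b, 0.5)
    h_and = step(a + b, 1.5)
    # Output unit: OR minus AND, i.e. true for exactly one active input.
    return step(h_or - h_and, 0.5)

table = [xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]  # → [0, 1, 1, 0]
```

Strip out the hidden layer and no choice of weights on `a` and `b` alone reproduces that truth table — which is exactly why the layers multiplied.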

Evolving Ought Beyond Is

The is-ought barrier is a regularly visited topic since its initial formulation by Hume. It certainly seems unassailable in that a syllogism designed to claim that what ought to be done is predicated on what is (observable and natural) must always fail. The reason for this is that the ought framework (call it ethics) can be formulated in any particular way to ascribe the good and the bad. A serial killer might believe that killing certain people eliminates demons from the world and is therefore good, regardless of a general prohibition that killing others is bad. In this case, we might argue that the killer is simply mistaken in her beliefs and that a lack of accurate information is guiding her. But even the claim that there is an “is” in this case (killing people results in a worse society/people are entitled to be free from murder/etc.) doesn’t really stay on the factual side of the barrier. The is evaporates into an ought at the very outset.

There are efforts to enliven some type of naturalistic underpinnings of moral reasoning, like Sam Harris’s The Moral Landscape that postulates an adaptive topology where the consequences of individual and group actions result in improvements or harm to humanity as a whole. The end result is a kind of abstract consequentialism beneath local observables that is invigorated by some brain science. Here’s an example: (1) Better knowledge of the biological origins of disease can result in behavior that reduces disease harm; (2) It is therefore moral to improve education about biology; (3) Disease harm is reduced resulting in reduced suffering. This doesn’t quite make it across the barrier, though, because it presupposes an ought for humans that reduces the imperatives of the disease itself (what about its thriving?),… Read the rest

The Abnormal Normal

Another day, another COVID-19 conspiracy theory making the rounds. First there was the Chinese bioweapons idea, then the 5G radiation theory that led to tower vandalism, and now the Plandemic video. Washington Post covers the latter while complaining that tech companies are incompetently ineffectual in stopping the spread of these mind viruses that accompany the biological ones. Meanwhile, a scientist who appears in the video is reviewed and debunked in AAAS Science based on materials she provided them. I’m still interested in these “sequences” in the Pacific Ocean. I’ve spent some time in there and may need to again.

The WaPo article ends with a suggestion that we all need to be more skeptical of dumb shit, though I’m guessing that that message will probably not reach the majority of believers or propagators of Plandemic-style conspiracy thinking. So it goes with all the other magical nonsense that percolates through our ordinary lives, confined as they are to only flights of fancy and hopeful aspirations for a better world.

Broadly, though, it does appear that susceptibility to conspiracy theories correlates with certain mental traits that linger at the edge of mental illnesses. Evita March and Jordan Springer got 230 mostly undergraduate students to answer online questionnaires that polled them on mental traits of schizotypy, Machiavellianism, trait narcissism, and trait psychopathy. They also evaluated the students’ belief in odd/magical ideas. Their paper, “Belief in conspiracy theories: The predictive role of schizotypy, Machiavellianism, and primary psychopathy,” shows significant correlations with belief in conspiracies. Interestingly, they suggest that the urge to manipulate others in Machiavellianism and psychopathy may, in turn, lead to an innate fear of being manipulated oneself.

Mental illness and certain psychological traits have always been a bit of an evolutionary mystery.… Read the rest

A Pause in Attention

I routinely take a pause in what I am doing to reflect on my goals and what I’ve learned. I’m sure you do too. I had been listening to the recorded works of Jean Sibelius and Carl Nielsen, but am now on to Sir Edward Elgar and Josef Suk. Billie Eilish and Vampire Weekend didn’t last long. I gave up on my deep learning startup to pursue another, less abstract technology. I revamped this site. I put trail running on pause and have been lifting weights more. I shifted writing efforts to a new series centered on manipulating animal physiologies for war and espionage.

These pauses feel like taking an expansive stretch after sitting still for a long period; a reset of the mental apparatus that repositions the mind for a new phase. For me, one takeaway from recent events, up to and including the great pause of the coronavirus pandemic, is a reconsideration of the amount of silly and pointless content we absorb. Just a few examples: The drama of Twitter feuds among the glitterati and the political class, cancel culture, and shaming. The endless technology, photography, audiophile, fashion, and food reporting and communal commenting that serves to channel our engagement with products and services. Even the lightweight philosophizing that goes with critiques of tradition or society has the same basic set of drivers.

What’s shared among them is the desire for attention, an intellectual posturing to attract and maintain the gaze of others. But it does have a counterpoint, I believe, in a grounding in facts, reason, and a careful attention to novelty. The latter may be a bit hard to pin down, though. It is easy to mistake randomness or chaos for novelty.… Read the rest

Pro-Individualism, Pro-Social, Anti-Cousin

I tend towards the skeptical in the face of monocausal explanatory frameworks, especially for ideas as big as human history and the factors that shaped it. The risk of being wrong is far too high while the payoff in terms of anything more than cocktail banter is too low, be it as a shaper of modern policy or bearer of moral prerogatives.

So the widely covered discussion of Schulz et al.’s AAAS Science paper, “The Church, intensive kinship, and global psychological variation” (paywall), is a curiosity that demands cautious reading at the very least. The hypothesis is that Catholic Church prohibitions on consanguine marriage that began in the medieval period in Western Europe explain globally unusual aspects of the psychology of the people of those regions. By banning cousin marriage even out to the 6th degree in many cases, the Church forced people away from tribal ideas and more towards neolocal family structures. That, in turn, led to pro-social attitudes based on social trust rather than family power, and towards more individualistic and independent psychologies overall.

The methodology of the study is fairly complex: look at the correlations between consanguine marriage patterns and psychological attitudes, then try to explain those correlations away with a wide range of alternative data patterns, like the availability of irrigation or proximity to Roman roads, and so forth. Data experiments that look at Eastern Orthodox versus Western Church differences, or even at differences between northern and southern Italy, are used to test the theory further.

In the end, or at least until other data supervenes, the hypothesis stands as showing that reducing cousin or similar marriages is a “causal channel” for these patterns of individualism and social trust.… Read the rest

Deep Learning with Quantum Decoherence

Getting back to metaphors in science, Wojciech Zurek’s so-called Quantum Darwinism is in the news due to a series of experimental tests. In Quantum Darwinism (QD), the collapse of the wave function (more properly the “extinction” of states) is a result of decoherence from environmental entanglement. There is a kind of replication in QD, where pointer states are multiplied, and then a kind of environmental selection as well. There is no variation per se, however, though some might argue that the pointer states imprinted by the environment are variants of the originals. That makes the metaphor a bit thin at the edges, but it is close enough for the core idea to fit most of the floor plan of Darwinism. Indeed, some champion it as part of a more general model for everything. Even selection among viable multiverse bubbles has a similar feel to it: some survive while others perish.

I’ve been simultaneously studying quantum computing and complexity theories that are getting impressively well developed. Richard Cleve’s An Introduction to Quantum Complexity Theory and John Watrous’s Quantum Computational Complexity are notable in their bridging from traditional computational complexity to this newer world of quantum computing using qubits, wave functions, and even decoherence gates.

Decoherence sucks for quantum computing in general, but there may be a way to make use of it. For instance, an artificial neural network (ANN) also has some interesting Darwinian-like properties to it. The initial weights in an ANN are typically random real values. This is designed to simulate the relative strength of neural connections. Real neural connections are much more complex than this, exhibiting interesting cyclic behavior, saturating and suppressing based on neurotransmitter availability, and so forth, but assuming just a straightforward pattern of connectivity has allowed for significant progress.… Read the rest
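The Darwinian-like flavor can be made literal in a toy way: draw a population of random weight vectors, score each against some objective, and let “environmental selection” keep the fittest. A sketch under the assumption of an arbitrary stand-in fitness function (the target vector and population size are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(w):
    # Stand-in objective: how closely the weights match a fixed target.
    target = np.array([0.5, -0.25, 1.0])
    return -np.sum((w - target) ** 2)

# A "population" of random initial weight vectors...
population = rng.normal(size=(50, 3))
# ...and environmental selection: keep the fittest individual.
best = max(population, key=fitness)
```

Iterate this with mutation and you have the black-box evolutionary training that was fashionable in the 90s; the random initialization alone is just the variation step.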

A Most Porous Barrier

Whenever there is a scientific—or even a quasi-scientific—theory invented, there are those who take an expansive view of the theory, broadly applying it to other areas of thought. This is perhaps inherent in the metaphorical nature of these kinds of thought patterns. Thus we see Darwinian theory influenced by Adam Smith’s “invisible hand” of economic optimization. Then we get Spencer’s Social Darwinism arising from Darwin. And E.O. Wilson’s sociobiology leads to evolutionary psychology, immediately following an activist’s pitcher of ice water.

The is-ought barrier tends towards porousness, allowing the smuggling of insights and metaphors lifted from the natural world as explanatory footwork for our complex social and political interactions. After all, we are as natural as we are social. But at the same time, we know that science is best when it is tentative and subject to infernal levels of revision and reconsideration. Decisions about social policy derived from science, and especially those that have significant human impact, should be cushioned by a tentative level of trust as well.

E.O. Wilson’s most recent book, Genesis: The Deep Origin of Societies, is a continuation of his late conversion to what is now referred to as “multi-level selection,” where natural selection is believed to operate at multiple levels, from genes to whole societies. It remains a controversial theory that has been under development and under siege since Darwin’s time, when the mechanism of inheritance was not understood.

The book is brief and does not provide much, if any, new material since his Social Conquest of Earth, which was significantly denser and contained notes derived from his controversial 2010 Nature paper that called into question whether kin selection was overstated as a gene-level explanation of altruism and sacrifice within eusocial species.… Read the rest

Doubt at the Limit

I seem to have a central theme to many of the last posts that is related to the demarcation between science and non-science, and also to the limits of what rationality allows where we care about such limits. This is not purely abstract, though, as we can see in today’s anti-science movements, whether anti-vaccination, flat Earthers, climate change deniers, or intelligent design proponents. Just today, Ars Technica reports on the first of these. The speakers at the event, held in close proximity to a massive measles outbreak, ranged from a “disgraced former gastroenterologist” to an angry rabbi. Efforts to counter them, in the form of a letter from a county supervisor and another rabbi, may have had an impact on the broader community, but probably not the die-hards of the movement.

Meanwhile, Lee McIntyre at Boston University suggests what we are missing in these engagements in a great piece in Newsweek. McIntyre applies the same argument to flat Earthers that I have applied to climate change deniers: what we need to reinforce is the value and, importantly, the limits inherent in scientific reasoning. Insisting, for example, that climate change science is 100% squared away just fuels the micro-circuits in the so-called meta-cognitive strategies regions of the brains of climate change deniers. Instead, McIntyre recommends science engages the public in thinking about the limits of science, showing how doubt and process lead us to useable conclusions about topics that are suddenly fashionably in dispute.

No one knows if this approach is superior to alternatives like the authorities’ letter-writing response to the vaccination seminar, and it certainly seems longer-term in that it needs to build against entrenched ideas and opinions, but it at least argues for a new methodology.… Read the rest

Free Will and Algorithmic Information Theory (Part II)

Bad monkey

So we get some mild form of source determinism out of Algorithmic Information Complexity (AIC), but we haven’t addressed the form of free will that deals with moral culpability at all. That free will requires that we, as moral agents, are capable of making choices that have moral consequences. Another way of saying it is that given the same circumstances we could have done otherwise. After all, all we have is a series of if/then statements that must be implemented in wetware and they still respond to known stimuli in deterministic ways. Just responding in model-predictable ways to new stimuli doesn’t amount directly to making choices.

Let’s expand the problem a bit, however. Instead of a lock-and-key recognition of integer “foodstuffs” we have uncertain patterns of foodstuffs and fallible recognition systems. Suddenly we have a probability problem with P(food|n) [or even P(food|q(n)) where q is some perception function] governed by Bayesian statistics. Clearly we expect evolution to optimize towards better models, though we know that all kinds of historical and physical contingencies may derail perfect optimization. Still, if we did have perfect optimization, we know what that would look like for certain types of statistical patterns.
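The recognition problem above can be written out as a direct Bayesian update: the posterior P(food|pattern) combines a prior on food with the likelihoods of the perceived pattern under food and non-food. A minimal sketch with made-up numbers for a fallible recognizer:

```python
def posterior_food(p_pattern_given_food, p_pattern_given_not, p_food):
    # Bayes' rule: P(food | pattern) =
    #   P(pattern | food) * P(food) / P(pattern)
    p_pattern = (p_pattern_given_food * p_food
                 + p_pattern_given_not * (1 - p_food))
    return p_pattern_given_food * p_food / p_pattern

# The pattern is likely given food (0.9), false positives happen (0.1),
# and the prior on food is low (0.2).
p = posterior_food(0.9, 0.1, 0.2)
```

With these numbers the posterior lands around 0.69: even a good detector with a weak prior yields an uncertain verdict, which is exactly the kind of model evolution would be pressured to sharpen.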

What is an optimal induction machine? AIC and variants have been used to define that machine. First, we have Solomonoff induction from around 1960. But we also have Jorma Rissanen’s Minimum Description Length (MDL) theory from 1978 that casts the problem more in terms of continuous distributions. Variants are available, too, from Minimum Message Length, to Akaike’s Information Criterion (AIC, confusingly again), Bayesian Information Criterion (BIC), and on to Structural Risk Minimization via Vapnik-Chervonenkis learning theory.

All of these theories involve some kind of trade-off between model parameters, the relative complexity of model parameters, and the success of the model on the trained exemplars.… Read the rest
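That trade-off is easy to see with the Bayesian Information Criterion from the list above: the fit term rewards low error on the trained exemplars while the penalty term charges for each parameter. A toy comparison of a linear and a degree-5 polynomial fit to noisy linear data (the data, noise level, and degrees are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 40)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)  # truly linear data

def bic(y, y_hat, k):
    # BIC = n*log(RSS/n) + k*log(n): goodness of fit on the exemplars
    # plus a penalty that grows with the number of parameters k.
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

scores = {}
for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)
    scores[degree] = bic(y, np.polyval(coeffs, x), degree + 1)
# The simpler model should win despite the degree-5 fit's lower raw error.
```

MDL, AIC, and the rest differ in how they price the complexity term, but all of them formalize the same intuition: an optimal induction machine shouldn't spend parameters memorizing noise.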