The Goldilocks Complexity Zone

Since my time in the early 90s at the Santa Fe Institute, I’ve been fascinated by the informational physics of complex systems. What are the requirements of an abstract system that is capable of complex behavior? How do our intuitions about complex behavior or form match up with mathematical approaches to describing complexity? For instance, we might consider a snowflake complex, but it is also regular in its structure, driven by an interaction between crystal growth and the surrounding air. The classic examples of coastlines and fractal self-similarity also seem complex but are not capable of complex behavior.

So what is a good way of thinking about complexity? There is actually a good range of ideas about how to characterize complexity; Seth Lloyd rounds up many of them here. The intuition that drives many of them is that complexity seems to be associated with distributions of relationships and objects that are somehow poised between a single state and a uniformly random set of states. Complex things, be they living organisms or computers running algorithms, should sit in a Goldilocks zone when each part is examined and those parts are somehow summed into a single measure.

We can easily construct a complexity measure that captures some of these intuitions. Let’s look at three strings of characters:

x = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

y = menlqphsfyjubaoitwzrvcgxdkbwohqyxplerz

z = the fox met the hare and the fox saw the hare
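
As one crude way to make this concrete, here is a possible Java sketch (an illustration only, using per-character Shannon entropy and Deflate-compressed size as rough proxies for disorder and algorithmic complexity):

import java.util.HashMap;
import java.util.Map;
import java.util.zip.Deflater;

public class GoldilocksDemo {
    // Per-character Shannon entropy in bits: roughly 0 for x, near the maximum for y, intermediate for z.
    static double entropyPerChar(String s) {
        Map<Character, Integer> counts = new HashMap<>();
        for (char c : s.toCharArray()) counts.merge(c, 1, Integer::sum);
        double h = 0.0;
        for (int k : counts.values()) {
            double p = (double) k / s.length();
            h -= p * Math.log(p) / Math.log(2);
        }
        return h;
    }

    // Deflate-compressed size as a stand-in for algorithmic (Kolmogorov) complexity.
    static int compressedSize(String s) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(s.getBytes());
        deflater.finish();
        byte[] buf = new byte[s.length() * 2 + 64];
        int n = deflater.deflate(buf);
        deflater.end();
        return n;
    }

    public static void main(String[] args) {
        String x = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
        String y = "menlqphsfyjubaoitwzrvcgxdkbwohqyxplerz";
        String z = "the fox met the hare and the fox saw the hare";
        for (String s : new String[]{x, y, z}) {
            System.out.printf("entropy/char=%.2f bits, compressed=%d bytes: %s%n",
                    entropyPerChar(s), compressedSize(s), s);
        }
    }
}

A Goldilocks measure would reward strings like z that show structure on both axes rather than sitting at either extreme.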

Now we would likely all agree that y and z are more complex than x, and I suspect most would agree that y looks like gibberish compared with z. Of course, y could be a sequence of weirdly coded measurements or something, or encrypted such that the message appears random.… Read the rest

Bayesianism and Properly Basic Belief

Xu and Tenenbaum, in Word Learning as Bayesian Inference (Psychological Review, 2007), develop a very simple Bayesian model of how children (and even adults) build semantic associations based on accumulated evidence. In short, they find contrastive elimination approaches as well as connectionist methods unable to explain the patterns that are observed. Specifically, the most salient problem with these other methods is that they lack the rapid transition that is seen when three exemplars are presented for a class of objects associated with a word versus one exemplar. Adults and kids (the former even more so) just get word meanings faster than those other models can easily show. Moreover, a space of contending hypotheses that are weighted according to their Bayesian statistics provides an escape from the all-or-nothing of hypothesis elimination while preserving some of the “soft” commitment properties that connectionist models provide.

The mathematical trick for the rapid transition is rather interesting. They formulate a “size principle” that weights the likelihood of a given hypothesis (this object is most similar to a “feb,” for instance, rather than the many other object sets that are available) according to a scaling that is exponential in the number of exposures. Hence the rapid transition:

Hypotheses with smaller extensions assign greater probability than do larger hypotheses to the same data, and they assign exponentially greater probability as the number of consistent examples increases.
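
To see the size principle in action, here is a toy Java sketch; the hypothesis names and extension sizes below are invented for illustration and are not taken from the paper. With a uniform prior, the likelihood of n consistent examples under a hypothesis whose extension has |h| members is (1/|h|)^n, so the posterior concentrates sharply on the smallest consistent hypothesis as n grows:

public class SizePrincipleDemo {
    public static void main(String[] args) {
        // Invented nested hypotheses for a novel word, with the sizes of their extensions.
        String[] names = {"dalmatians", "dogs", "animals"};
        double[] sizes = {10, 100, 1000};
        for (int n : new int[]{1, 3}) {                     // number of consistent exemplars seen
            double[] posterior = new double[names.length];
            double total = 0;
            for (int i = 0; i < names.length; i++) {
                posterior[i] = Math.pow(1.0 / sizes[i], n); // size-principle likelihood, uniform prior
                total += posterior[i];
            }
            for (int i = 0; i < names.length; i++) {
                System.out.printf("n=%d  P(%-10s | data) = %.4f%n", n, names[i], posterior[i] / total);
            }
        }
    }
}

With these made-up numbers, the smallest hypothesis climbs from about 0.90 of the posterior after one example to better than 0.999 after three, which is the rapid transition described above.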

It should be noted that they don’t claim that the psychological or brain machinery implements exactly this algorithm. As is usual in these matters, it is instead likely that whatever machinery is involved, it simply has at least these properties. It may very well be that connectionist architectures can do the same but that existing approaches to connectionism simply don’t do it quite the right way.… Read the rest

Lucifer on the Beach

I picked up a whitebait pizza while stopped along the West Coast of New Zealand tonight. Whitebait are tiny little swarming immature fish that can be scooped out of estuarial river flows using big-mouthed nets. They run, they dart, and it is illegal to change river exit points to try to channel them for capture. Hence, whitebait is semi-precious, commanding NZD70-130/kg, which explains why there was a size limit on my pizza: only the small one was available.

By the time I was finished the sky had aged from cinereal to iron in a satire of the vivid, watch-me colors of CNN International flashing Donald Trump’s linguistic indirection across the television. I crept out, setting my headlamp to red LEDs designed to minimally interfere with night vision. Just up away from the coast, hidden in the impossible tangle of cold rainforest, there was a glow worm dell. A few tourists conjured with flashlights facing the ground to avoid upsetting the tiny Arachnocampa luminosa that clung to the walls inside the dark garden. They were like faint stars composed into irrelevant constellations, with only the human mind to blame for any observed patterns.

And the light, what light, like white-light LEDs recently invented, but a light that doesn’t flicker or change, and is steady under the calmest observation. Driven by luciferin and luciferase, these tiny creatures lure a few scant light-seeking creatures to their doom and as food for absorption until they emerge to mate, briefly, lay eggs, and then die.

Lucifer again, properly named from the Latin for light-bringer: the chemical basis for bioluminescence was largely isolated in the middle of the 20th Century. Yet there is this biblical stigma hanging over the term—one that really makes no sense at all.… Read the rest

Non-Cognitivist Trajectories in Moral Subjectivism

When I say that “greed is not good,” the everyday mind creates a series of images and references, from Gordon Gekko’s inverse proposition to general feelings about inequality and our complex motivations as people. There is a network of feelings and, perhaps, some facts that might be recalled or searched for to justify the position. As a moral claim, though, it might most easily be considered connotative rather than cognitive in that it suggests a collection of secondary emotional expressions and networks of ideas that support or deny it.

I mention this (the theories that are consonant with this kind of reasoning are called non-cognitivist and, variously, emotive and expressive) because there is a very real tendency to reduce moral ideas to objective versus subjective, especially in atheist-theist debates. I recently watched one such debate between Matt Dillahunty and an Orthodox priest where the standard litany revolved around claims about objectivity versus subjectivity of truth. Objectivity of truth is often portrayed as something like, “without God there is no basis for morality. God provides moral absolutes. Therefore atheists are immoral.” The atheists inevitably reply that the scriptural God is a horrific demon who slaughters His creation and condones slavery and other ideas that are morally repugnant to the modern mind. And then the religious descend into what might be called “advanced apologetics” that try to diminish, contextualize, or dismiss such objections.

But we can be fairly certain, regardless of the tradition, that there are inevitable nuances to any kind of moral structure. Thou shalt not kill gets revised to thou shalt not murder. So we have to weigh manslaughter in pursuit of a greater good against any rules-based approach to such a simplistic commandment.… Read the rest

A Critique of Pure Randomness

The notion of randomness brings about many interesting considerations. For statisticians, randomness is a series of events with chances that are governed by a distribution function. In everyday parlance, equally likely means random, while an even more common semantics is based on both how unlikely and how unmotivated an event might be (“That was soooo random!”). In physics, only certain physical phenomena can be said to be truly random, including the probability of a given nucleus decaying into other nuclei via fission. The exact position of a quantum thingy is equally random when its momentum is nailed down, and vice versa. Vacuums have a certain chance of spontaneously creating matter, too, and that chance appears to be perfectly random. In algorithmic information theory, a random sequence of bits is a sequence that can’t be represented by a smaller descriptive algorithm: it is incompressible. Strangely enough, we simulate random number generators using a compact algorithm whose complicated series of steps leads to an almost impossible-to-follow trajectory through a deterministic space of possibilities; the output is deemed random enough if the algorithm’s parameters can’t easily be reverse engineered and the next “random” number guessed.
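
As a sketch of that last point, here is a toy linear congruential generator in Java (an illustration, not how any production generator is actually built): the recipe is tiny and fully deterministic, yet the output looks random unless you can recover its constants and state.

public class LcgDemo {
    public static void main(String[] args) {
        long state = 42L;  // the seed fully determines every subsequent "random" value
        for (int i = 0; i < 8; i++) {
            // one multiply-and-add step over 64-bit integers (Knuth's MMIX constants)
            state = state * 6364136223846793005L + 1442695040888963407L;
            // report the high 32 bits, which pass casual inspection as random despite the determinism
            System.out.println(Long.toUnsignedString(state >>> 32));
        }
    }
}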

One area where we often speak of randomness is in biological evolution. Random mutations lead to change and to deleterious effects like dead-end evolutionary experiments. Or so we hypothesized. The exact mechanisms of the transmission of inheritance and of mutation were unknown to Darwin, but soon in the evolutionary synthesis notions like random genetic drift and the role of ionizing radiation and other external factors became exciting candidates for the explanation of the variation required for evolution to function. Amusingly, arguing largely from a stance that might be called a fallacy of incredulity, creationists have often seized on the disconnect they perceive between the appearance of purpose, both in our lives and in the mechanisms of biological existence, and the assumption of underlying randomness and non-directedness, treating it as evidence for the paucity of arguments from randomness.… Read the rest

Informational Chaff and Metaphors

I received word last night that our scholarship has received over 1400 applications, which definitely surprised me. I had worried that the regional restriction might be too limiting, but Agricultural Sciences were added in as part of STEM, so that probably magnified the pool.

Dan Dennett of Tufts and Deb Roy at MIT draw parallels between informational transparency in our modern world and biological mechanisms in Scientific American (March 2015, 312:3). Their article, Our Transparent Future (related video here; you have to subscribe to read the full article), starts with Andrew Parker’s theory that the Cambrian Explosion may have been tied to the availability of light as cloud cover lifted and seas became transparent. An evolutionary arms race began: prey developed sensors that could warn against predators, and predators developed sensors that could acquire more prey.

They continue drawing parallels to biological processes, including squid ink and how a similar notion, chaff, was used to mask radar signatures as aircraft became weapons of war. The explanatory mouthful of the Multiple Independently-targetable Reentry Vehicle (MIRV), with dummy warheads to counter anti-ballistic missiles, was likewise a deceptive way of reducing the risk of interception. So Dennett and Roy “predict the introduction of chaff made of nothing but megabytes of misinformation,” designed to deceive search engines about the nature of real information.

This is a curious idea. Search engine optimization (SEO) is a whole industry that combines consulting with tricks and tools to try to raise the position of vendors in the Google rankings. Being on the first page of listings can be make-or-break for retail vendors, and they pay to try to make that happen. The strategies revolve around establishing links to the vendor from individuals and other pages in order to game the PageRank algorithm.… Read the rest
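
For a rough sense of what is being gamed, here is a toy PageRank power iteration in Java (a simplified sketch; the production algorithm handles dangling pages, personalization, and much else). Adding inbound links to a page shifts stationary probability mass toward it, which is exactly what link-building schemes try to exploit:

import java.util.Arrays;

public class PageRankDemo {
    public static void main(String[] args) {
        double damping = 0.85;
        // Each page lists the pages it links to; extra links into page 0 would raise its rank further.
        int[][] outLinks = {{1}, {0, 2}, {0}, {0, 1}};
        int n = outLinks.length;
        double[] rank = new double[n];
        Arrays.fill(rank, 1.0 / n);
        for (int iter = 0; iter < 50; iter++) {             // power iteration toward the fixed point
            double[] next = new double[n];
            Arrays.fill(next, (1 - damping) / n);
            for (int p = 0; p < n; p++)
                for (int q : outLinks[p])
                    next[q] += damping * rank[p] / outLinks[p].length;
            rank = next;
        }
        System.out.println(Arrays.toString(rank));          // page 0, with the most inbound links, scores highest
    }
}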

Evolutionary Optimization and Environmental Coupling

Carl Shulman and Nick Bostrom argue about anthropic principles in “How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects” (Journal of Consciousness Studies, 2012, 19:7-8), focusing on how models that assume human-level intelligence should be easy to automate are built upon a foundation of assumptions about what easy means, a foundation shaped by observational bias (we assume we are intelligent, so the observation of intelligence seems likely).

Yet the analysis of this presumption is blocked by a prior consideration: given that we are intelligent, we should be able to achieve artificial, simulated intelligence. If this is not, in fact, true, then determining whether the assumption of our own intelligence being highly probable is warranted becomes irrelevant, because we may not be able to demonstrate that artificial intelligence is achievable anyway. About this, the authors are dismissive of any requirement for simulating the environment that is a prerequisite for organismal and species optimization against that environment:

In the limiting case, if complete microphysical accuracy were insisted upon, the computational requirements would balloon to utterly infeasible proportions. However, such extreme pessimism seems unlikely to be well founded; it seems unlikely that the best environment for evolving intelligence is one that mimics nature as closely as possible. It is, on the contrary, plausible that it would be more efficient to use an artificial selection environment, one quite unlike that of our ancestors, an environment specifically designed to promote adaptations that increase the type of intelligence we are seeking to evolve (say, abstract reasoning and general problem-solving skills as opposed to maximally fast instinctual reactions or a highly optimized visual system).

Why is this “unlikely”? The argument is that there are classes of mental function that can be compartmentalized away from the broader, known evolutionary provocateurs.… Read the rest
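
To give the quoted idea of an artificial selection environment a concrete, if cartoonish, form, here is a sketch of my own (not from the paper): an evolutionary loop in which the “environment” is nothing but a fitness function chosen by the experimenter, in this case counting 1 bits in a genome as a stand-in for whatever abstract skill is being promoted.

import java.util.Arrays;
import java.util.Random;

public class SelectionDemo {
    // The entire "environment": reward 1 bits and nothing else.
    static int fitness(boolean[] genome) {
        int f = 0;
        for (boolean b : genome) if (b) f++;
        return f;
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        int popSize = 50, length = 32;
        boolean[][] pop = new boolean[popSize][length];
        for (boolean[] g : pop)
            for (int i = 0; i < length; i++) g[i] = rng.nextBoolean();
        for (int gen = 0; gen <= 100; gen++) {
            // Truncation selection: keep the fitter half, refill with mutated copies of it.
            Arrays.sort(pop, (a, b) -> Integer.compare(fitness(b), fitness(a)));
            for (int i = popSize / 2; i < popSize; i++) {
                pop[i] = pop[i - popSize / 2].clone();
                pop[i][rng.nextInt(length)] ^= true;        // one-point mutation
            }
            if (gen % 20 == 0) System.out.println("gen " + gen + " best fitness " + fitness(pop[0]));
        }
    }
}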

Spurting into the Undiscovered Country

There was glop on the windows of the International Space Station. Outside. It was algae. How? Now that is unclear, but there is a recent tradition of arguing against abiogenesis here on Earth and arguing for ideas like panspermia, where biological material keeps raining down on the planet, carried by comets and meteorites, trapped in crystal matrices. And there may be evidence that some of that happened, if only in the local system, between Mars and Earth.

Panspermia includes as a subset the idea of Directed Panspermia whereby some alien intelligence for some reason sends biological material out to deliberately seed worlds with living things. Why? Well, maybe it is a biological prerogative or an ethical stance. Maybe they feel compelled to do so because they are in some dystopian sci-fi narrative where their star is dying. One last gasping hope for alien kind!

Directed Panspermia as an explanation for life on Earth only sets back the problem of abiogenesis to other ancient suns and other times, and implicitly posits that some of the great known achievements of life on Earth like multicellular forms are less spectacularly improbable than the initial events of proto-life as we hypothesize it might have been. Still, great minds have spent great mental energy on the topic to the point that elaborate schemes involving solar sails have been proposed so that we may someday engage in Directed Panspermia as needed. I give you:

Mautner, M; Matloff, G. (1979). “Directed panspermia: A technical evaluation of seeding nearby solar systems”. J. British Interplanetary Soc. 32: 419.

So we take solar sails and bioengineered lifeforms in tiny capsules. The solar sails are large and thin. They carry the tiny capsules into stellar formations and slow down due to friction.… Read the rest

Just So Disruptive

The “just so” story is a pejorative for evolutionary explanations of cultural or physical traits. Things are “just so” when the explanation is unfalsifiable and theoretically fitted to current observations. Less controversial and less pejorative is the essential character of the evolutionary process, in which there is no doubt that genetic alternatives will mostly fail. The ones that survive this crucible are disruptive to the status quo, sure, but these disruptions tend to be geographically or sexually isolated from the main population anyway, so they are more an expansion than a disruption; little of the competition is tooth-and-claw, and species mostly survive against the environment, not one another.

Jill Lepore of Harvard subjects business theory to a similar crucible in the New Yorker, questioning Clayton Christensen’s classic argument in The Innovator’s Dilemma that businesses are unwilling to adapt to changing markets because they are making rational business decisions to maximize profits. After analyzing core business cases from Christensen’s books, Lepore concludes that the argument holds little water and that its predictions are both poor and inapplicable to other areas like journalism and college education.

Central to her critique is her analysis of the “just so” nature of disruptive innovation:

Christensen has compared the theory of disruptive innovation to a theory of nature: the theory of evolution. But among the many differences between disruption and evolution is that the advocates of disruption have an affinity for circular arguments. If an established company doesn’t disrupt, it will fail, and if it fails it must be because it didn’t disrupt. When a startup fails, that’s a success, since epidemic failure is a hallmark of disruptive innovation. (“Stop being afraid of failure and start embracing it,” the organizers of FailCon, an annual conference, implore, suggesting that, in the era of disruption, innovators face unprecedented challenges.)

Read the rest

Trees of Lives

With a brief respite between vacationing in the canyons of Colorado and leaving tomorrow for Australia, I’ve open-sourced an eight-year-old computer program for converting one’s DNA sequences into an artistic rendering. The inputs to the program are the allelic patterns from standard DNA analysis services that use Short Tandem Repeat (STR) polymorphisms from forensic analysis, as well as poetry reflecting one’s ethnic heritage. The output is generative art: a tree that overlays the sequences with the poetry and a background rendered from the sequences.

Generative art is perhaps one of the greatest aesthetic achievements of the late 20th Century. Generative art is, fundamentally, a recognition that the core of our humanity can be understood and converted into meaningful aesthetic products–it is the parallel of effective procedures in cognitive science, and developed in lock-step with the constructive efforts to reproduce and simulate human cognition.

To use Tree of Lives, install Java 1.8, unzip the package, and edit the supplied markconfig.txt to enter your STRs and the allele variant numbers, in sequence, on line 15 of the configuration file. Lines 16+ are for lines of poetry that will be rendered on the limbs of the tree. Other configuration parameters can be discerned by examining com.treeoflives.CTreeConfig.java, and involve colors, paths, etc. Execute the program with:

java -cp treeoflives.jar:iText-4.2.0-com.itextpdf.jar com.treeoflives.CAlleleRenderer markconfig.txt
Read the rest