Bostrom and Computational Irreducibility


Nick Bostrom elevated philosophical concerns to the level of the popular press with his paper “Are You Living in a Computer Simulation?”, which argues that:

at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.

A critical prerequisite of (3) is that human brains can be simulated in some way. A co-requisite is that the environment must be at least partially simulated in order for the brain simulations to believe in the sensorium they experience:

If the environment is included in the simulation, this will require additional computing power – how much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible, unless radically new physics is discovered. But in order to get a realistic simulation of human experience, much less is needed – only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities.

Bostrom’s efforts to minimize the required information content don’t ring true, however. In order for a perceived universe to provide even “local” consistency, large-scale phenomena must be simulated with perfect accuracy. Even if, as Bostrom suggests, noticed inconsistencies can be rewritten in the brains of the simulated individuals, those inconsistencies would eventually have to be resolved into a consistent universe.

Further, creating local consistency without emulating quantum-level phenomena requires first computing the macroscopic phenomena that would be a consequence of those quantum events.… Read the rest

Sex and Error

Just in time for Valentine’s Day, the introduction to my (foster) father’s 1991 Animal Behavior treatise, On the Role of Males (don’t worry guys, we get to expurgate genetic errors):

The value of males to a species has often been regarded as enigmatic. An all-female, parthenogenetic population has significant theoretical advantages over a population that must reproduce sexually. But if sexuality is to be advocated as highly advantageous to the species, the questions surrounding gender differentiation must not be confused with the questions concerning the value of sex. Two distinct genders are not necessary to engage sexual recombination. A broad array of hypotheses for the evolution and persistence of sexuality appears in Michod & Levin (1988), yet for all of the postulated arguments, males are unnecessary. While purpose cannot always be easily ascribed to a specific trait or behavior, the converse can be argued with confidence. The widespread, common existence of a specific trait, behavior or caste insures that the persistence of the attribute possesses some fundamental purpose.

Protracted demonstrations of competitive vigor are common in males, especially so in polygynous species. Darwin (1874) outlined in detail the virtual ubiquity of male aggressive “pugnacity” in animals, concluding that “It is incredible that all this should be purposeless” (1874, p. 615). The hypotheses to be argued here are threefold: (1) males are an auxiliary, relatively sacrificial sex of enhanced fragility, whose demonstrations of competitive vigor operate to expose, exaggerate, and expurgate significant gene error from the germline, (2) the aggressively competitive behavior of polygynous males is but one component of a hierarchy of genetic information assurance mechanisms that must be inevitably evolved, and (3) gene defect expurgation from the germline greatly accelerates the evolutionary optimization, and thus the competitiveness, of the species.

Read the rest

Radical Triangulation

Donald Davidson argued that descriptive theories of semantics suffered from untenable complications that could, in turn, be solved by a holistic theory of meaning. Holism, in this sense, arises from the interdependence of words and phrases within a complex linguistic interchange. He proposed “triangulation” as a solution, where we zero in on a tentatively held belief about a word based on other beliefs about ourselves, about others, and about the world we think we know.

This seems daringly obvious, but it is merely the starting point for the hard work of specifying what mechanisms and steps are involved in fixing the meaning of words through triangulation. There are certainly some predispositions that are innate and fit nicely with triangulation. These are subsumed under the Principle of Charity and even the notion of the Intentional Stance in how we regard others like us.

Fixing meaning via model-making has some curious results. The language used to discuss aesthetics and art tends to borrow from other fields (“The narrative of the painting,” “The functional grammar of the architecture.”) Religious and spiritual terminology often has extremely porous models: I recently listened to Episcopalians discuss the meaning of “grace” for almost an hour with great glee but almost no progress; it was the belief that they were discussing something of ineffable greatness that was moving to them. Even seemingly simple scientific ideas become elaborately complex for both children and adults: we begin with atoms as billiard balls that mutate into mini solar systems that become vibrating clouds of probabilistic wave-particles around groups of properties in energetic suspension by virtual particle exchange.

Can we apply more formal models to the task of staking out this method of triangulation?… Read the rest

Puritanical Warfare

The LA Times sheds additional light on the complex question of America’s founding and the religious ideals of historical figures in this piece.  Author John M. Barry described Roger Williams breaking away from the Massachusetts Pilgrims to found Rhode Island, quoting his view of religious liberty:

[even] “the most Paganish, Jewish, Turkish, or Antichristian consciences and worships” [should be allowed to pray or not pray]

“forced worship stinks in God’s nostrils.”

Williams is notable because he stands in stark contrast to John Winthrop, who is the source of the “city upon a hill” image that is a common reference point in aspirational presidential speeches:

For we must Consider that we shall be as a City upon a Hill, the eyes of all people are upon us

Yet, for all that shiny exceptionalism, Puritans believed slavery was justified by the Old Testament, harassed and executed Quakers, reviled one another as heretics, and believed that God had killed Native Americans using smallpox to give the land to the Puritans:

But for the natives in these parts, God hath so pursued them, as for 300 miles space the greatest part of them are swept away by smallpox which still continues among them. So as God hath thereby cleared our title to this place, those who remain in these parts, being in all not 50, have put themselves under our protection.

The goal of a GOP candidate using the “hill” quote is to invoke the ghost of Reagan. Sadly, the important historical lessons about tolerance, and the evolutionary seeds of our modern understanding of the ethics of freedom, get lost when the quote becomes jingoistic.… Read the rest

Learning around the Non Sequiturs

While Solomonoff Induction and its conceptual neighbors have not yet found application in enhancing human reasoning, there are definitely areas where they have potential value. Automatic, unsupervised learning of sequential patterns is an intriguing area of application. It also fits closely with the sequence inferencing problem that is at the heart of algorithmic information theory.

Pragmatically, the problem of how children learn the interchangeability of words, the basic operation of grammaticality, is one area where this kind of system might be useful. Given a sequence of words or symbols, what sort of information is available for figuring out the grammatical groupings? Not much beyond memories of repetitions, often inferred implicitly.

Could we apply some variant of Solomonoff Induction at this point? Recall that we want to find the most compact explanation for the observed symbol stream. Recall also that the form of the explanation is a computer program of some sort that consists of logical functions. It turns out that a procedure that, for every possible sequence, finds the absolutely most compact program is uncomputable. “Uncomputable” (or incomputable) here is a precise mathematical result: no algorithm can search through all candidate programs and guarantee finding the shortest one, because deciding whether an arbitrary candidate program even halts is itself impossible (the halting problem). Being uncomputable is not a death sentence, however. We can come up with approximate methods that follow the same spirit, because any method that incrementally compresses the explanatory program gets closer to the hypothetical best program.
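To make “incrementally compresses” concrete, here is a minimal sketch of my own, in the spirit of grammar-based compressors like Sequitur (but not Nevill-Manning and Witten’s full online algorithm): repeatedly factor the most frequent adjacent pair of symbols into a new grammar rule until no pair repeats.

```python
from collections import Counter

def compress(seq):
    """Greedy offline digram compression: replace the most frequent
    adjacent pair with a fresh nonterminal until no pair repeats."""
    seq = list(seq)
    grammar = {}   # nonterminal -> the pair of symbols it stands for
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:
            break  # no repeated digram left to factor out
        nonterminal = f"R{next_id}"
        next_id += 1
        grammar[nonterminal] = pair
        # Rewrite the sequence, substituting the nonterminal left-to-right.
        rewritten, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                rewritten.append(nonterminal)
                i += 2
            else:
                rewritten.append(seq[i])
                i += 1
        seq = rewritten
    return seq, grammar
```

Each replacement shortens the sequence while the grammar records how to reconstruct it; the combined size of the residual sequence plus the grammar serves as an approximate description length for the original stream.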

Sequitur by Nevill-Manning and Witten is an example of a procedure that approximates Algorithmic Information Theory optimization for string sequences.… Read the rest

Adaptive Ethics via iBooks Author

For fun, I decided to try writing a partial post using Apple’s iBooks Author. The application runs on Mac OS X Lion and is available for free. It appears to be derivative of Keynote, which explains Apple’s rapid development of the authoring tool.

There are some limitations, though. I couldn’t embed equations from Word for Mac 2011 without converting them into images. It also only publishes to the iBookstore, although you can export to PDF (as below). There are few PDF export options, however, and the metadata and labeling include Apple logos.

Tearing apart the .iba format via unzip showed a collection of .jpg and .tiff images, a binary color array, and an .xml specification of the project. Fairly simple, but not including the compiled .epub file that iBookstore generally takes.

Total elapsed time: 1 hour (including download/installation). With improvements to the software and with more experience, that should be halved.

Read the rest

Evolutionary Oneirology

I was recently contacted by a startup that is developing a dream-recording app. The startup wants to automatically extract semantic relationships and correlate the narratives that dreamers type into their phones. I assume that the goal is to help the user try to understand their dreams. But why should we assume that dreams are understandable? We now know that waking cognition is unreliable, that meta-cognitive strategies influence decision making, that base rate fallacies are the norm, that perceptions are shaped by apophenia, that framing and language choices dominate decision-making under ambiguity, and that moral judgments are driven by impulse and feeling rather than any rational calculus.

Yet there are some remarkable consistencies in dream content that have led to elaborate theorization down through the ages. Dreams, by being cryptic, want to be explained. But the content of dreams, when sorted out, leads us less to Kekulé’s ring or to Freud and Jung, and more to asking why there is so much anxiety present in dreams. The Evolutionary Theory of Dreaming by Finnish researcher Revonsuo tries to explain the overrepresentation of threats and fear in dreams by suggesting that the brain is engaged in a process of reliving conflict events as a form of implicit learning. Evidence in support of this theory includes experimental observations that threatening dreams increase in frequency for children who experienced trauma in childhood, combined with the cross-cultural persistence of threatening dream content (and likely cross-species persistence, as anyone who has watched a cat twitch in deep sleep suspects). To date, however, the question of whether these dream cycles result in learning or improved responses to future conflict remains unanswered.

I turned down consulting for the startup because of time constraints, but the topic of dream anxiety comes back to me every few years when I startle out of one of those recurring dreams where I have not studied for the final exam and am desperately pawing through a sheaf of empty paper trying to find my notes.… Read the rest

Jingles and Thought

In the last post the issue of inductive inference was the focus, but human cognition, as Luke pointed out, is not just fallible but is unreliable by its very nature. Recent work has been revelatory in the ways our minds fail us when confronted with new information, and in the many ways that our experiences influence thought.

The idea that language and thought are tightly intertwined, and that language may influence thought, has been of interest since the Sapir-Whorf hypothesis and its many flavors of snow, but became academically disreputable in the mid-20th Century as Chomsky’s universalism came to dominate linguistics. Yet the notion that language and thought are intertwined has continued to be investigated, and recent work shows remarkable interactions.

Watching the messaging of the GOP primaries between Gingrich and Romney (with Paul’s Libertarian dogs barking in the background) reminded me that there are other meta-cognitive strategies at work in our minds that are being exploited by the negative ads as well as the anti-Obama sentiment. What is interesting is that some of the easiest ways of building positive sentiment are not being exploited. For example, a memorable jingle might be cheesy and retro, but it would exploit the tendency for rhyming statements to be judged more truthful (the rhyme-as-reason effect). Using big, bold fonts for the message also helps.

I think it is time for the political jingle to return.… Read the rest

Solomonoff Induction, Truth, and Theism

LukeProg of CommonSenseAtheism fame created a bit of a row when he declared that Solomonoff Induction largely rules out theism, continuing on to expand on the theme:

If I want to pull somebody away from magical thinking, I don’t need to mention atheism. Instead, I teach them Kolmogorov complexity and Bayesian updating. I show them the many ways our minds trick us. I show them the detailed neuroscience of human decision-making. I show them that we can see (in the brain) a behavior being selected up to 10 seconds before a person is consciously aware of ‘making’ that decision. I explain timelessness.

There were several reasons for the CSA community to get riled up about these statements and they took on several different forms:

  • The focus on Solomonoff Induction/Kolmogorov Complexity is obscurantist in using radical technical terminology.
  • The author is ignoring deductive arguments that support theist claims.
  • The author has joined a cult.
  • Inductive claims based on Solomonoff/Kolmogorov are no different from Reasoning to the Best Explanation.

I think all of these critiques are partially valid (though I don’t think there are any good reasons for thinking theism is true), but the fourth one, which I contributed, was a personal realization for me. Though I have been fascinated with the topics related to Kolmogorov since the early 90s, I don’t think they are directly applicable to the topic of theism/atheism. Whether we are discussing the historical validity of Biblical claims or the logical consistency of extensions to notions of omnipotence or omniscience, I can’t think of a way that these highly mathematical concepts have direct application.
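One way to see what these concepts do offer in practice: Kolmogorov complexity itself is uncomputable, but any off-the-shelf compressor yields a crude, computable upper bound on it. The following toy sketch is my own illustration, not part of the original exchange:

```python
# Approximate the relative complexity of strings via compressed length.
# zlib is a stand-in for the uncomputable ideal; its output length only
# upper-bounds the true Kolmogorov complexity.
import random
import zlib

def approx_complexity(s: str) -> int:
    """Length in bytes of the zlib-compressed string."""
    return len(zlib.compress(s.encode("utf-8")))

patterned = "01" * 500  # highly regular: 1000 characters, tiny description
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(1000))  # irregular

# The regular string admits a much shorter description than the noisy one,
# mirroring Occam's preference for the simpler explanation.
```

Both strings are the same length, but the compressor finds a short description only for the patterned one; that asymmetry is the operational content of “prefer the simpler explanation.”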

But what are we talking about? Solomonoff Induction, Kolmogorov Complexity, Minimum Description Length, Algorithmic Information Theory, and related ideas are formalizations of the idea of William of Occam (variously Ockham) known as Occam’s Razor: given multiple explanations of a given phenomenon, one should prefer the simpler explanation.… Read the rest

Experimental Psychohistory

Kalev Leetaru at UIUC highlights the use of sentiment analysis to retrospectively predict the Arab Spring using Big Data in this paper. Dr. Leetaru took English transcriptions of Egyptian press sources and looked at aggregate measures of positive and negative sentiment terminology. The sentiment terminology is fairly simple in this case, consisting primarily of positive and negative adjectives, though the measure could be made more discriminating by checking for negative modifiers (“not happy,” “less than happy,” etc.). Leetaru points out some of the other follies that can arise from semi-intelligent broad measures like this one applied too liberally:

It is important to note that computer–based tone scores capture only the overall language used in a news article, which is a combination of both factual events and their framing by the reporter. A classic example of this is a college football game: the hometown papers of both teams will report the same facts about the game, but the winning team’s paper will likely cast the game as a positive outcome, while the losing team’s paper will have a more negative take on the game, yielding insight into their respective views towards it.

This is an old issue in computational linguistics. In the “pragmatics” of automatic machine translation, for example, the classic problem is how to translate a reference to fighters in a rebellion. They could be rendered as anything from “terrorists” to “freedom fighters,” depending on the perspective of the translator and the original writer.
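The kind of tone measure Leetaru describes, counting positive and negative adjectives with a check for negative modifiers, can be sketched as follows (the word lists here are illustrative placeholders of mine, not Leetaru’s actual lexicon):

```python
# Minimal lexicon-based tone scoring with single-word negation handling.
# The word sets are toy examples, not a real sentiment lexicon.
POSITIVE = {"happy", "stable", "hopeful", "calm"}
NEGATIVE = {"angry", "violent", "unstable", "fearful"}
NEGATORS = {"not", "never", "no"}

def tone(text: str) -> int:
    """Sum +1 per positive word and -1 per negative word, flipping
    polarity when the immediately preceding word is a negator."""
    score = 0
    words = text.lower().split()
    for i, raw in enumerate(words):
        word = raw.strip(".,!?")
        if word in POSITIVE or word in NEGATIVE:
            sign = 1 if word in POSITIVE else -1
            if i > 0 and words[i - 1].strip(".,!?") in NEGATORS:
                sign = -sign  # "not happy" counts as negative
            score += sign
    return score
```

On a phrase like “the crowd was not happy,” the negator flips the polarity of “happy,” yielding a negative score; real systems extend this to longer-range modifiers such as “less than happy.”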

In Leetaru’s work, the end result was an unusually high churn of negative-going sentiment as the events of the Egyptian revolution unfolded.

But is it repeatable or generalizable? I’m skeptical. The rise of social media, enhanced government suppression of the media, spamming, disinformation, rapid technological change, distributed availability of technology, and the evolving government understanding of social dynamics can all significantly smear-out the priors associated with the positive signal relative to the indeterminacy of the messaging.… Read the rest