Word Salad Wednesday: Ergodic Cybernetic Textuality and Games

Well, the title is a mouthful, yet it relates to an article in The Guardian concerning the literary significance of role-playing games. Norway’s Espen Aarseth coined the term “ergodic” to describe literary systems that evolve according to the choices of the reader/player.

First, this is just incorrect. Ergodic has a very specific meaning in thermodynamics: a system is ergodic when, over time, its trajectory explores all of its accessible states, so that time averages and ensemble averages coincide. Aarseth takes the Greek roots too literally, choosing to equate ergon (work) and hodos (path) with the temporal evolution of hypertexts (where one chooses the next step) or RPGs (where players choose the next steps but there may be random decisions dictated by dice rolls). He also likes the term “cybernetic,” which literally meant “pilot” and was given its modern meaning by Norbert Wiener, for whom it refers to the feedback-driven control of a system that stabilizes it against environmental signals.

Neither of these relates to RPGs or hypertext per se, nor to the general class of reader/engager-based control of media access or fiction. The concept of generative art might be more apt, though it should be modified to include the guidance of the reader. Oddly, guided evolution or change might be the best metaphor altogether, leading us to something like Lamarckian Literature (though that is too culturally loaded, perhaps).

Or we could just say “games.” After all, these are games, aren’t they?… Read the rest

Non-Cognitivist Trajectories in Moral Subjectivism

When I say that “greed is not good,” the everyday mind creates a series of images and references, from Gordon Gekko’s inverse proposition to general feelings about inequality and our complex motivations as people. There is a network of feelings and, perhaps, some facts that might be recalled or searched for to justify the position. As a moral claim, though, it might most easily be considered connotative rather than cognitive in that it suggests a collection of secondary emotional expressions and networks of ideas that support or deny it.

I mention this (the theories consonant with this kind of reasoning are called non-cognitivist and, variously, emotivist and expressivist) because there is a very real tendency to reduce moral ideas to objective versus subjective, especially in atheist-theist debates. I recently watched one such debate between Matt Dillahunty and an Orthodox priest where the standard litany revolved around claims about the objectivity versus subjectivity of moral truth. The objectivist position is often portrayed as something like, “without God there is no basis for morality. God provides moral absolutes. Therefore atheists are immoral.” The atheists inevitably reply that the scriptural God is a horrific demon who slaughters His creation and condones slavery and other ideas that are morally repugnant to the modern mind. And then the religious descend into what might be called “advanced apologetics” that try to diminish, contextualize, or dismiss such objections.

But we are fairly certain, regardless of the tradition, that there are inevitable nuances to any kind of moral structure. “Thou shalt not kill” gets revised to “thou shalt not murder.” So we have to parse manslaughter in pursuit of a greater good against any rules-based approach to such a simplistic commandment.… Read the rest

A Critique of Pure Randomness

The notion of randomness brings about many interesting considerations. For statisticians, randomness is a series of events with chances that are governed by a distribution function. In everyday parlance, equally likely means random, while an even more common semantics is based on both how unlikely and how unmotivated an event might be (“That was soooo random!”). In physics, there are only certain physical phenomena that can be said to be truly random, including the probability of a given nucleus decomposing into other nuclei via fission. The exact position of a quantum thingy is equally random when its momentum is nailed down, and vice-versa. Vacuums have a certain chance of spontaneously creating matter, too, and that chance appears to be perfectly random. In algorithmic information theory, a random sequence of bits is a sequence that can’t be represented by a smaller descriptive algorithm–it is incompressible. Strangely enough, we simulate random number generators using a compact algorithm with a complicated series of steps that lead to an almost-impossible-to-follow trajectory through a deterministic space of possibilities; it’s acceptable to be random enough that the algorithm parameters can’t be easily reverse engineered and the next “random” number guessed.
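To make the contrast between “random enough” and algorithmically random concrete, here is a minimal sketch of a linear congruential generator, one of the simplest pseudo-random schemes (the constants are the textbook Numerical Recipes values; nothing here is any particular library’s implementation):

```python
# A minimal sketch of the idea that "pseudo-random" sequences come from a
# compact deterministic algorithm: a linear congruential generator (LCG).
# Anyone who recovers (a, c, m) and one output can predict every
# subsequent "random" number.

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Yield an endless stream of pseudo-random integers in [0, m)."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

gen = lcg(seed=42)
sample = [next(gen) for _ in range(5)]
print(sample)  # looks haphazard, but is entirely determined by the seed

# In the algorithmic-information sense this sequence is *not* random:
# the few lines above are a description far shorter than the gigabytes
# of output they can generate, so the output is highly compressible.
```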

One area where we often speak of randomness is in biological evolution. Random mutations lead to change and to deleterious effects like dead-end evolutionary experiments. Or so we hypothesized. The exact mechanisms of inheritance and mutation were unknown to Darwin, but with the evolutionary synthesis, notions like random genetic drift and the role of ionizing radiation and other external factors became exciting candidates for explaining the variation required for evolution to function. Amusingly, arguing largely from a stance that might be called a fallacy of incredulity, creationists have often seized on the disconnect they perceive between the appearance of purpose, both in our lives and in the mechanisms of biological existence, and the assumption of underlying randomness and non-directedness, treating that disconnect as evidence of the paucity of arguments from randomness.… Read the rest

Informational Chaff and Metaphors

I received word last night that our scholarship has received over 1400 applications, which definitely surprised me. I had worried that the regional restriction might be too limiting, but Agricultural Sciences were added in as part of STEM, so that probably magnified the pool.

Dan Dennett of Tufts and Deb Roy at MIT draw parallels between informational transparency in our modern world and biological mechanisms in Scientific American (March 2015, 312:3). Their article, Our Transparent Future (related video here; you have to subscribe to read the full article), starts with Andrew Parker’s theory that the Cambrian Explosion may have been tied to the availability of light as cloud cover lifted and seas became transparent. An evolutionary arms race began for the development of sensors that could warn against predators, and of predators that could acquire more prey.

They continue, drawing parallels to biological processes, including squid ink and how a similar notion, chaff, was used to mask radar signatures as aircraft became weapons of war. The explanatory mouthful of the Multiple Independently targetable Reentry Vehicle (MIRV), with dummy warheads to counter anti-ballistic missiles, was likewise a deceptive way of reducing the risk of interception. So Dennett and Roy “predict the introduction of chaff made of nothing but megabytes of misinformation,” designed to deceive search engines about the nature of real information.

This is a curious idea. Search engine optimization (SEO) is a whole industry that combines consulting with tricks and tools to try to raise vendors’ positions in the Google rankings. Being on the first page of listings can be make-or-break for retail vendors, and they pay to try to make that happen. The strategies are built around establishing links to the vendor from individuals and other pages in an effort to game the PageRank algorithm.… Read the rest
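As a rough illustration of why manufacturing inbound links can move the needle, here is a toy power-iteration PageRank over a made-up three-page graph; it follows the simplified textbook formulation, not Google’s production ranking system:

```python
# Toy power-iteration PageRank over a tiny link graph, to show why
# adding inbound links can shift rankings. Simplified textbook version.

links = {                      # page -> pages it links to
    "vendor": ["blog"],
    "blog": ["vendor", "news"],
    "news": ["vendor"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / max(len(outgoing), 1)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

print(pagerank(links))  # more links pointing at "vendor" raise its score
```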

Evolutionary Optimization and Environmental Coupling

Carl Shulman and Nick Bostrom argue about anthropic principles in “How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects” (Journal of Consciousness Studies, 2012, 19:7-8), focusing on how arguments that human-level intelligence should be easy to automate rest on assumptions about what “easy” means, assumptions shaped by observational bias (we assume we are intelligent, so the observation of intelligence seems likely).

Yet the analysis of this presumption is blocked by a prior consideration: given that we are intelligent, we should be able to achieve artificial, simulated intelligence. If this is not, in fact, true, then it becomes irrelevant whether the assumption of our own intelligence being highly probable is warranted, because we may not be able to demonstrate that artificial intelligence is achievable anyway. About this, the authors are dismissive of any requirement for simulating the environment that organisms and species were optimized against:

In the limiting case, if complete microphysical accuracy were insisted upon, the computational requirements would balloon to utterly infeasible proportions. However, such extreme pessimism seems unlikely to be well founded; it seems unlikely that the best environment for evolving intelligence is one that mimics nature as closely as possible. It is, on the contrary, plausible that it would be more efficient to use an artificial selection environment, one quite unlike that of our ancestors, an environment specifically designed to promote adaptations that increase the type of intelligence we are seeking to evolve (say, abstract reasoning and general problem-solving skills as opposed to maximally fast instinctual reactions or a highly optimized visual system).

Why is this “unlikely”? The argument is that there are classes of mental function that can be compartmentalized away from the broader, known evolutionary provocateurs.… Read the rest
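As a toy illustration of the “artificial selection environment” the quoted passage imagines, the sketch below evolves bit-string “genomes” against an abstract scoring function rather than any simulated physics; the genome encoding and the pattern-matching fitness are invented stand-ins, not anything proposed by the authors:

```python
# Toy evolutionary optimization in a designed selection environment:
# fitness is an abstract proxy task, not a simulation of ancestral ecology.
import random

GENOME_LEN = 32

def fitness(genome):
    # Reward agreement with a simple alternating pattern, standing in for
    # "abstract problem-solving" rather than, say, a simulated visual system.
    target = [i % 2 for i in range(GENOME_LEN)]
    return sum(1 for g, t in zip(genome, target) if g == t)

def evolve(pop_size=50, generations=100, mutation_rate=0.02):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(GENOME_LEN)
            child = a[:cut] + b[cut:]                     # one-point crossover
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]                      # mutation
            children.append(child)
        population = children
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "/", GENOME_LEN)
```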

The Rise and Triumph of the Bayesian Toolshed

In Asimov’s Foundation, psychohistory is the mathematical treatment of history, sociology, and psychology to predict the future of human populations. Asimov was inspired by Gibbon’s Decline and Fall of the Roman Empire, which postulated that Roman society was weakened by Christianity’s focus on the afterlife and by the loss of the pagan attachment to Rome as an ideal that needed defending. Psychohistory detects seeds of ideas and social movements that are predictive of the end of the galactic empire, creating foundations to preserve human knowledge against a coming Dark Age.

Applying statistics and mathematical analysis to human choices is a core feature of economics, but Richard Carrier’s massive tome, On the Historicity of Jesus: Why We Might Have Reason for Doubt, may be one of the first comprehensive applications to historical analysis (following his other related work). Amusingly, Carrier’s thesis dovetails with Gibbon’s own suggestion, though there is a certain irony to a civilization dying because of a fictional being.

Carrier’s methods use Bayesian analysis to approach a complex historical problem that has a remarkably impoverished collection of source material. First-century A.D. (C.E. if you like; I agree with Carrier that any baggage about the convention is irrelevant) sources are simply non-existent or sufficiently contradictory that the background knowledge of paradoxography (tall tales), rampant messianism, and general political happenings at the time leads to a likelihood that Jesus was made up. Carrier constructs the argument around equivalence classes of prior events that then reduce or strengthen the evidential materials (a posteriori). And he does this without ablating the richness of the background information. Indeed, his presentation and analysis of works like Inanna’s Descent into the Underworld and its relationship to the Ascension of Isaiah are both didactic and beautiful in capturing the way ancient minds seem to have worked.… Read the rest
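The machinery itself is compact. Here is a toy Bayes update with invented numbers (placeholders, not Carrier’s estimates) showing how a reference-class prior and the relative likelihood of the evidence combine into a posterior:

```python
# Toy Bayes update with made-up figures, to show the shape of the machinery:
# a prior from a reference class of claims, then evidence that multiplies
# the probability up or down.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """P(hypothesis | evidence) via Bayes' theorem."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Hypothetical example: a 1-in-3 prior (say, from how often similar figures
# in a reference class turn out to be historical), and a piece of evidence
# that is twice as expected if the hypothesis is false as if it is true.
p = posterior(prior=1/3, likelihood_if_true=0.4, likelihood_if_false=0.8)
print(round(p, 3))  # the evidence pulls the posterior below the prior
```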

Active Deep Learning

Deep Learning methods that use auto-associative neural networks to pre-train (with bottlenecking methods to ensure generalization) have recently been shown to perform as well as, and even better than, human beings at certain tasks like image categorization. But what is missing from the proposed methods? There seem to be a range of challenges that revolve around temporal novelty and sequential activation/classification problems like those that occur in natural language understanding. The most recent achievements are more oriented around relatively static data presentations.
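For readers who want the recipe spelled out, here is a minimal, illustrative PyTorch sketch of bottlenecked auto-associative pre-training of the kind described above; the layer sizes and training loop are arbitrary placeholders, not any published architecture:

```python
# Minimal sketch: an autoencoder with a narrow bottleneck is trained to
# reconstruct its input, and the compressed encoder is then reused as the
# front end of a supervised classifier.

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                        nn.Linear(128, 32), nn.ReLU())   # 32-unit bottleneck
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(),
                        nn.Linear(128, 784))

autoencoder = nn.Sequential(encoder, decoder)
reconstruction_loss = nn.MSELoss()
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

x = torch.rand(64, 784)                 # stand-in batch of flattened images
for _ in range(100):                    # unsupervised pre-training phase
    optimizer.zero_grad()
    loss = reconstruction_loss(autoencoder(x), x)
    loss.backward()
    optimizer.step()

# The bottlenecked encoder, having been forced to generalize, is then reused
# (and fine-tuned) under a small supervised head for the actual task.
classifier = nn.Sequential(encoder, nn.Linear(32, 10))
```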

Jürgen Schmidhuber revisits the history of connectionist research (dating to the 1800s!) in his October 2014 technical report, Deep Learning in Neural Networks: An Overview. This is one comprehensive effort at documenting the history of this reinvigorated area of AI research. What is old is new again, enhanced by achievements in computing that allow for larger and larger scale simulation.

The conclusions section has an interesting suggestion: what is missing so far is the sensorimotor activity loop that allows for the active interrogation of the data source. Human vision roams over images while DL systems ingest the entire scene. And the real neural systems have energy constraints that lead to suppression of neural function away from the active neural clusters.
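To make the contrast concrete, here is a rough NumPy toy of what an “active interrogation” loop might look like, with a crude variance-based saliency heuristic choosing each glimpse; it is a sketch of the general idea, not anything proposed in Schmidhuber’s report:

```python
# Instead of feeding the whole scene forward at once, an agent repeatedly
# picks a small glimpse, guided by a simple "interestingness" score.
import numpy as np

def glimpse_sequence(image, patch=8, steps=5):
    """Return a list of (row, col, patch) fixations over the image."""
    fixations = []
    img = image.copy()
    for _ in range(steps):
        best_score, best_rc = -1.0, (0, 0)
        for r in range(0, img.shape[0] - patch + 1, patch):
            for c in range(0, img.shape[1] - patch + 1, patch):
                score = img[r:r + patch, c:c + patch].var()
                if score > best_score:
                    best_score, best_rc = score, (r, c)
        r, c = best_rc
        fixations.append((r, c, img[r:r + patch, c:c + patch].copy()))
        img[r:r + patch, c:c + patch] = img.mean()   # inhibition of return
    return fixations

scene = np.random.rand(64, 64)
for r, c, g in glimpse_sequence(scene):
    print("fixated at", (r, c), "mean", round(float(g.mean()), 3))
```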

Read the rest

The Great Crustacean

David Foster Wallace’s Joseph Frank’s Dostoevsky in Consider the Lobster is worth reading if for nothing else than the following two paragraphs:

The big thing that makes Dostoevsky invaluable for American readers and writers is that he appears to possess degrees of passion, conviction, and engagement with deep moral issues that we—here, today—cannot or do not permit ourselves. Joseph Frank does an admirable job of tracing out the interplay of factors that made this engagement possible—[Dostoevsky]’s own beliefs and talents, the ideological and aesthetic climates of his day, etc. Upon his finishing Frank’s books, though, I think that any serious American reader/writer will find himself driven to think hard about what exactly it is that makes many of the novelists of our own place and time look so thematically shallow and lightweight, so morally impoverished, in comparison to Gogol or Dostoevsky (or even to lesser lights like Lermontov and Turgenev). Frank’s bio prompts us to ask ourselves why we seem to require of our art an ironic distance from deep convictions or desperate questions, so that contemporary writers have to either make jokes of them or else try to work them in under cover of some formal trick like intertextual quotation or incongruous juxtaposition, sticking the really urgent stuff inside asterisks as part of some multivalent defamiliarization-flourish or some such shit.

Part of the explanation for our own lit’s thematic poverty obviously includes our century and situation. The good old modernists, among their other accomplishments, elevated aesthetics to the level of ethics—maybe even metaphysics—and Serious Novels after Joyce tend to be valued and studied mainly for their formal ingenuity. Such is the modernist legacy that we now presume as a matter of course that “serious” literature will be aesthetically distanced from real lived life.

Read the rest

On Killing Kids

Mark S. Smith’s The Early History of God is a remarkable piece of scholarship. I was recently asked what I read for fun and had to admit that I have been on a trajectory towards reading books that have, on average, more footnotes than text. J.P. Mallory’s In Search of the Indo-Europeans kindly moves the notes to the end of the volume. Smith’s Chapter 5, Yahwistic Cult Practices, and particularly Section 3, The mlk sacrifice, are illuminating on the widespread belief that killing children could propitiate the gods. This practice was likely widespread among the Western Semitic peoples, including the Israelites and Canaanites (Smith prefers “Western Semitic” to lump the two together ca. 1200 BC because they appear to have been culturally the same, possibly made distinct only after the compilation of the OT following the Exile).

I recently argued with some young street preachers about violence and horror committed in Yahweh’s name and by His command while waiting outside a rock shop in Old Sacramento. Human sacrifice came up, too, with the apologetics being that, despite the fact that everyone was bad back then, the Chosen People did not perform human sacrifice and were therefore marginally better than the other people around them. They passed quickly over the topic of slavery, which was wise for rhetorical purposes, because slavery was widespread and acceptable. I didn’t remember the particulars of the examples of human sacrifice in the OT, but recalled them broadly; they responded that there were translation and interpretation errors with “burnt offering” and “fire offerings of first borns,” which, of course, immediately contradicted their assertion of the acceptance and perfection of the scriptures.

More interesting, though, is the question of why human sacrifice might have been so pervasive, whether among Yahwists, Carthaginians, or Aztecs?… Read the rest

Alien Singularities and Great Filters

Nick Bostrom at Oxford’s Future of Humanity Institute takes on Fermi’s question “Where are they?” in a new paper on the possibility of life on other planets. The paper posits probability filters (Great Filters) that may have existed in the past or might be still to come and that limit the likelihood of the outcome that we currently observe: our own, ahem, intelligent life. If a Great Filter existed in our past—say, the event of abiogenesis or the prokaryote-to-eukaryote transition—then we can somewhat explain the lack of alien contact thus far: our existence is of very low probability. Moreover, we can expect not to find life on Mars.

If, however, the Great Filter exists in our future, then we might see life all over the place (including the theme of his paper, Mars). Primitive life is abundant, but the Great Filter is somewhere in our future, where we annihilate ourselves, thus explaining why Fermi’s “They” are not here while little strange things thrive on Mars and beyond. It is only advanced life that gets squeezed out by the Filter.
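The arithmetic behind this reasoning is simple enough to write down. Below is a back-of-envelope sketch with entirely invented step probabilities (none of them are Bostrom’s figures): the expected number of observable civilizations is a product of per-step probabilities, so a single small factor anywhere in the chain is enough to empty the sky:

```python
# Back-of-envelope Great Filter arithmetic with placeholder probabilities.
steps = {
    "abiogenesis": 1e-2,
    "prokaryote_to_eukaryote": 1e-3,
    "multicellularity": 1e-1,
    "intelligence": 1e-2,
    "survives_its_own_technology": 1e-1,   # a possible *future* filter
}

p_civilization = 1.0
for probability in steps.values():
    p_civilization *= probability

habitable_planets = 1e9                    # also a placeholder
print(p_civilization * habitable_planets)  # ~1 here; shrink any one factor and it falls toward zero
```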

Bostrom’s Simulation Hypothesis provides a potential way out of this largely pessimistic perspective. If there is a very high probability that civilizations achieve sufficient simulation capabilities that they can create artificial universes prior to conquering the vast interstellar voids needed to move around and signal with adequate intensity, it is equally possible that their “exit strategy” is a benign incorporation into artificial realities that prevents corporeal destruction by other means. It seems unlikely that every advanced civilization would “give up” physical being under these circumstances (in Teleology there are hold-outs from the singularity though they eventually die out), which would mean that there might remain a sparse subset of active alien contact possibilities.… Read the rest