Humbly Evolving in a Non-Simulated Universe

The New York Times seems to be catching up to me, first with an interview of Alvin Plantinga by Gary Gutting in The Stone on February 9th, and then with notes on Bostrom’s Simulation Hypothesis in the Sunday Times.

I didn’t see anything new in the Plantinga interview, but it did send me back to my previous argument that adaptive fidelity combined with adaptive plasticity must raise the probability of rationality at a rate much greater than any contribution from “deceptive” or even mildly biased cognitive and perceptual faculties. Worth reading is Branden Fitelson and Elliott Sober’s very detailed analysis of Plantinga’s Evolutionary Argument Against Naturalism (EAAN), here. Most interesting are the opening paragraphs of Section 3, which I reproduce below because they make a critical point that should surprise no one but often does:

Although Plantinga’s arguments don’t work, he has raised a question that needs to be answered by people who believe evolutionary theory and who also believe that this theory says that our cognitive abilities are in various ways imperfect. Evolutionary theory does say that a device that is reliable in the environment in which it evolved may be highly unreliable when used in a novel environment. It is perfectly possible that our mental machinery should work well on simple perceptual tasks, but be much less reliable when applied to theoretical matters. We hasten to add that this is possible, not inevitable. It may be that the cognitive procedures that work well in one domain also work well in another; Modus Ponens may be useful for avoiding tigers and for doing quantum physics.

Anyhow, if evolutionary theory does say that our ability to theorize about the world is apt to be rather unreliable, how are evolutionists to apply this point to their own theoretical beliefs, including their belief in evolution?

Read the rest

Contingency and Irreducibility

Thomas Nagel returns to defend his doubt concerning the completeness—if not the efficacy—of materialism in the explanation of mental phenomena in the New York Times. He quickly lays out the possibilities:

  1. Consciousness is an easy product of neurophysiological processes
  2. Consciousness is an illusion
  3. Consciousness is a fluke side-effect of other processes
  4. Consciousness is a divine property supervened on the physical world

Nagel concludes that all four are incorrect and that a naturalistic explanation is possible that isn’t “merely” (1), but that is at least (1) plus something more. I previously commented on the argument, here, but the refinement of his position requires a more targeted response.

Let’s call Nagel’s new perspective Theory 1+ for simplicity. What form might 1+ take? For Nagel, the notion seems to be a combination of Chalmers-style qualia and a deep appreciation for the contingencies that factor into the personal evolution of individual consciousness. The latter is certainly redundant, in that individuality must be tied absolutely to personal experiences and narratives.

We might be able to get some traction on this concept by looking to biological evolution, though “ontogeny recapitulates phylogeny” is about as close as we can get to the topic, because any kind of evolutionary psychology must look for patterns that reinforce the interpretation of basic aspects of cognitive evolution (sex, reproduction, etc.) rather than explore the more numinous aspects of conscious development. So we might instead look for parallel theories that focus on the uniqueness of outcomes and that reify temporal evolution without reference to controlling biology, and we arrive at ideas like uncomputability as a backstop. More specifically, we can explore ideas like computational irreducibility to support the development of Nagel’s new theory: insofar as the environment lapses towards weak predictability, a consciousness that self-observes, regulates, and builds many complex models and metamodels is superior to one that does not.… Read the rest
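Computational irreducibility is easiest to see in a toy system. Below is a minimal sketch of my own (not Nagel’s argument, and only a bare-bones rendering of Wolfram’s stock example), Rule 30: a trivial update rule whose later rows have no known shortcut, so the only general way to know row n is to compute every row before it.

    # Minimal sketch of computational irreducibility: Wolfram's Rule 30.
    # Each cell's next state depends only on its left, center, and right
    # neighbors, yet no known shortcut predicts row n without computing
    # every intervening row.

    def rule30_step(row):
        """Apply Rule 30 to one row of 0/1 cells, zero-padded at the edges."""
        padded = [0] + row + [0]
        return [padded[i - 1] ^ (padded[i] | padded[i + 1])
                for i in range(1, len(padded) - 1)]

    def evolve(width=31, steps=15):
        row = [0] * width
        row[width // 2] = 1          # single seed cell in the middle
        for _ in range(steps):
            print("".join("#" if c else "." for c in row))
            row = rule30_step(row)

    if __name__ == "__main__":
        evolve()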

Red Queens of Hearts

An incomplete area of study in philosophy and science is the hows and whys of social cooperation. We can easily assume that social organisms gain benefits in terms of the propagation of genes by speculating about the consequences of social interactions versus individual ones, but translating that speculation into deep insights has remained a continuing research program. The consequences couldn’t be more significant because we immediately gain traction on the Naturalistic Fallacy and build a bridge towards a clearer understanding of human motivation in arguing for a type of Moral Naturalism that embodies much of the best we know and hope for from human history.

So worth tracking are continued efforts to understand how competition can be outdone by cooperation in the most elementary and mathematical sense. The superlatively named Freeman Dyson (who doesn’t want to be a free man?) cast a cloud of doubt on the ability of cooperation to be a working strategy when he and colleague William Press analyzed the payoff matrices of iterated prisoner’s dilemma games and discovered a class of play strategies called “Zero-Determinant” strategies that let a player unilaterally enforce a favorable payoff regardless of the opponent’s strategy. Hence the concern that there is a large corner of the adaptive topology where strong-arming always wins, that evolutionary search must seek out that corner, and that winners must accumulate there, ruling out cooperation as a prominent feature of evolutionary success.
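To make the worry concrete, here is a rough simulation sketch of an extortionate zero-determinant strategy in the iterated prisoner’s dilemma. It is not Press and Dyson’s derivation; the standard payoffs (T=5, R=3, P=1, S=0), the published “Extort-2” cooperation probabilities, and the unconditionally cooperating opponent are all illustrative assumptions.

    # Rough sketch of an extortionate zero-determinant strategy in the
    # iterated prisoner's dilemma. Assumed payoffs: T=5, R=3, P=1, S=0.
    import random

    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    # "Extort-2": a memory-one ZD strategy from the Press-Dyson family;
    # probability of cooperating given last round's (my move, opponent's move).
    EXTORT2 = {("C", "C"): 8 / 9, ("C", "D"): 1 / 2,
               ("D", "C"): 1 / 3, ("D", "D"): 0.0}

    def play(opponent, rounds=100_000, seed=1):
        random.seed(seed)
        my_last, opp_last = "C", "C"          # arbitrary assumed first round
        my_total = opp_total = 0
        for _ in range(rounds):
            me = "C" if random.random() < EXTORT2[(my_last, opp_last)] else "D"
            opp = opponent(opp_last, my_last)
            mine, theirs = PAYOFF[(me, opp)]
            my_total += mine
            opp_total += theirs
            my_last, opp_last = me, opp
        return my_total / rounds, opp_total / rounds

    always_cooperate = lambda own_prev, other_prev: "C"

    zd, coop = play(always_cooperate)
    print(f"ZD extortioner: {zd:.2f}   unconditional cooperator: {coop:.2f}")

Run long enough, the extortioner’s surplus over the punishment payoff settles at roughly twice the cooperator’s, which is exactly the kind of one-sided leverage behind the worry that strong-arming should dominate.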

But that can’t reflect the reality we think we see, where cooperation in primates and in eusocial organisms seems to be the precursor to the kinds of virtues that are reflected in moral, religious, and ethical traditions. So what might be missing in this analysis? Christophe Adami and Arend Hintze at Michigan State may have some of the answers in their paper, Evolutionary instability of zero-determinant strategies demonstrates that winning is not everything.… Read the rest

Chinese Feudal Wasps

In Fukuyama’s The Origins of Political Order, the author points out that Chinese feudalism was not at all like European feudalism. In the latter, vassals were often unrelated to lords and the relationship between them was consensual and renewed annually. Only later did patriarchal lineages become important in preserving the line of descent among the lords. But that was not the case in China, where extensive networks of blood relations dominated the lord-vassal relationship; the feudalism was more like tribalism and clans than the European model, but with Confucianism layered on top.

So when E.O. Wilson, still intellectually agile in his twilight years, describes the divide between kin selection and multi-level selection in the New York Times, we start to see a similar pattern of explanation for both models at a far more basic level than the happenstances of Chinese versus European cultures. Kin selection predicts that genetic co-representation can lead an individual to self-sacrifice in an evolutionary sense (from loss of breeding possibilities in Hymenoptera like bees and ants, through to sacrificial behavior like standing watch against predators and thus becoming a target, too). This is the traditional explanation and the one that fits well for the Chinese model. But we also have the multi-level selection model, which posits that selection operates at the group level, too. Kin selection has no good explanation for the European feudal tradition unless the vassals were closely related by blood to their lords, which seems unlikely in such a large, diverse cohort. Consolidating power among the lords and intermarrying practices possibly did result in inbreeding depression later on, but the overall model was one based on social ties that were not grounded in genetic familiarity.… Read the rest
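For reference, the textbook formalization of the kin-selection side of this divide is Hamilton’s rule (the standard statement, not something drawn from Wilson’s article itself): a costly act is favored when rB > C, where r is the genetic relatedness between actor and recipient, B is the fitness benefit to the recipient, and C is the fitness cost to the actor. European vassalage, with r near zero between lord and vassal, fails that inequality, which is precisely the gap that group-level selection is invoked to fill.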

Bats and Belfries

Thomas Nagel proposes a radical form of skepticism in his new book, Mind and Cosmos, continuing a trajectory through subjective experience and moral realism that began with bats zigging and zagging among the homunculi of dualism, reimagined in the form of qualia. The skepticism involves disputing materialistic explanations and proposing, instead, that teleological ones of an unspecified form will likely apply, for how else could his subtitle, which paints the materialist “Neo-Darwinian Conception of Nature” as almost certainly false, hold true?

Nagel is searching for a non-religious explanation, of course, because just animating nature through fiat is hardly an explanation at all; any sort of powerful, non-human entelechy could be gaming us and the universe in a non-coherent fashion. But what parameters might support his argument? Since he apparently requires a “significant likelihood” argument to hold sway in support of the origins of life, for instance, we might imagine what kind of thinking could make it highly likely that inanimate matter leads to goal-directed behavior. The parameters might involve the conscious coordination of the events leading towards the emergence of goal-directed life, thus presupposing a consciousness that is not our own. We are back then to our non-human entelechy looming like an alien or like a strange creator deity (which is not desirable to Nagel). We might also consider the possibility that there are properties of the universe itself that result in self-organization and that either we don’t yet know or are only beginning to understand. Elliott Sober’s critique suggests that the 2nd Law of Thermodynamics results in what I might call “patterned” behavior while not becoming “goal-directed” per se.… Read the rest

Evolutionary Art and Architecture

With every great scientific advance there has been a coordinated series of changes in the Zeitgeist. Evolutionary theory has impacted everything from sociology through to literature, but there are some very sophisticated efforts in the arts that deserve more attention.

John Frazer’s Evolutionary Architecture is a great example. Now available as downloadable PDFs since it is out of print, Evolutionary Architecture asks, without fully answering it (how could it?), how evolution-like processes can contribute to the design of structures.

And then there is William Latham’s evolutionary art, dating to 1989, which explores form derived from generative functions.
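As a toy sketch only, and not a reconstruction of Latham’s actual system, the loop below evolves the parameters of a simple generative form by mutation and selection; the “fitness” here is an arbitrary aesthetic stand-in chosen purely for illustration.

    # Toy sketch: evolve parameters of a simple generative form by mutation
    # and selection. The fitness function is an arbitrary aesthetic stand-in.
    import math
    import random

    random.seed(0)

    def radii(params, n=360):
        """Radius of a closed curve expressed as a small sine series in angle."""
        return [1.0 + sum(a * math.sin((k + 2) * 2 * math.pi * i / n)
                          for k, a in enumerate(params)) for i in range(n)]

    def fitness(params):
        r = radii(params)
        if min(r) <= 0:              # reject forms that collapse through the origin
            return -1.0
        return max(r) - min(r)       # prefer strongly varying silhouettes

    def evolve(generations=200, pop=20, genes=4):
        population = [[random.uniform(-0.2, 0.2) for _ in range(genes)]
                      for _ in range(pop)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[: pop // 4]              # truncation selection
            population = [[g + random.gauss(0, 0.05) for g in random.choice(parents)]
                          for _ in range(pop)]
        return max(population, key=fitness)

    print("evolved form parameters:", [round(p, 3) for p in evolve()])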

And the art extends to functional virtual creatures.

Read the rest

Multitudes and the Mathematics of the Individual

The notion that there is a path from reciprocal altruism to big brains and advanced cognitive capabilities leads us to ask whether we can create “effective” procedures that shed additional light on the suppositions that are involved, and their consequences. Any skepticism about some virulent kind of scientism then gets whisked away by the imposition of a procedure combined with an earnest interest in careful evaluation of the outcomes. That may not be enough, but it is at least a start.

I turn back to Marcus Hutter, Solomonoff, and Chaitin-Kolmogorov at this point. I’ll be primarily referencing Hutter’s Universal Algorithmic Intelligence (A Top-Down Approach) in what follows. And what follows is an attempt to break down how three separate factors related to intelligence can be explained through mathematical modeling. The first and the second are covered in Hutter’s paper, but the third may represent a new contribution, though perhaps an obvious one that still lacks the detailed work needed to support it properly.

First, then, we start with a core requirement of any goal-seeking mechanism: the ability to predict patterns in the environment external to the mechanism. This has been well covered since Solomonoff’s work in the 1960s, which formalized the arguments implicit in Kolmogorov’s algorithmic information theory (AIT) and was subsequently expanded on by Greg Chaitin. In essence, given a range of possible models represented by bit sequences of computational states, the shortest sequence that predicts the observed data is also the optimal predictor for any future data produced by the same underlying generator function. The shortest sequence is not computable, but we can keep searching for shorter programs and arrive at workable optimizations for specific data landscapes. And that should sound familiar, because it recapitulates Occam’s Razor and, in a subset of cases, Epicurus’ Principle of Multiple Explanations.… Read the rest
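The shortest program is uncomputable, but a crude stand-in for the intuition is to treat a general-purpose compressor as an upper bound on description length: patterned data admits a far shorter description than noise. The sketch below is only an illustration of that minimum-description-length spirit, not anything from Hutter’s paper.

    # Crude illustration of the minimum-description-length intuition behind
    # Solomonoff/Kolmogorov-style induction: a general-purpose compressor
    # gives an upper bound on description length, so patterned data admits a
    # much shorter "program" than noise does.
    import random
    import zlib

    def description_length(data: bytes) -> int:
        """Upper bound (in bytes) on the data's algorithmic complexity via zlib."""
        return len(zlib.compress(data, 9))

    random.seed(0)
    patterned = b"01" * 500                                    # highly regular
    noise = bytes(random.getrandbits(8) for _ in range(1000))  # incompressible-looking

    print("patterned:", description_length(patterned), "bytes")
    print("noise:    ", description_length(noise), "bytes")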

Reciprocity and Abstraction

Fukuyama’s suggestion is intriguing but needs further development and empirical support before it can be considered more than a hypothesis. To be mildly repetitive, ideology derived from scientific theories should be subject to even more scrutiny than religious-political ideologies, if for no other reason than that it can be. But in order to drill down into the questions surrounding how reciprocal altruism might enable the evolution of linguistic and mental abstractions, we need to simplify the problems down to basics, then work outward.

So let’s start with reciprocal altruism as a mere mathematical game. The iterated prisoner’s dilemma is the standard case study; consider first a single round: you and a compatriot are accused of a heinous crime and put in separate rooms. If you deny involvement and so does your friend, you will each get 3 years in prison. If you admit to the crime and so does your friend, you will both get 1 year (cooperative behavior). But if you or your co-conspirator denies involvement while fingering the other, one gets to walk free while the other gets 6 years (defection strategy). Joint fingering is equivalent to two denials at 3 years each, since the evidence is equivocal. What does one do as a “rational actor” in order to minimize penalization? The only solution is to betray your friend while denying involvement (deny, deny, deny): you get either 3 years (he also denies involvement), or you walk (he admits), or he fingers you back, which is the same as dual denials at 3 years each. The average years served are 1/3*3 + 1/3*0 + 1/3*3 = 2 years versus 1/2*1 + 1/2*6 = 3.5 years for admitting to the crime.
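A few lines of arithmetic make the comparison explicit, using the sentences described above and the same simplifying assumption that the opponent’s responses are equally likely.

    # Expected prison terms under the sentences described above, assuming (as
    # the paragraph does) that the opponent's responses are equally likely.
    # Defect = deny involvement while fingering the other; cooperate = admit.
    defect_outcomes = [3, 0, 3]       # he denies / he admits / he fingers back
    cooperate_outcomes = [1, 6]       # he admits / he fingers you

    expected_defect = sum(defect_outcomes) / len(defect_outcomes)            # 2.0
    expected_cooperate = sum(cooperate_outcomes) / len(cooperate_outcomes)   # 3.5

    print(f"defect: {expected_defect} years, cooperate: {expected_cooperate} years")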

In other words it doesn’t pay to cooperate.… Read the rest

Bostrom on the Hardness of Evolving Intelligence

At 38,000 feet somewhere above Missouri, returning from a one-day trip to Washington D.C., it is easy to take Nick Bostrom’s point, in his paper How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects, that bird flight is not the end-all of what is possible for airborne objects and mechanical contrivances like airplanes. His effort to bound and distinguish the evolution of intelligence as either Hard or Not-Hard runs up against significant barriers, however. As a practitioner of the art, I find that comparing a purely physical phenomenon like flying with something as complex as human intelligence falls flat.

But Bostrom takes flying as no more than a starting point for arguing that there is an engineerable possibility for intelligence. That possibility might be bounded by a number of current and foreseeable limitations, not least of which is that computer simulations of evolution require a certain amount of computing power and representational detail in order to be sufficient. His conclusion is that we may need as much as another 100 years of improvements in computing technology just to get to a point where we might succeed at a massive-scale evolutionary simulation (I’ll leave it to the reader to investigate his additional arguments concerning convergent evolution and observer selection effects).

Bostrom dismisses as pessimistic the assumption that a sufficient simulation would, in fact, require a highly detailed emulation of some significant portion of the real environment and the history of organism-environment interactions:

A skeptic might insist that an abstract environment would be inadequate for the evolution of general intelligence, believing instead that the virtual environment would need to closely resemble the actual biological environment in which our ancestors evolved … However, such extreme pessimism seems unlikely to be well founded; it seems unlikely that the best environment for evolving intelligence is one that mimics nature as closely as possible.

Read the rest

From Smith to Darwin

The notion that all the contingencies of human history can be rendered down into law-like principles is the greatest reflection of the human desire for order and understanding. Adam Smith appears in that mirrored pool alongside Karl Marx and, in his original form, even Charles Darwin. That’s only the beginning: Freud, Machiavelli, Rousseau, Hegel, and a host of others are reflected there in varying and transitory clarity.

Adam Smith is an iconic case, as I discovered reading Adam Smith’s View of History: Consistent or Paradoxical? by James Alvey. The paradoxical component arises from a merger of a belief in the inevitability of commercial society with, at various points in Smith’s intellectual development, a cynicism about the probability of forward progress towards that goal. Ever behind the curtain, however, was the invisible hand, represented by a kind of teleological divine presence moving history and economics forward.

The paper uncovers some of the idiosyncrasies of Smith’s economic history:

[T]he burghers felt secure enough to import ‘improved manufactures and expensive luxuries’. The lords now had something beside hospitality for which they could exchange the whole of their agricultural surplus. Previously they had to share, but ‘frivolous and useless’ things, such as ‘a pair of diamond [shoe] buckles’, and ‘trinkets and baubles’, could be consumed by the lords alone. The lords were fascinated with such finely crafted items and wanted to own and vainly display them. As the lords ‘eagerly purchased’ these luxury items they were forced to reduce the number of their dependents and eventually dismiss them entirely.

The lords ultimately trade away their hold over their dependents in exchange for more diamond shoe buckles. Odd, but perhaps reflective of the excesses of the wealthy in Smith’s era, something that needed explanation.… Read the rest