Just So Disruptive

The “just so” story is a pejorative label for explanations that fit cultural or physical traits to an evolutionary narrative after the fact. Things are “just so” when the explanation is unfalsifiable and theoretically fitted to current observations. Less controversial, and less pejorative, is the essential character of the evolutionary process, where there is no doubt that most genetic alternatives will fail. The ones that survive this crucible are disruptive to the status quo, sure, but these disruptions tend to be geographically or sexually isolated from the main population anyway, so they are more an expansion than a disruption; little competition is tooth-and-claw, and species mostly survive against the environment, not one another.

Jill Lepore of Harvard subjects business theory to a similar crucible in the New Yorker, questioning Clayton Christensen’s classic argument in The Innovator’s Dilemma that businesses are unwilling to adapt to changing markets because they are making rational business decisions to maximize profits. After analyzing core business cases from Christensen’s books, Lepore concludes that the argument holds little water and that its predictions are both poor and inapplicable to other areas like journalism and college education.

Central to her critique is her analysis of the “just so” nature of disruptive innovation:

Christensen has compared the theory of disruptive innovation to a theory of nature: the theory of evolution. But among the many differences between disruption and evolution is that the advocates of disruption have an affinity for circular arguments. If an established company doesn’t disrupt, it will fail, and if it fails it must be because it didn’t disrupt. When a startup fails, that’s a success, since epidemic failure is a hallmark of disruptive innovation. (“Stop being afraid of failure and start embracing it,” the organizers of FailCon, an annual conference, implore, suggesting that, in the era of disruption, innovators face unprecedented challenges.)

Read the rest

Trees of Lives

With a brief respite between vacationing in the canyons of Colorado and leaving tomorrow for Australia, I’ve open-sourced an eight-year-old computer program, Tree of Lives, for converting one’s DNA sequences into an artistic rendering. The inputs to the program are the allelic patterns from standard DNA analysis services that use the Short Tandem Repeat polymorphisms of forensic analysis, as well as poetry reflecting one’s ethnic heritage. The output is generative art: a tree that overlays the sequences with the poetry and a background rendered from the sequences.

Generative art is perhaps one of the greatest aesthetic achievements of the late 20th Century. Generative art is, fundamentally, a recognition that the core of our humanity can be understood and converted into meaningful aesthetic products–it is the parallel of effective procedures in cognitive science, and developed in lock-step with the constructive efforts to reproduce and simulate human cognition.

To use Tree of Lives, install Java 1.8, unzip the package, and edit the supplied markconfig.txt to enter your STRs and allele variant numbers, in sequence, on line 15 of the configuration file. Lines 16+ hold the lines of poetry that will be rendered on the limbs of the tree. Other configuration parameters (colors, paths, etc.) can be discerned by examining com.treeoflives.CTreeConfig.java. Execute the program with:

java -cp treeoflives.jar:iText-4.2.0-com.itextpdf.jar com.treeoflives.CAlleleRenderer markconfig.txt
Read the rest

Humbly Evolving in a Non-Simulated Universe

The New York Times seems to be catching up to me, first with an interview of Alvin Plantinga by Gary Gutting in The Stone on February 9th, and then with notes on Bostrom’s Simulation Hypothesis in the Sunday Times.

I didn’t see anything new in the Plantinga interview, but reviewed my previous argument that adaptive fidelity combined with adaptive plasticity must raise the probability of rationality at a rate that is much greater than the contributions that would be “deceptive” or even mildly cognitively or perceptually biased. Worth reading is Branden Fitelson and Elliott Sober’s very detailed analysis of Plantinga’s Evolutionary Argument Against Naturalism (EAAN), here. Most interesting are the beginning paragraphs of Section 3, which I reproduce here because it is a critical addition that should surprise no one but often does:

Although Plantinga’s arguments don’t work, he has raised a question that needs to be answered by people who believe evolutionary theory and who also believe that this theory says that our cognitive abilities are in various ways imperfect. Evolutionary theory does say that a device that is reliable in the environment in which it evolved may be highly unreliable when used in a novel environment. It is perfectly possible that our mental machinery should work well on simple perceptual tasks, but be much less reliable when applied to theoretical matters. We hasten to add that this is possible, not inevitable. It may be that the cognitive procedures that work well in one domain also work well in another; Modus Ponens may be useful for avoiding tigers and for doing quantum physics.

Anyhow, if evolutionary theory does say that our ability to theorize about the world is apt to be rather unreliable, how are evolutionists to apply this point to their own theoretical beliefs, including their belief in evolution?

Read the rest

Parsimonious Portmanteaus

Meaning is a problem. We think we might know what something means but we keep being surprised by the facts, research, and logical difficulties that surround the notion of meaning. Putnam’s Representation and Reality runs through a few different ways of thinking about meaning, though without reaching any definitive conclusions beyond what meaning can’t be.

Children are a useful touchstone concerning meaning because we know that they acquire linguistic skills and consequently at least an operational understanding of meaning. And how they do so is rather interesting: first, presume that whole objects are the first topics for naming; next, assume that syntactic differences lead to semantic differences (“the dog” refers to the class of dogs while “Fido” refers to the instance); finally, prefer that linguistic differences point to semantic differences. Paul Bloom slices and dices the research in his Précis of How Children Learn the Meanings of Words, calling into question many core assumptions about the learning of words and meaning.

These preferences become useful if we want to try to formulate an algorithm that assigns meaning to objects or groups of objects. Probabilistic Latent Semantic Analysis, for example, assumes that words are signals from underlying probabilistic topic models and then derives those models by estimating all of the probabilities from the available signals. The outcome lacks labels, however: the “meaning” is expressed purely in terms of co-occurrences of terms. Reconciling an approach like PLSA with the observations about children’s meaning acquisition presents some difficulties. The process seems too slow, for example, which was always a complaint about connectionist architectures of artificial neural networks as well. As Bloom points out, kids don’t make many errors concerning meaning and when they do, they rapidly compensate.… Read the rest
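The PLSA machinery is easy to sketch, and the sketch makes the "lacks labels" point vivid. Here is a minimal toy version (illustrative corpus and plain EM in NumPy, not any production implementation): the E-step computes P(z|d,w) from the current model, and the M-step re-estimates P(w|z) and P(z|d) from expected counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy document-term count matrix: 4 documents x 6 terms.
# Docs 0-1 favor terms 0-2; docs 2-3 favor terms 3-5.
N = np.array([
    [5, 4, 3, 0, 1, 0],
    [4, 5, 2, 1, 0, 0],
    [0, 1, 0, 4, 5, 3],
    [1, 0, 0, 5, 4, 4],
], dtype=float)

D, W = N.shape
K = 2  # number of latent topics

# Random initialization of P(w|z) and P(z|d), normalized to distributions.
p_w_z = rng.random((K, W)); p_w_z /= p_w_z.sum(axis=1, keepdims=True)
p_z_d = rng.random((D, K)); p_z_d /= p_z_d.sum(axis=1, keepdims=True)

for _ in range(100):
    # E-step: P(z|d,w) ∝ P(z|d) P(w|z)
    post = p_z_d[:, :, None] * p_w_z[None, :, :]   # shape (D, K, W)
    post /= post.sum(axis=1, keepdims=True) + 1e-12
    # M-step: re-estimate the model from expected counts
    expected = N[:, None, :] * post                # shape (D, K, W)
    p_w_z = expected.sum(axis=0)
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = expected.sum(axis=2)
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
```

The recovered topics in p_w_z are nothing but distributions over words: "meaning" expressed purely as co-occurrence, with no labels anywhere, which is exactly the gap between PLSA and what children do.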

In Like Flynn

The exceptionally interesting James Flynn explains the cognitive history of the past century, and what it means in terms of human intelligence, in his TED talk.

What does the future hold? While we might decry the “twitch” generation and their inundation by social media, gaming stimulation, and instant interpersonal engagement, the slowing observed in the Flynn Effect might give way to another ramp-up over the next 100 years.

Perhaps most intriguing is the discussion of the ability to think in terms of hypotheticals as a core component of ethical reasoning. Ethics is about gaming outcomes and also about empathizing with others. The influence of media as a delivery mechanism for narratives about others emerged just as those changes in cognitive capabilities were beginning to mature in the 20th Century. Widespread media had a compounding effect on the core abstract thinking capacity, and with the expansion of smartphones and informational flow, we may only have a few generations to go before the necessary ingredients for good ethical reasoning are widespread even in hard-to-reach areas of the world.… Read the rest

Contingency and Irreducibility

Thomas Nagel returns to defend his doubt concerning the completeness—if not the efficacy—of materialism in the explanation of mental phenomena in the New York Times. He quickly lays out the possibilities:

  1. Consciousness is an easy product of neurophysiological processes
  2. Consciousness is an illusion
  3. Consciousness is a fluke side-effect of other processes
  4. Consciousness is a divine property supervened on the physical world

Nagel concludes that all four are incorrect and that a naturalistic explanation is possible, one that isn’t “merely” (1) but is at least (1) plus something more. I previously commented on the argument, here, but the refinement of the specifications requires a more targeted response.

Let’s call Nagel’s new perspective Theory 1+ for simplicity. What form might 1+ take? For Nagel, the notion seems to be a combination of Chalmers-style qualia with a deep appreciation for the contingencies that factor into the personal evolution of individual consciousness. The latter is certainly redundant in that individuality must be tied absolutely to personal experiences and narratives.

We might be able to get some traction on this concept by looking to biological evolution, though “ontogeny recapitulates phylogeny” is about as close as we can get to the topic because any kind of evolutionary psychology must be looking for patterns that reinforce the interpretation of basic aspects of cognitive evolution (sex, reproduction, etc.) rather than explore the more numinous aspects of conscious development. So we might instead look for parallel theories that focus on the uniqueness of outcomes, that reify the temporal evolution without reference to controlling biology, and we get to ideas like uncomputability as a backstop. More specifically, we can explore ideas like computational irreducibility to support the development of Nagel’s new theory; insofar as the environment lapses towards weak predictability, a consciousness that self-observes, regulates, and builds many complex models and metamodels is superior to those that do not.… Read the rest
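Computational irreducibility can be made concrete with a standard toy example (mine, not Nagel's): Wolfram's Rule 30 cellular automaton, where, as far as anyone knows, the only way to learn the center cell's value at step n is to compute all n intermediate steps; no closed-form shortcut is known.

```python
def rule30_centers(steps):
    """Run the Rule 30 cellular automaton from a single live cell and
    return the center-column values for steps 0..steps. Each new cell
    is: left XOR (center OR right)."""
    width = 2 * steps + 3          # wide enough that the edges stay dead
    row = [0] * width
    row[width // 2] = 1
    centers = [1]
    for _ in range(steps):
        row = [0] + [row[i - 1] ^ (row[i] | row[i + 1])
                     for i in range(1, width - 1)] + [0]
        centers.append(row[width // 2])
    return centers

# The center column looks statistically random; a predictor embedded in
# such an environment has no cheaper option than simulation.
print(rule30_centers(10))
```

An environment laced with processes like this defeats simple predictive shortcuts, which is precisely the pressure toward the self-observing, model-building consciousness described above.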

Red Queens of Hearts

An incomplete area of study in philosophy and science is the hows and whys of social cooperation. We can easily assume that social organisms gain benefits in terms of the propagation of genes by speculating about the consequences of social interactions versus individual ones, but translating that speculation into deep insights has remained a continuing research program. The consequences couldn’t be more significant because we immediately gain traction on the Naturalistic Fallacy and build a bridge towards a clearer understanding of human motivation in arguing for a type of Moral Naturalism that embodies much of the best we know and hope for from human history.

So worth tracking are continued efforts to understand how competition can be outdone by cooperation in the most elementary and mathematical sense. The superlatively named Freeman Dyson (who doesn’t want to be a free man?) cast a cloud of doubt on the ability of cooperation to be a working strategy when he and colleague William Press analyzed the payoff matrices of iterated prisoner’s dilemma games and discovered a class of play strategies called “Zero-Determinant” strategies that always pay off regardless of the opponent’s strategy. Hence the concern that there is a large corner of the adaptive topology where strong-arming always wins, that evolutionary search must seek out that corner, and that winners must accumulate there, ruling out cooperation as a prominent feature of evolutionary success.

But that can’t reflect the reality we think we see, where cooperation in primates and other eusocial organisms seems to be the precursor to the kinds of virtues that are reflected in moral, religious, and ethical traditions. So what might be missing in this analysis? Christophe Adami and Arend Hintze at Michigan State may have some of the answers in their paper, Evolutionary instability of zero-determinant strategies demonstrates that winning is not everything.… Read the rest
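For readers who want to poke at this, the machinery behind the Press-Dyson analysis is elementary: two memory-one strategies induce a Markov chain over the four joint outcomes (CC, CD, DC, DD), and each player's long-run payoff falls out of the chain's stationary distribution. A minimal sketch with standard payoff values follows; the strategies shown are illustrative, not the zero-determinant ones from the paper.

```python
import numpy as np

R, S, T, P = 3, 0, 5, 1  # standard prisoner's dilemma payoffs
# Joint outcomes are ordered CC, CD, DC, DD from player X's perspective.

def long_run_payoffs(p, q):
    """p and q are memory-one strategies: each player's probability of
    cooperating after each of the four outcomes, seen from that player's
    own side. Returns the long-run per-round payoffs (X, Y)."""
    q_x = [q[0], q[2], q[1], q[3]]  # Y's view swaps CD and DC
    M = np.zeros((4, 4))
    for s in range(4):
        px, py = p[s], q_x[s]
        M[s] = [px * py, px * (1 - py), (1 - px) * py, (1 - px) * (1 - py)]
    # Stationary distribution: left eigenvector for eigenvalue 1.
    vals, vecs = np.linalg.eig(M.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    v = v / v.sum()
    return v @ np.array([R, S, T, P]), v @ np.array([R, T, S, P])

# A forgiving tit-for-tat-like strategy against an unconditional defector:
gtft = (1.0, 1/3, 1.0, 1/3)
alld = (0.0, 0.0, 0.0, 0.0)
print(long_run_payoffs(gtft, alld))  # the defector strong-arms the cooperator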

Novelty in the Age of Criticism

Gary Gutting of Notre Dame and the New York Times knows how to incite an intellectual riot, as demonstrated by his most recent The Stone piece, Mozart vs. the Beatles. “High art” is superior to “low art,” he argues, because of its “stunning intellectual and emotional complexity.” He sums up:

My argument is that this distinctively aesthetic value is of great importance in our lives and that works of high art achieve it much more fully than do works of popular art.

But what makes up these notions of complexity and distinctive aesthetic value? One might try to enumerate those values or create a list. Or one might instead claim that time serves as a sieve for the values that Gutting claims make one work of art superior to another, leaving open the possibility that the enumerated list is incomplete but still a useful retrospective system of valuation.

I previously argued in a 1994 paper (published in 1997), Complexity Formalisms, Order and Disorder in the Structure of Art, that simplicity and random chaos exist in a careful balance in art that reflects our underlying grammatical systems that are used to predict the environment. And Jürgen Schmidhuber took the approach further by applying algorithmic information theory to novelty seeking behavior that leads, in turn, to aesthetically pleasing models. The reflection of this behavioral optimization in our sideline preoccupations emerges as art, with the ultimate causation machine of evolution driving the proximate consequences for men and women.

But let’s get back to the flaw I see in Gutting’s argument that, in turn, fits better with Schmidhuber’s approach: much of what is important in art is cultural novelty. Picasso is not aesthetically superior to the detailed hyper-reality of the Dutch Masters, for instance, but is notable for his cultural deconstruction of the role of art as photography and reproduction took hold.… Read the rest

Singularity and its Discontents

If a machine-based process can outperform a human being, is it significant? That weighty question hung in the background as I reviewed Jürgen Schmidhuber’s work on traffic sign classification. Similar results have emerged from IBM’s Watson competition and even on the TOEFL test. In each case, machines beat people.

But is that fact significant? There are a couple of ways we can look at these kinds of comparisons. First, we can draw analogies to other capabilities that were not accessible by mechanical aid and show that the fact that they outperformed humans was not overly profound. The wheel quickly outperformed human legs for moving heavy objects. The cup outperformed the hands for drinking water. This then invites the realization that extending these physical comparisons leads to extraordinary juxtapositions: the airplane really outperformed human legs for transport, etc. And this, in turn, justifies the claim that since we are now just outperforming human mental processes, we can only expect exponential improvements moving forward.

But this may be a category mistake in more than the obvious differentiator of the mental and the physical. Instead, the category mismatch is between levels of complexity. The number of parts in a Boeing 747 is 6 million versus one moving human as the baseline (we could enumerate the cells and organelles, etc., but then we would need to enumerate the crystal lattices of the aircraft steel, so that level of granularity is a wash). The number of memory addresses in a big server computer is 64 x 10^9 or higher, with disk storage in the TBs (10^12). Meanwhile, the human brain has 100 x 10^9 neurons and 10^14 connections. So, with just 2 orders of magnitude between computers and brains versus 6 between humans and planes, we find ourselves approaching Kurzweil’s argument that we have to wait until 2040.… Read the rest

Curiouser and Curiouser

Jürgen Schmidhuber’s work on algorithmic information theory and curiosity is worth a few takes, if not more, for the researcher has done something that is both flawed and rather brilliant at the same time. The flaws emerge when we start to look deeply into the motivations for ideas like beauty (is symmetry and noncomplex encoding enough to explain sexual attraction? Well-understood evolutionary psychology is probably a better bet), but the core of his argument is worth considering.

If induction is an essential component of learning (and we might suppose it is for argument’s sake), then why continue to examine different parameterizations of possible models for induction? Why be creative about how to explain things, like we expect and even idolize of scientists?

So let us assume that induction is explained by the compression of patterns into better and better models using an information theoretic-style approach. Given this, Schmidhuber makes the startling leap that better compression and better models are best achieved by information harvesting behavior that involves finding novelty in the environment. Thus curiosity. Thus the implementation of action in support of ideas.

I proposed a similar model to explain aesthetic preferences for mid-ordered complex systems of notes, brush-strokes, etc. around 1994, but Schmidhuber’s approach has the benefit of not just characterizing the limitations and properties of aesthetic systems, but also justifying them. We find interest because we are programmed to find novelty, and we are programmed to find novelty because we want to optimize our predictive apparatus. The best optimization is actively seeking along the contours of the perceivable (and quantifiable) universe, and isolating the unknown patterns to improve our current model.… Read the rest