Parsimonious Portmanteaus

Meaning is a problem. We think we might know what something means but we keep being surprised by the facts, research, and logical difficulties that surround the notion of meaning. Putnam’s Representation and Reality runs through a few different ways of thinking about meaning, though without reaching any definitive conclusions beyond what meaning can’t be.

Children are a useful touchstone concerning meaning because we know that they acquire linguistic skills and consequently at least an operational understanding of meaning. And how they do so is rather interesting: first, presume that whole objects are the first topics for naming; next, assume that syntactic differences lead to semantic differences (“the dog” refers to the class of dogs while “Fido” refers to the instance); finally, prefer that linguistic differences point to semantic differences. Paul Bloom slices and dices the research in his Précis of How Children Learn the Meanings of Words, calling into question many core assumptions about the learning of words and meaning.

These preferences become useful if we want to try to formulate an algorithm that assigns meaning to objects or groups of objects. Probabilistic Latent Semantic Analysis, for example, assumes that words are signals from underlying probabilistic topic models and then derives those models by estimating all of the probabilities from the available signals. The outcome lacks labels, however: the “meaning” is expressed purely in terms of co-occurrences of terms. Reconciling an approach like PLSA with the observations about children’s meaning acquisition presents some difficulties. The process seems too slow, for example, which was always a complaint about connectionist architectures of artificial neural networks as well. As Bloom points out, kids don’t make many errors concerning meaning and when they do, they rapidly compensate.… Read the rest
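
To make the PLSA intuition concrete, here is a minimal sketch of the expectation-maximization loop that estimates the topic distributions from nothing but co-occurrence counts. Everything here is invented for illustration: a toy count matrix, two latent topics, and a fixed iteration budget.

    import numpy as np

    # A minimal PLSA sketch: toy term-document counts, two latent topics, and
    # a fixed number of EM iterations. All inputs are invented for illustration.
    rng = np.random.default_rng(0)
    n_docs, n_words, n_topics = 6, 8, 2
    counts = rng.integers(1, 6, size=(n_docs, n_words))            # n(d, w)

    p_w_given_z = rng.dirichlet(np.ones(n_words), size=n_topics)   # P(w|z)
    p_z_given_d = rng.dirichlet(np.ones(n_topics), size=n_docs)    # P(z|d)

    for _ in range(50):
        # E-step: responsibility of each topic for each (doc, word) pair
        joint = p_z_given_d[:, :, None] * p_w_given_z[None, :, :]  # (d, z, w)
        p_z_given_dw = joint / joint.sum(axis=1, keepdims=True)
        # M-step: re-estimate both distributions from expected counts
        expected = counts[:, None, :] * p_z_given_dw               # (d, z, w)
        p_w_given_z = expected.sum(axis=0)
        p_w_given_z /= p_w_given_z.sum(axis=1, keepdims=True)
        p_z_given_d = expected.sum(axis=2)
        p_z_given_d /= p_z_given_d.sum(axis=1, keepdims=True)

    # The learned "topics" are just distributions over words.
    print(np.round(p_w_given_z, 3))

Note that the recovered topics are nothing but co-occurrence structure; any label for what they mean has to be supplied from outside the algorithm.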

Predicting Black Swans

Nassim Taleb’s 2nd edition of The Black Swan argues—not unpersuasively—that rare, cataclysmic events dominate ordinary statistics. Indeed, he notes that almost all wealth accumulation is based on long-tail distributions in which a small number of individuals reap unexpected rewards. The downsides are equally challenging: he notes that casinos lose money not in gambling, where the statistics are governed by Gaussians (the house always wins), but when tigers attack, when workers sue, and when other external factors intervene.

Black Swan Theory poses an interesting challenge to modern inference theories like Algorithmic Information Theory (AIT) that presume a predictable universe. Even variant coding approaches like Minimum Description Length (MDL) theory modify the anticipatory model based on relatively smooth error functions rather than on high-kurtosis distributions of sudden change. And for the most part, for the regular events of life and our sensoriums, that is adequate. It is only when we start to look at rare existential threats that we begin to worry about Black Swans and inference.
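
To see how weakly a rare shock registers in that accounting, here is a hedged sketch of the two-part MDL cost (bits to state the model plus the Shannon code length of the data under that model). The data stream and the precision of the model parameters are invented for illustration.

    import math
    from collections import Counter

    # Two-part MDL sketch: cost = bits to describe the model (a symbol
    # histogram at some precision) + bits to encode the data given the model.
    def description_length(data, precision_bits=8):
        counts = Counter(data)
        n = len(data)
        model_bits = len(counts) * precision_bits
        data_bits = -sum(c * math.log2(c / n) for c in counts.values())
        return model_bits + data_bits

    regular = "ab" * 500             # a smooth, easily modeled stream
    with_shock = "ab" * 499 + "XX"   # the same stream plus one rare "black swan"
    print(description_length(regular), description_length(with_shock))

The catastrophic symbol adds only a few dozen bits to the total, which is the sense in which smooth description-length accounting barely notices the exceedingly rare.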

How might we modify the typical formulations of AIT and the trade-offs between model complexity and data to accommodate the exceedingly rare? Several approaches are possible. First, if we are combining a predictive model with a resource accumulation criterion, we can simply pad out the model’s memory, reducing kurtosis risk through additional resource accumulation; any downside is mitigated by the storing of nuts for a rainy day. That is a good strategy for moderately rare events like weather change, droughts, and whatnot. But what about even rarer events like little ice ages and dinosaur extinction-level meteorite hits? An alternative strategy, sketched below, is to maintain sufficient diversity in the face of radical unknowns that coping becomes a species-level achievement.… Read the rest
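
Here is a toy Monte Carlo sketch of that diversity strategy, with every number invented: each lineage carries a single trait, and a rare catastrophe wipes out everything near a random point on the trait axis.

    import random

    # Fraction of random shocks after which at least one lineage survives.
    def survival_rate(traits, trials=10_000, kill_radius=0.3):
        random.seed(42)
        survived = 0
        for _ in range(trials):
            hit = random.random()    # where the catastrophe lands
            survived += any(abs(t - hit) > kill_radius for t in traits)
        return survived / trials

    homogeneous = [0.5] * 20                  # everyone shares one strategy
    diverse = [i / 20 for i in range(20)]     # strategies spread over [0, 1)
    print(survival_rate(homogeneous), survival_rate(diverse))

The homogeneous population survives only those shocks that happen to land far from its single strategy, while the spread-out population always keeps a few survivors, which is the species-level coping gestured at above.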

Instrumenting Others

Jerry Coyne takes down Ross Douthat’s New York Times column in The New Republic along multiple dimensions, but perhaps the most interesting one is his unpacking of the question of what, exactly, Christian morality amounts to. We can equally question any other religious morality, or even secular ones.

For instance, we mostly agree that slavery is a bad idea in the modern world. Slavery involves treating others instrumentally, using them for selfish outcomes, and exploiting their human capacity. Its wrongness is almost beyond question; it lacks many of the conventional ambiguities that dominate controversial social issues. Yet slavery was quite acceptable in the Old Testament, the only relief being that Jews enslaved by fellow Jews were to be released after six years (under certain circumstances). Literal interpretations of the Bible resort to expansive apologetics to try to minimize these kinds of problems, but they are just the finer chantilly skimmed off human sacrifice, oppression, and genocide.

So how do people make moral choices? They only occasionally invoke religious sentiments or ideas, even when they are believers, though they may often claim to rely on prayer or meditation. Instead, the predominant moral calculus is girded by modern ideas and conflicts that are evolving faster than even generational change. Pot is OK, gay marriage is just a question of equality, and miscegenation is none of our business. Note that only the second item has a clear reference point in JCM (Judeo-Christian-Muslim) scripture. The others might get some traction through expansive interpretations, but such readings only reinforce my central thesis that moral decision-making is underdetermined by religious thinking (or even by formal philosophical systems). Moral decision-making is determined by knowledge and education in an ad hoc way that relies on empathic and intellectual reasoning.… Read the rest

Algorithmic Aesthetics

Jared Tarbell’s work in algorithmic composition via processing.org continues to amaze me. See more here. The relatively compact descriptions of complex landscapes lend themselves to treatment as aesthetic phenomena: the small scale of the grammars versus the complexity of the results raises the question of what art is and how it relates to human neurosystems.
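
As a tiny illustration of that disproportion between grammar and output (a textbook dragon-curve L-system, not Tarbell’s own code), two rewrite rules and a two-character axiom unfold into thousands of drawing instructions:

    # Dragon-curve L-system: a compact grammar whose repeated rewriting
    # produces a structure far larger and more intricate than its description.
    RULES = {"X": "X+YF+", "Y": "-FX-Y"}

    def expand(axiom: str, generations: int) -> str:
        s = axiom
        for _ in range(generations):
            s = "".join(RULES.get(ch, ch) for ch in s)
        return s

    result = expand("FX", 12)
    print(len(RULES), "rules ->", len(result), "instructions")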

… Read the rest

Substitutions, Permutations, and Economic Uncertainty

When Robert Shiller was awarded the near-Nobel for economics, there was also a tacit blessing that the limits of economics as a science were being recognized. You see, Shiller’s most important contributions included debunking the essentials of rational market behavior and replacing them with the irrationalities of behavioral psychology.

Shiller’s pairing with Eugene Fama in the Nobel award is ironic in that Fama is the father of the efficient market hypothesis, which suggests that rational behavior should overcome those irrational tendencies to reach a cybernetic homeostasis…if only the system were free of regulatory entanglements that drag on the clarity of the mass signals. Then all these bubbles that grow and burst would be smoothed out of the economy.

But technological innovation can sometimes trump old-school musings and analysis: Bitcoin represents a bubble in value under the efficient market hypothesis because the currency’s value has no underlying factual basis. As the economist John Quiggin points out in The National Interest:

But in the case of Bitcoin, there is no source of value whatsoever. The computing power used to mine the Bitcoin is gone once the run has finished and cannot be reused for a more productive purpose. If Bitcoins cease to be accepted in payment for goods and services, their value will be precisely zero.

In fact, that specific computing power consists of just two basic functions: substitution and permutation. Some long string of transactions has all its bits substituted with other bits, then blocks of those bits are rotated and generally permuted, until we end up with a bit signature that is of fixed length but statistically uncorrelated with the original content. And there is no other value to those specific (and hard-to-do) computations.… Read the rest
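
Python’s standard hashlib makes the point easy to see. This is just an illustration of SHA-256’s avalanche behavior, not Bitcoin’s actual mining loop, and the transaction strings are invented:

    import hashlib

    # Rounds of substitution and permutation compress any input into a
    # fixed-length digest that is statistically uncorrelated with it.
    block_a = b"Alice pays Bob 1 BTC; Bob pays Carol 0.5 BTC; nonce=000041"
    block_b = b"Alice pays Bob 1 BTC; Bob pays Carol 0.5 BTC; nonce=000042"

    # One character of difference flips roughly half of the output bits.
    print(hashlib.sha256(block_a).hexdigest())
    print(hashlib.sha256(block_b).hexdigest())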

In Like Flynn

The exceptionally interesting James Flynn explains the cognitive history of the past century and what it means in terms of human intelligence in this TED talk:

What does the future hold? While we might decry the “twitch” generation and their inundation by social media, gaming stimulation, and instant interpersonal engagement, the Flynn Effect, despite its observed slowing, might be getting ready for another ramp-up over the next 100 years.

Perhaps most intriguing is the discussion of the ability to think in terms of hypotheticals as a core component of ethical reasoning. Ethics is about gaming outcomes and also about empathizing with others. The influence of media as a delivery mechanism for narratives about others emerged just as those changes in cognitive capabilities were beginning to mature in the 20th Century. Widespread media had a compounding effect on the core abstract thinking capacity, and with the expansion of smartphones and informational flow, we may only have a few generations to go before the necessary ingredients for good ethical reasoning are widespread even in hard-to-reach areas of the world.… Read the rest

Contingency and Irreducibility

Thomas Nagel returns to defend his doubt concerning the completeness—if not the efficacy—of materialism in the explanation of mental phenomena in the New York Times. He quickly lays out the possibilities:

  1. Consciousness is an easy product of neurophysiological processes
  2. Consciousness is an illusion
  3. Consciousness is a fluke side-effect of other processes
  4. Consciousness is a divine property supervened on the physical world

Nagel arrives at the conclusion that all four are incorrect, and that a naturalistic explanation is possible that isn’t “merely” (1) but is at least (1) and something more. I previously commented on the argument, here, but the refinement of his position requires a more targeted response.

Let’s call Nagel’s new perspective Theory 1+ for simplicity. What form might 1+ take? For Nagel, the notion seems to be a combination of Chalmers-style qualia and a deep appreciation for the contingencies that factor into the personal evolution of individual consciousness. The latter is certainly redundant in that individuality must be absolutely tied to personal experiences and narratives.

We might be able to get some traction on this concept by looking to biological evolution, though “ontogeny recapitulates phylogeny” is about as close as we can get to the topic, because any kind of evolutionary psychology must be looking for patterns that reinforce the interpretation of basic aspects of cognitive evolution (sex, reproduction, etc.) rather than exploring the more numinous aspects of conscious development. So we might instead look for parallel theories that focus on the uniqueness of outcomes and that reify temporal evolution without reference to controlling biology, and we arrive at ideas like uncomputability as a backstop. More specifically, we can explore ideas like computational irreducibility to support the development of Nagel’s new theory; insofar as the environment lapses towards weak predictability, a consciousness that self-observes, regulates, and builds many complex models and metamodels is superior to one that does not.… Read the rest
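
Computational irreducibility is easiest to see in a toy system. Here is a minimal sketch using Wolfram’s Rule 30 cellular automaton, a standard example rather than anything from Nagel: no known shortcut predicts the pattern at step n short of running all n steps.

    # Rule 30: the next cell is left XOR (center OR right). The rule is simple
    # to state but, as far as anyone knows, its evolution can only be
    # predicted by simulating it step by step.
    def rule30_step(cells):
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

    cells = [0] * 31
    cells[15] = 1                              # a single live cell in the middle
    for _ in range(15):
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)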

Death Comes for the Visionary

I sadly missed the announcement that Paolo Soleri died in April of this year. I’m happy that my family and I got to tour Arcosanti during his lifetime, of course, and I pledge to make no further negative comments about his writings during my lifetime, though I reserve the right to passing ambivalent mentions.

Long live Paolo Soleri!… Read the rest

Red Queens of Hearts

An incomplete area of study in philosophy and science is the hows and whys of social cooperation. We can easily assume that social organisms gain benefits in terms of the propagation of genes by speculating about the consequences of social interactions versus individual ones, but translating that speculation into deep insights has remained a continuing research program. The consequences couldn’t be more significant because we immediately gain traction on the Naturalistic Fallacy and build a bridge towards a clearer understanding of human motivation in arguing for a type of Moral Naturalism that embodies much of the best we know and hope for from human history.

So worth tracking are continued efforts to understand how competition can be outdone by cooperation in the most elementary and mathematical sense. The superlatively named Freeman Dyson (who doesn’t want to be a free man?) cast a cloud of doubt on the ability of cooperation to be a working strategy when he and colleague William Press analyzed the payoff matrices of iterated prisoner’s dilemma games and discovered a class of play strategies called “Zero-Determinant” strategies that always pay off regardless of the opponent’s strategy. Hence the concern that there is a large corner in the adaptive topology where strong-arming always wins, that evolutionary search must seek out that corner, and that winners must accumulate there, thus ruling out cooperation as a prominent feature of evolutionary success.
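
For the curious, here is a rough simulation sketch of that setting. It is not Press and Dyson’s (or Adami and Hintze’s) code; the extortionate probabilities are the commonly cited chi = 3 example and should be read as illustrative.

    import random

    # Memory-one players: a tuple of cooperation probabilities conditioned on
    # the previous round's outcome (CC, CD, DC, DD) from that player's view.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    INDEX = {("C", "C"): 0, ("C", "D"): 1, ("D", "C"): 2, ("D", "D"): 3}

    ZD_EXTORT = (11 / 13, 1 / 2, 7 / 26, 0.0)   # extortionate zero-determinant play
    ALWAYS_COOPERATE = (1.0, 1.0, 1.0, 1.0)

    def average_payoffs(p, q, rounds=200_000, seed=1):
        random.seed(seed)
        last = ("C", "C")                       # assume both open with cooperation
        total_p = total_q = 0
        for _ in range(rounds):
            move_p = "C" if random.random() < p[INDEX[last]] else "D"
            move_q = "C" if random.random() < q[INDEX[(last[1], last[0])]] else "D"
            pay_p, pay_q = PAYOFF[(move_p, move_q)]
            total_p += pay_p
            total_q += pay_q
            last = (move_p, move_q)
        return total_p / rounds, total_q / rounds

    print(average_payoffs(ZD_EXTORT, ALWAYS_COOPERATE))

Against an unconditional cooperator, the extortioner’s surplus over the mutual-defection payoff comes out at roughly three times its opponent’s, which is exactly the strong-arming worry.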

But that can’t reflect the reality we think we see, where cooperation among primates and eusocial organisms alike seems to be the precursor to the kinds of virtues that are reflected in moral, religious, and ethical traditions. So what might be missing in this analysis? Christophe Adami and Arend Hintze at Michigan State may have some of the answers in their paper, Evolutionary instability of zero-determinant strategies demonstrates that winning is not everything.… Read the rest

Novelty in the Age of Criticism

Gary Gutting of Notre Dame and the New York Times knows how to incite an intellectual riot, as demonstrated by his most recent piece for The Stone, Mozart vs. the Beatles. “High art” is superior to “low art” because of its “stunning intellectual and emotional complexity.” He sums up:

My argument is that this distinctively aesthetic value is of great importance in our lives and that works of high art achieve it much more fully than do works of popular art.

But what makes up these notions of complexity and distinctive aesthetic value? One might try to enumerate those values or create a list. Alternatively, one might claim that time serves as a sieve for the values that Gutting claims make one work of art superior to another, leaving open the possibility that the enumerated-list approach is incomplete but still a useful retrospective system of valuation.

I previously argued in a 1994 paper (published in 1997), Complexity Formalisms, Order and Disorder in the Structure of Art, that simplicity and random chaos exist in a careful balance in art, a balance that reflects the underlying grammatical systems we use to predict the environment. And Jürgen Schmidhuber took the approach further by applying algorithmic information theory to the novelty-seeking behavior that leads, in turn, to aesthetically pleasing models. The reflection of this behavioral optimization in our sideline preoccupations emerges as art, with the ultimate causation machine of evolution driving the proximate consequences for men and women.
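
A crude way to see that balance is to use an off-the-shelf compressor as a stand-in for the uncomputable ideals of algorithmic information theory. Schmidhuber’s actual proposal concerns compression progress over time, so treat this static snapshot, with its made-up inputs, as a loose sketch:

    import random
    import zlib

    # zlib as a crude complexity probe: pure order compresses to almost
    # nothing, pure noise barely compresses, and varied structure sits between.
    random.seed(0)

    def compressed_ratio(data: bytes) -> float:
        return len(zlib.compress(data, 9)) / len(data)

    ordered = b"ab" * 2000
    noise = bytes(random.randrange(256) for _ in range(4000))
    structured = b"".join(b"ab" * random.randrange(1, 6) + b"cd" for _ in range(800))

    for name, data in [("ordered", ordered), ("noise", noise), ("structured", structured)]:
        print(name, round(compressed_ratio(data), 3))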

But let’s get back to the flaw I see in Gutting’s argument, one that fits better with Schmidhuber’s approach: much of what is important in art is cultural novelty. Picasso is not aesthetically superior to the detailed hyper-reality of the Dutch Masters, for instance, but is notable for his cultural deconstruction of the role of art as photography and reproduction took hold.… Read the rest