Bayesianism and Properly Basic Belief

Xu and Tenenbaum, in Word Learning as Bayesian Inference (Psychological Review, 2007), develop a very simple Bayesian model of how children (and even adults) build semantic associations based on accumulated evidence. In short, they find contrastive-elimination approaches as well as connectionist methods unable to explain the patterns that are observed. The most salient problem with these other methods is that they lack the rapid transition that is seen when three exemplars of a class of objects are presented for a word versus only one. Adults and kids (the former even more so) just get word meanings faster than those other models can easily show. Moreover, a space of contending hypotheses weighted according to their Bayesian posteriors provides an escape from the all-or-nothing of hypothesis elimination while retaining some of the “soft” commitment properties that connectionist models provide.

The mathematical trick behind the rapid transition is rather interesting. They formulate a “size principle” that weights the likelihood of a given hypothesis (this object is most similar to a “feb,” for instance, rather than to the many other object sets that are available) by a factor that shrinks with the size of the hypothesis’s extension and compounds exponentially with the number of consistent examples. Hence the rapid transition:

Hypotheses with smaller extensions assign greater probability than do larger hypotheses to the same data, and they assign exponentially greater probability as the number of consistent examples increases.
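The size principle is easy to sketch. Below is a minimal illustration of my own (not the authors’ code); the hypothesis names, extension sizes, and priors are hypothetical. The likelihood of n independently sampled consistent examples under a hypothesis h is (1/|h|)^n, so a narrow hypothesis overtakes a broader, initially favored one after only a few examples:

```python
# Sketch of the "size principle": a hypothesis h is a set of objects, and n
# consistent examples have likelihood (1/|h|)^n, so smaller hypotheses gain
# exponentially as examples accumulate.

def posterior(hypotheses, priors, n_examples):
    """Posterior over hypotheses after n_examples consistent examples.

    hypotheses: dict mapping name -> extension size |h|
    priors:     dict mapping name -> prior probability
    """
    weights = {h: priors[h] * (1.0 / size) ** n_examples
               for h, size in hypotheses.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# A narrow hypothesis (10 objects) vs. a broad one (100 objects), with the
# broad one favored a priori -- all numbers illustrative.
hyps = {"dalmatians": 10, "dogs": 100}
pri = {"dalmatians": 0.1, "dogs": 0.9}

print(posterior(hyps, pri, 1))  # after one example: still a close contest
print(posterior(hyps, pri, 3))  # after three: the narrow hypothesis dominates
```

With one example the two hypotheses remain roughly matched, but by three examples the narrow hypothesis holds nearly all the posterior mass: the rapid transition in miniature.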

It should be noted that they don’t claim that the psychological or brain machinery implements exactly this algorithm. As is usual in these matters, it is instead likely that whatever machinery is involved, it simply has at least these properties. It may very well be that connectionist architectures can do the same but that existing approaches to connectionism simply don’t do it quite the right way.… Read the rest

Rationality and the Intelligibility of Philosophy

There is a pervasive meme in the physics community that holds as follows: many physical phenomena don’t correspond in any easy way to our ordinary experiences of life on earth. We have wave-particle duality, wherein things behave like waves sometimes and particles other times. We have entanglement of physically distant things. We have quantum indeterminacy and the emergence of stuff out of nothing. The tiny world looks like some kind of strange hologram with bits connected together by virtual strings. We have a universe that began out of nothing and that begat time itself. It is, in this framework, worthwhile to recognize that our everyday experiences are not necessarily useful (and are often confounding) when trying to understand the deep new worlds of quantum and relativistic physics.

And so it is worthwhile to ask whether many of the “rational” queries that have been made down through time have any intelligible meaning given our modern understanding of the cosmos. For instance, if we state the premise “all things are either contingent or necessary” that underlies a poor form of the Kalam Cosmological Argument, we can immediately question the premise itself. And a failed premise leads to a failed syllogism. Maybe the entanglement of different things is part and parcel of the entanglement of large-scale space-time, and the insights we have so far are merely shadows of the real processes acting behind the scenes? Who knows what happened before the Big Bang?

In other words, do the manipulations of logic and the assumptions built into the terms lead us to empty and destructive conclusions? There is no reason not to suspect that and therefore the bits of rationality that don’t derive from empirical results are immediately suspect.… Read the rest

A Critique of Pure Randomness

The notion of randomness brings about many interesting considerations. For statisticians, randomness is a series of events whose chances are governed by a distribution function. In everyday parlance, “equally likely” means random, while an even more common semantics is based on both how unlikely and how unmotivated an event might be (“That was soooo random!”). In physics, only certain physical phenomena can be said to be truly random, including the probability of a given nucleus decomposing into other nuclei via fission. The exact position of a quantum thingy is equally random when its momentum is nailed down, and vice-versa. Vacuums have a certain chance of spontaneously creating matter, too, and that chance appears to be perfectly random. In algorithmic information theory, a random sequence of bits is a sequence that can’t be represented by a smaller descriptive algorithm–it is incompressible. Strangely enough, we simulate random number generators using a compact algorithm whose complicated series of steps traces an almost-impossible-to-follow trajectory through a deterministic space of possibilities; it’s acceptable for a generator to be just random enough that its parameters can’t easily be reverse-engineered and the next “random” number guessed.
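The deterministic heart of pseudo-randomness is easy to see in a minimal linear congruential generator. This is an illustrative sketch, not what any serious library uses today; the constants are the well-known “Numerical Recipes” LCG parameters:

```python
# A minimal linear congruential generator (LCG): a compact, fully
# deterministic algorithm whose outputs merely look random. Re-seeding
# reproduces the identical sequence, exposing the determinism.

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Yield uniform-looking floats in [0, 1) from the recurrence
    state <- (a * state + c) mod m."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m

gen = lcg(seed=42)
sample = [next(gen) for _ in range(5)]

# The "random" stream is perfectly reproducible from the seed.
gen_again = lcg(seed=42)
assert sample == [next(gen_again) for _ in range(5)]
print(sample)
```

An observer who recovers a, c, and m (easy for an LCG, hard for cryptographic generators) can predict every subsequent “random” number, which is exactly the reverse-engineering worry mentioned above.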

One area where we often speak of randomness is biological evolution. Random mutations lead to change and to deleterious effects like dead-end evolutionary experiments. Or so we hypothesized. The exact mechanisms of the transmission of inheritance and of mutations were unknown to Darwin, but soon, in the evolutionary synthesis, notions like random genetic drift and the role of ionizing radiation and other external factors became exciting candidates for explaining the variation required for evolution to function. Amusingly, arguing largely from a stance that might be called a fallacy of incredulity, creationists have often seized on a perceived logical disconnect between the appearance of purpose, both in our lives and in the mechanisms of biological existence, and the assumption of underlying randomness and non-directedness, offering it as evidence of the paucity of arguments from randomness.… Read the rest

Informational Chaff and Metaphors

I received word last night that our scholarship has received over 1,400 applications, which definitely surprised me. I had worried that the regional restriction might be too limiting, but Agricultural Sciences were added in as part of STEM, so that probably magnified the pool.

Dan Dennett of Tufts and Deb Roy at MIT draw parallels between informational transparency in our modern world and biological mechanisms in Scientific American (March 2015, 312:3). Their article, Our Transparent Future (related video here; you have to subscribe to read the full article), starts with Andrew Parker’s theory that the Cambrian Explosion may have been tied to the availability of light as cloud cover lifted and the seas became transparent. An evolutionary arms race began for the development of sensors that could warn against predators, and of predators that could acquire more prey.

They continue drawing parallels to biological processes, including squid ink and how a similar notion, chaff, was used to mask radar signatures as aircraft became weapons of war. The explanatory mouthful of the Multiple Independently-targetable Reentry Vehicle (MIRV), with dummy warheads to counter anti-ballistic missiles, was likewise a deceptive way of reducing the risk of interception. So Dennett and Roy “predict the introduction of chaff made of nothing but megabytes of misinformation,” designed to deceive search engines about the nature of real information.

This is a curious idea. Search engine optimization (SEO) is a whole industry that combines consulting with tricks and tools to try to raise the position of vendors in the Google rankings. Being in the first page of listings can be make-or-break for retail vendors, and they pay to try to make that happen. The strategies are based around trying to establish links to the vendor from individuals and other pages to try to game the PageRank algorithm.… Read the rest
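The algorithm being gamed can be sketched as a simple power iteration over a hypothetical link graph. The page names and graph are invented for illustration; real PageRank also handles dangling pages and runs at web scale:

```python
# Toy power-iteration PageRank over a hypothetical four-page link graph,
# showing why manufactured inbound links raise a page's rank. Damping
# factor 0.85 is the value used in the original PageRank paper.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to.
    Assumes every page has at least one outgoing link."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Each page q splits its rank evenly among its outgoing links.
            inbound = sum(rank[q] / len(links[q])
                          for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * inbound
        rank = new
    return rank

# "vendor" accumulates rank because three other pages link to it --
# precisely the structure SEO link-building tries to manufacture.
graph = {"vendor": ["blog"],
         "blog": ["vendor"],
         "forum": ["vendor", "blog"],
         "news": ["vendor"]}
print(pagerank(graph))
```

In this toy graph, "vendor" ends up with the highest rank purely because of its inbound-link structure, which is why link farms aimed at gaming the ranking became an arms race of their own.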

Language Games

On The Thinking Atheist, C.J. Werleman promotes the idea from his new book that atheists can’t be Republicans. Why? Well, for C.J. it’s because the current Republican platform is not grounded in any kind of factual reality. Supply-side economics, Libertarianism, economic stimulus vs. inflation, Iraqi WMDs, Laffer curves, climate change denial—all are grease for the wheels of a fantastical alternative reality where macho small businessmen lift all boats with their steely gaze, the earth is forever resilient to our plunder, and simple truths trump obscurantist science. Watch out for the reality-based community!

Is politics essentially religion in that it depends on ideology not grounded in reality, spearheaded by ideologues who serve as priests for building policy frameworks?

Likely. But we don’t really seem to base our daily interactions on rationality either. FiveThirtyEight’s science desk tells us that it has taken decades to arrive at the conclusion that vitamin supplements are probably of little use to those of us lucky enough to live in the developed world. Before that we latched onto indirect signaling about vitamins C, E, D, B12, and others to decide how to proceed. The thinking typically took on familiar patterns: someone heard or read that vitamin X is good for us/I’m skeptical/why not?/maybe there are negative side-effects/it’s expensive anyway/forget it. The language games operate at every level in promoting, doubting, processing, and reinforcing the microclaims for each option. We embrace signals about differences and nuances, but it often takes many months and collections of those signals to make up our minds. And then we change them again.

Among the well educated, I’ve variously heard the wildest claims about the effectiveness of chiropractors, pseudoscientific remedies, the role of immunizations in autism (not due to preservatives in this instance; due to immune responses themselves), and how karma works in software development practice.… Read the rest

Just So Disruptive

The “just so” story is a pejorative label for explanations of cultural or physical traits that are unfalsifiable and theoretically fitted to current observations after the fact. Less controversial and less pejorative is the essential character of evolutionary process, where there is no doubt that genetic alternatives will mostly fail. The ones that survive this crucible are disruptive to the status quo, sure, but these disruptions tend to be geographically or sexually isolated from the main population anyway, so they are more an expansion than a disruption; little competition is tooth-and-claw, and species mostly survive against the environment, not one another.

Jill Lepore of Harvard subjects business theory to a similar crucible in The New Yorker, questioning Clayton Christensen’s classic argument in The Innovator’s Dilemma that businesses are unwilling to adapt to changing markets because they are making rational business decisions to maximize profits. After analyzing core business cases from Christensen’s books, Lepore concludes that the argument holds little water and that its predictions are both poor and inapplicable to other areas like journalism and college education.

Central to her critique is her analysis of the “just so” nature of disruptive innovation:

Christensen has compared the theory of disruptive innovation to a theory of nature: the theory of evolution. But among the many differences between disruption and evolution is that the advocates of disruption have an affinity for circular arguments. If an established company doesn’t disrupt, it will fail, and if it fails it must be because it didn’t disrupt. When a startup fails, that’s a success, since epidemic failure is a hallmark of disruptive innovation. (“Stop being afraid of failure and start embracing it,” the organizers of FailCon, an annual conference, implore, suggesting that, in the era of disruption, innovators face unprecedented challenges.)

Read the rest

Computing the Madness of People

The best paper I’ve read so far this year has to be Pseudo-Mathematics and Financial Charlatanism: The Effects of Backtest Overfitting on Out-of-Sample Performance by David Bailey, Jonathan Borwein, Marcos López de Prado, and Qiji Jim Zhu. The title should ring alarm bells with anyone who has ever puzzled over the disclaimers made by mutual funds or investment strategists that “past performance is not a guarantee of future performance.” No, but we have nothing but that past performance to judge the fund or firm on; we could just pick based on vague investment “philosophies” like those the heroizing profiles in Kiplinger’s seem to promote, or trust that all the arbitraging has squeezed the markets into perfect equilibria and therefore just use index funds.

The paper’s core tenets extend well beyond financial charlatanism, however. The authors point out that the same problem arises in drug discovery, where the main effects of novel compounds may be due to pure randomness in the sample population in a way that is masked by the sample selection procedure. The history of mental illness research has similar failures, with the head of NIMH remarking that using clinical trials and the DSM to treat psychiatric symptoms too often amounts to “shooting in the dark.”

The core suggestion of the paper is remarkably simple: use held-out data to validate models. Remarkably simple, but apparently rarely done in quantitative financial analysis. The researchers show how simple random walks can look like a seasonal price pattern, and how, by sending binary signals about market performance to clients (market will rise/market will fall), investment advisors can create a subpopulation that thinks they are geniuses as the other clients walk away due to losses. Those tricks rise to the level of charlatanism, but the broader problem of overfitting is just pseudo-mathematics: insufficient care in managing the data.… Read the rest
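The selection effect is easy to reproduce. In this sketch of mine (not the paper’s code; the strategy count and horizon are arbitrary), every “strategy” is pure coin-flip noise, yet the best in-sample performer looks skillful until held-out data is consulted:

```python
# Backtest overfitting in miniature: among many strategies that are pure
# noise, the best in-sample performer looks like a genius; its held-out
# performance reveals the illusion.

import random

random.seed(0)  # fixed seed so the demonstration is reproducible

def simulate_returns(n_days):
    # Each "strategy" is a fair coin: +1 or -1 per day, no skill at all.
    return [random.choice([-1, 1]) for _ in range(n_days)]

n_strategies, in_days, out_days = 200, 250, 250
strategies = [(simulate_returns(in_days), simulate_returns(out_days))
              for _ in range(n_strategies)]

# Select the strategy with the best in-sample cumulative return -- the
# step that quietly overfits when no held-out data is kept aside.
best_in, best_out = max(strategies, key=lambda s: sum(s[0]))

print("best in-sample return:", sum(best_in))      # looks impressive
print("its out-of-sample return:", sum(best_out))  # typically near zero
```

The in-sample winner was selected precisely because it got lucky, so its out-of-sample performance reverts toward zero; keeping a validation set aside before selection is the paper’s simple remedy.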

Humbly Evolving in a Non-Simulated Universe

The New York Times seems to be catching up to me, first with an interview of Alvin Plantinga by Gary Gutting in The Stone on February 9th, and then with notes on Bostrom’s Simulation Hypothesis in the Sunday Times.

I didn’t see anything new in the Plantinga interview, but reviewed my previous argument that adaptive fidelity combined with adaptive plasticity must raise the probability of rationality at a rate much greater than the contributions that would be “deceptive” or even mildly cognitively or perceptually biased. Worth reading is Branden Fitelson and Elliott Sober’s very detailed analysis of Plantinga’s Evolutionary Argument Against Naturalism (EAAN), here. Most interesting are the beginning paragraphs of Section 3, which I reproduce here because they make a critical point that should surprise no one but often does:

Although Plantinga’s arguments don’t work, he has raised a question that needs to be answered by people who believe evolutionary theory and who also believe that this theory says that our cognitive abilities are in various ways imperfect. Evolutionary theory does say that a device that is reliable in the environment in which it evolved may be highly unreliable when used in a novel environment. It is perfectly possible that our mental machinery should work well on simple perceptual tasks, but be much less reliable when applied to theoretical matters. We hasten to add that this is possible, not inevitable. It may be that the cognitive procedures that work well in one domain also work well in another; Modus Ponens may be useful for avoiding tigers and for doing quantum physics.

Anyhow, if evolutionary theory does say that our ability to theorize about the world is apt to be rather unreliable, how are evolutionists to apply this point to their own theoretical beliefs, including their belief in evolution?

Read the rest

Substitutions, Permutations, and Economic Uncertainty

When Robert Shiller was awarded the near-Nobel for economics, there was also a tacit blessing that the limits of economics as a science were being recognized. You see, Shiller’s most important contributions included debunking the essentials of rational market behavior and replacing them with the irrational tendencies described by behavioral psychology.

Shiller’s pairing with Eugene Fama in the Nobel award is ironic in that Fama is the father of the efficient market hypothesis, which suggests that rational behavior should overcome those irrational tendencies to reach a cybernetic homeostasis…if only the system were free of regulatory entanglements that drag on the clarity of the mass signals. And all these bubbles that grow and burst would then be smoothed out of the economy.

But technological innovation can sometimes trump old-school musings and analysis: Bitcoin represents a bubble in value under the efficient market hypothesis because the currency’s value has no underlying factual basis. As the economist John Quiggin points out in The National Interest:

But in the case of Bitcoin, there is no source of value whatsoever. The computing power used to mine the Bitcoin is gone once the run has finished and cannot be reused for a more productive purpose. If Bitcoins cease to be accepted in payment for goods and services, their value will be precisely zero.

In fact, that specific computing power consists of just two basic functions: substitution and permutation. Some long string of transaction data has all its bits substituted with other bits; then blocks of those bits are rotated and generally permuted until we end up with a bit signature that is of fixed length but statistically uncorrelated with the original content. And there is no other value to those specific (and hard-to-do) computations.… Read the rest
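A toy substitution-permutation round makes the two operations concrete. This is an illustrative sketch of the general construction, not SHA-2’s actual compression function (which iterates rounds of 32-bit additions, rotations, and choice/majority functions); the S-box here is invented:

```python
# A toy substitution-permutation round on a single byte. Substitution
# replaces bits via a lookup table; permutation rotates them. Real hashes
# iterate many such rounds until the output is statistically uncorrelated
# with the input.

# Toy 4-bit S-box: x -> 7x + 3 (mod 16) is a bijection since gcd(7, 16) = 1.
SBOX = [(x * 7 + 3) % 16 for x in range(16)]

def substitute(byte):
    """Apply the S-box to each 4-bit nibble of the byte."""
    hi, lo = byte >> 4, byte & 0x0F
    return (SBOX[hi] << 4) | SBOX[lo]

def permute(byte, shift=3):
    """Rotate the byte's bits left by `shift` positions."""
    return ((byte << shift) | (byte >> (8 - shift))) & 0xFF

def toy_round(byte):
    return permute(substitute(byte))

# Nearby inputs scramble into very different outputs, the avalanche-like
# behavior that makes the result look uncorrelated with the input.
print(toy_round(0x00), toy_round(0x01))
```

Because the S-box and the rotation are each bijective, the round is invertible and wastes no information; only the repeated composition of many such rounds, keyed by the message schedule, makes inversion computationally infeasible in a real hash.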

Towards an Epistemology of Uncertainty (the “I Don’t Know” club)

Today there was an acute overlay of reinforcing ideas when I encountered Sylvia McLain’s piece in Occam’s Corner on The Guardian calling out Niall Ferguson for deriving Keynesianism from Keynes’ gayness. And just when I was digesting Lee Smolin’s new book, Time Reborn: From the Crisis in Physics to the Future of the Universe.

The intersection was a tutorial in the limits of expansive scientism and in how confident conclusions can lead to unexpected outcomes. We get to euthanasia and forced sterilization down that path–or merely to a perception of senility in Ferguson’s case. The fix for this kind of programme is fairly simple: doubt. I doubt that there is any coherent model connecting sexual orientation to economic theory. I doubt that selective breeding and euthanasia can do anything more than lead to inbreeding depression. Or, for Smolin, I doubt that the scientific conclusions we have reached so far are the end of the road.

That wasn’t too hard, was it?

The I Don’t Know club is pretty easy to join. All one needs is intellectual honesty and earnestness.… Read the rest