Language Games

On The Thinking Atheist, C.J. Werleman promotes the idea from his new book that atheists can’t be Republicans. Why? Well, for C.J. it’s because the current Republican platform is not grounded in any kind of factual reality. Supply-side economics, Libertarianism, economic stimuli vs. inflation, Iraqi WMDs, Laffer curves, climate change denial—all are grease for the wheels of a fantastical alternative reality where macho small businessmen lift all boats with their steely gaze, the earth is forever resilient to our plunder, and simple truths trump obscurantist science. Watch out for the reality-based community!

Is politics essentially religion in that it depends on ideology not grounded in reality, spearheaded by ideologues who serve as priests for building policy frameworks?

Likely. But we don’t really seem to base our daily interactions on rationality either. FiveThirtyEight Science tells us that it has taken decades to arrive at the conclusion that vitamin supplements are probably of little use to those of us lucky enough to live in the developed world. Before that we latched onto indirect signaling about vitamins C, E, D, B12, and others to decide how to proceed. The thinking typically took on familiar patterns: someone heard or read that vitamin X is good for us/I’m skeptical/why not?/maybe there are negative side-effects/it’s expensive anyway/forget it. The language games operate at all levels: promoting, doubting, processing, and reinforcing the microclaims for each option. We embrace signals about differences and nuances, but it often takes many months and collections of those signals to make up our minds. And then we change them again.

Among the well educated, I’ve variously heard the wildest claims about the effectiveness of chiropractors, pseudoscientific remedies, the role of immunizations in autism (not due to preservatives in this instance; due to immune responses themselves), and how karma works in software development practice.… Read the rest

Humbly Evolving in a Non-Simulated Universe

The New York Times seems to be catching up to me, first with an interview of Alvin Plantinga by Gary Gutting in The Stone on February 9th, and then with notes on Bostrom’s Simulation Hypothesis in the Sunday Times.

I didn’t see anything new in the Plantinga interview, but I reviewed my previous argument that adaptive fidelity combined with adaptive plasticity must raise the probability of rationality at a rate much greater than the contributions that would be “deceptive” or even mildly cognitively or perceptually biased (a toy illustration follows the excerpt below). Worth reading is Branden Fitelson and Elliott Sober’s very detailed analysis of Plantinga’s Evolutionary Argument Against Naturalism (EAAN), here. Most interesting are the beginning paragraphs of Section 3, which I reproduce here because they make a critical point that should surprise no one but often does:

Although Plantinga’s arguments don’t work, he has raised a question that needs to be answered by people who believe evolutionary theory and who also believe that this theory says that our cognitive abilities are in various ways imperfect. Evolutionary theory does say that a device that is reliable in the environment in which it evolved may be highly unreliable when used in a novel environment. It is perfectly possible that our mental machinery should work well on simple perceptual tasks, but be much less reliable when applied to theoretical matters. We hasten to add that this is possible, not inevitable. It may be that the cognitive procedures that work well in one domain also work well in another; Modus Ponens may be useful for avoiding tigers and for doing quantum physics.

Anyhow, if evolutionary theory does say that our ability to theorize about the world is apt to be rather unreliable, how are evolutionists to apply this point to their own theoretical beliefs, including their belief in evolution?
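To make the fidelity-and-plasticity claim above concrete, here is a toy selection sketch (emphatically not the Fitelson and Sober analysis; the mapping from perceptual accuracy to fitness is an assumption made purely for illustration) showing that even a modest reliability advantage tends to fix in a population:

```python
import random

# Toy Wright-Fisher-style sketch: two perceptual strategies, where fitness is
# assumed to be the probability of acting successfully on what is perceived.
# All numbers are arbitrary; only the qualitative outcome matters.

POP = 1000
GENERATIONS = 200
FITNESS = {"reliable": 0.75, "unreliable": 0.55}  # assumed action-success rates

def next_generation(pop):
    """Resample the population with probability proportional to fitness."""
    weights = [FITNESS[s] for s in pop]
    return random.choices(pop, weights=weights, k=POP)

pop = ["reliable"] * (POP // 2) + ["unreliable"] * (POP // 2)
for _ in range(GENERATIONS):
    pop = next_generation(pop)

print("share of reliable perceivers:", pop.count("reliable") / POP)
# With even a modest fitness edge, the reliable strategy almost always fixes.
```

The point is only that selection on action-guiding accuracy is indirectly selection on reliability; whether that pressure extends to theoretical reasoning is exactly the question Fitelson and Sober leave open above.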

Read the rest

Predicting Black Swans

Nassim Taleb’s 2nd Edition of The Black Swan argues—not unpersuasively—that rare, cataclysmic events dominate ordinary statistics. Indeed, he notes that almost all wealth accumulation is based on long-tail distributions where a small number of individuals reap unexpected rewards. The downsides are equally challenging: he notes that casinos lose money not in gambling, where the statistics are governed by Gaussians (the house always wins), but when tigers attack, when workers sue, and when other external factors intervene.
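To see how much a long tail can dominate, consider a quick sketch comparing a thin-tailed Gaussian with a heavy-tailed Pareto distribution; the parameters are arbitrary illustrations, not anything drawn from Taleb:

```python
import random

# Share of the total captured by the top 1% of draws under a thin-tailed
# (Gaussian) versus a heavy-tailed (Pareto) distribution. Parameters are
# arbitrary; only the qualitative contrast matters.

N = 100_000
random.seed(0)
gaussian = [abs(random.gauss(100, 15)) for _ in range(N)]
pareto = [random.paretovariate(1.1) for _ in range(N)]  # alpha near 1: very heavy tail

def top_share(xs, frac=0.01):
    xs = sorted(xs, reverse=True)
    k = int(len(xs) * frac)
    return sum(xs[:k]) / sum(xs)

print("Gaussian: top 1% of draws holds", round(top_share(gaussian), 3))
print("Pareto:   top 1% of draws holds", round(top_share(pareto), 3))
# The Gaussian top 1% holds barely more than 1% of the total; the Pareto
# top 1% can hold a large fraction of everything.
```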

Black Swan Theory adds an interesting challenge to modern inference theories like Algorithmic Information Theory (AIT) that anticipate predictability in the universe. Even variant coding approaches like Minimum Description Length (MDL) theory modify the anticipatory model based on relatively smooth error functions rather than on high-kurtosis distributions of change. And for the most part, for the regular events of life and our sensoriums, that is adequate. It is only when we start to look at rare existential threats that we begin to worry about Black Swans and inference.
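For readers who have not met the trade-off, here is a minimal two-part-code sketch of the MDL idea: total description length is the cost of stating the model plus the cost of stating the data given the model. The encoding choices below (32 bits per coefficient, a Gaussian residual code) are simplifying assumptions, not a canonical formulation:

```python
import math
import numpy as np

# Two-part-code sketch of MDL model selection over polynomial degree.
# description length = bits to state the model + bits to state the data given it.

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.1, size=x.shape)  # data from a simple linear law

def description_length(degree):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    sigma = max(residuals.std(), 1e-6)
    model_bits = 32.0 * (degree + 1)  # assumed cost per coefficient
    data_bits = len(residuals) * 0.5 * math.log2(2 * math.pi * math.e * sigma**2)
    return model_bits + data_bits

best = min(range(7), key=description_length)
print("degree chosen by the two-part code:", best)  # the low-degree model wins
```

Nothing in this scoring anticipates a rare, catastrophic residual, and that is precisely the gap Black Swan Theory points at.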

How might we modify the typical formulations of AIT and the trade-offs between model complexity and data to accommodate the exceedingly rare? Several approaches are possible. First, if we are combining a predictive model with a resource accumulation criterion, we can simply pad out the model memory by reducing kurtosis risk through additional resource accumulation; any downside is mitigated by storing nuts for a rainy day. That is a good strategy for moderately rare events like weather changes, droughts, and whatnot. But what about even rarer events like little ice ages and dinosaur-extinction-level meteorite hits? An alternative strategy is to maintain sufficient diversity in the face of radical unknowns that coping becomes a species-level achievement.… Read the rest

Contingency and Irreducibility

Thomas Nagel returns to defend his doubt concerning the completeness—if not the efficacy—of materialism in the explanation of mental phenomena in the New York Times. He quickly lays out the possibilities:

  1. Consciousness is an easy product of neurophysiological processes
  2. Consciousness is an illusion
  3. Consciousness is a fluke side-effect of other processes
  4. Consciousness is a divine property supervened on the physical world

Nagel concludes that all four are incorrect and that a naturalistic explanation is possible: one that isn’t “merely” (1) but is at least (1) plus something more. I previously commented on the argument, here, but the refinement of his specifications requires a more targeted response.

Let’s call Nagel’s new perspective Theory 1+ for simplicity. What form might 1+ take? For Nagel, the notion seems to combine Chalmers-style qualia with a deep appreciation for the contingencies that factor into the personal evolution of individual consciousness. The latter is certainly redundant in that individuality must be absolutely tied to personal experiences and narratives.

We might get some traction on this concept by looking to biological evolution, though “ontogeny recapitulates phylogeny” is about as close as we can get: any kind of evolutionary psychology looks for patterns that reinforce the interpretation of basic aspects of cognitive evolution (sex, reproduction, and so on) rather than exploring the more numinous aspects of conscious development. So we might instead look for parallel theories that focus on the uniqueness of outcomes and that reify temporal evolution without reference to controlling biology, which brings us to ideas like uncomputability as a backstop. More specifically, we can explore computational irreducibility to support the development of Nagel’s new theory: insofar as the environment lapses toward weak predictability, a consciousness that self-observes, regulates, and builds many complex models and metamodels is superior to one that does not.… Read the rest
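Computational irreducibility is easy to exhibit in miniature. The sketch below runs Wolfram’s Rule 30, for which, on generic initial conditions, no known shortcut yields the state at step t other than running all t steps; the width and step count are arbitrary display choices:

```python
# Elementary cellular automaton, Rule 30, with periodic boundaries.
# The only general way to learn row t is to compute rows 1 through t-1 first.

RULE = 30
WIDTH, STEPS = 63, 20

def step(cells):
    """Apply the elementary CA rule to one row of cells."""
    out = []
    for i in range(len(cells)):
        left = cells[(i - 1) % len(cells)]
        center = cells[i]
        right = cells[(i + 1) % len(cells)]
        idx = (left << 2) | (center << 1) | right
        out.append((RULE >> idx) & 1)
    return out

row = [0] * WIDTH
row[WIDTH // 2] = 1  # single seed cell
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```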

Towards an Epistemology of Uncertainty (the “I Don’t Know” club)

Today there was an acute overlay of reinforcing ideas when I encountered Sylvia McLain’s piece in Occam’s Corner on The Guardian drawing out Niall Ferguson for deriving Keynesianism from Keynes’ gayness. And just when I was digesting Lee Smolin’s new book, Time Reborn: From the Crisis in Physics to the Future of the Universe.

The intersection was a tutorial in the limits of expansive scientism and in how far-reaching conclusions lead to unexpected outcomes. We get to euthanasia and forced sterilization down that path, or just a perception of senility when it comes to Ferguson. The fix to this kind of programme is fairly simple: doubt. I doubt that there is any coherent model that connects sexual orientation to economic theory. I doubt that selective breeding and euthanasia can do anything more than lead to inbreeding depression. Or, for Smolin, I doubt that the scientific conclusions we have reached so far are the end of the road.

That wasn’t too hard, was it?

The I Don’t Know club is pretty easy to join. All one needs is intellectual honesty and earnestness.… Read the rest

A Paradigm of Guessing

The most interesting thing I’ve read this week comes from Jürgen Schmidhuber’s paper, Algorithmic Theories of Everything, which should be provocative enough to pique the most jaded of interests. And the quote is from way into the paper:

The first number is 2, the second is 4, the third is 6, the fourth is 8. What is the fifth? The correct answer is “250,” because the nth number is n^5 − 5n^4 − 15n^3 + 125n^2 − 224n + 120. In certain IQ tests, however, the answer “250” will not yield maximal score, because it does not seem to be the “simplest” answer consistent with the data (compare [73]). And physicists and others favor “simple” explanations of observations.
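Schmidhuber’s polynomial does check out, as the short verification below confirms; and by Lagrange interpolation one can always build such a polynomial to make the fifth number anything whatsoever, which is the point about how many hypotheses remain consistent with finite data:

```python
# Verify the degree-5 polynomial from the quote reproduces 2, 4, 6, 8, 250.

def p(n):
    return n**5 - 5*n**4 - 15*n**3 + 125*n**2 - 224*n + 120

print([p(n) for n in range(1, 6)])  # [2, 4, 6, 8, 250]
```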

And this is the beginning and the end of logical positivism. How can we assign truth to inductive judgments without crossing from fact to value, and what should that value system be?… Read the rest

The Churches of Evil

The New York Times continues to mine the dark territory between religious belief and atheism in a series of articles in the opinion section, with the most recent being Gary Gutting’s thoughtful meditation on agnosticism, ways of knowing, and the contributions of religion to individual lives and society. In response, Penn Jillette and others discuss atheism as a religion-like venture.

We can dissect Gutting’s argument while still being generous to his overall thrust. It is certainly true that, aside from the specific knowledge claims of religious people, there are traditions of practice that result in positive outcomes for religious folk. But when we drill into the knowledge dimension, Gutting props up Alvin Plantinga and Richard Swinburne as representing “the role of evidence and argument” in advanced religious argument. He might have done better to restrict the statement to “argument” in this case, because both philosophers focus primarily on argument in their philosophical works. So evidence remains elusively private in the eyes of the believer.

Interestingly, many of the arguments of both are simply arguments against a counter-assumption that anticipates a secular universe. For instance, Plantinga’s free will defense shows that the Logical Problem of Evil does not establish a contradiction, with the conclusion that evil (setting aside “natural evil” for the moment) is not logically incompatible with omnibenevolence, omnipotence, and omniscience. But, and here we get back to Gutting, it does nothing to persuade us that the rapacious cruelty of Yahweh, much less the moral evil expressed in the new concept of Hell in the New Testament, is anything more than logically possible. The human dimension and the appropriate moral outrage are unabated, and we loop back to the generosity of Gutting towards the religious: shouldn’t we extend equal generosity to the scriptural problem of evil as expressed in everything from the Hebrew Bible through to the Book of Mormon?… Read the rest
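For reference, the structure of Plantinga’s defense compresses into a few lines (a standard reconstruction, not a quotation of his text):

```latex
% Let O = "an omnipotent, omniscient, wholly good God exists" and E = "evil
% exists". The logical problem of evil claims {O, E} is inconsistent. The
% defense exhibits a proposition F, e.g. "God could not actualize a world
% containing moral good but no moral evil", such that
\[
  \Diamond (O \wedge F) \qquad \text{and} \qquad (O \wedge F) \vDash E .
\]
% If O and F are jointly possible and together entail E, then O and E are
% jointly possible, so the alleged contradiction fails. That consistency
% claim is all the defense establishes, which is the complaint above.
```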

Randomness and Meaning

The impossibility of the Chinese Room has implications across the board for understanding what meaning means. Mark Walker’s paper “On the Intertranslatability of all Natural Languages” describes how the translation of words and phrases may be achieved:

  1. Through a simple correspondence scheme (word for word)
  2. Through “syntactic” expansion of the languages to accommodate concepts that have no obvious equivalence (“optometrist” => “doctor for eye problems”, etc.)
  3. Through incorporation of foreign words and phrases as “loan words”
  4. Through “semantic” expansion where the foreign word is defined through its coherence within a larger knowledge network.

An example for (4) is the word “lepton” where many languages do not have a corresponding concept and, in fact, the concept is dependent on a bulwark of advanced concepts from particle physics. There may be no way to create a superposition of the meanings of other words using (2) to adequately handle “lepton.”

These problems present again for trying to understand how children acquire meaning in learning a language. As Walker points out, language learning for a second language must involve the same kinds of steps as learning translations, so any simple correspondence theory has to be supplemented.

So how do we make adequate judgments about meanings and so rapidly learn words, often initially with a coarse granularity but later with increasingly sharp levels of focus? What procedure is required for expanding correspondence theories to operate in larger networks? Methods like Latent Semantic Analysis and Random Indexing show how this can be achieved in ways that are illuminating about human cognition. In each case, the methods provide insights into how relatively simple transformations of terms and their occurrence contexts can be viewed as providing a form of “triangulation” about the meaning of words.… Read the rest
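A toy version of the random indexing idea mentioned above (the corpus, dimensionality, and sparsity are made-up illustrations): each context gets a fixed sparse random index vector, and a term’s vector is the running sum of the index vectors of the contexts it occurs in, so terms that share contexts come to point in similar directions.

```python
import random
from math import sqrt

# Toy random indexing: meaning as accumulated context signatures.
random.seed(1)
DIM, NONZERO = 512, 8

def index_vector():
    """A sparse random +/-1 vector acting as a context's signature."""
    v = [0] * DIM
    for pos in random.sample(range(DIM), NONZERO):
        v[pos] = random.choice([-1, 1])
    return v

corpus = [  # (term, context) pairs; entirely fabricated for illustration
    ("optometrist", "eye exam clinic"),
    ("ophthalmologist", "eye exam clinic"),
    ("lepton", "particle physics detector"),
    ("lepton", "standard model charge"),
    ("electron", "particle physics detector"),
    ("electron", "standard model charge"),
]

context_vectors, term_vectors = {}, {}
for term, context in corpus:
    cv = context_vectors.setdefault(context, index_vector())
    tv = term_vectors.setdefault(term, [0] * DIM)
    term_vectors[term] = [a + b for a, b in zip(tv, cv)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

print(cosine(term_vectors["lepton"], term_vectors["electron"]))     # high
print(cosine(term_vectors["lepton"], term_vectors["optometrist"]))  # near zero
```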

The Unreasonable Success of Reason

Math and natural philosophy were discovered several times in human history: Classical Greece, Medieval Islam, Renaissance Europe. Arguably, the latter two were strongly influenced by the former, but even so they built additional explanatory frameworks. Moreover, the explosion that arose from Europe became the Enlightenment and the modern edifice of science and technology.

So, on the eve of an eclipse that sufficiently darkened the skies of Northern California, it is worth noting the unreasonable success of reason. The gods are not angry. The spirits are not threatening us over a failure to properly propitiate their symbolic requirements. Instead, the mathematics worked predictively and perfectly to explain a wholly natural phenomenon.

But why should the mathematics work so exceptionally well? It could be otherwise, as Eugene Wigner’s marvelous 1960 paper, The Unreasonable Effectiveness of Mathematics in the Natural Sciences, points out:

All the laws of nature are conditional statements which permit a prediction of some future events on the basis of the knowledge of the present, except that some aspects of the present state of the world, in practice the overwhelming majority of the determinants of the present state of the world, are irrelevant from the point of view of the prediction.

A possible explanation of the physicist’s use of mathematics to formulate his laws of nature is that he is a somewhat irresponsible person. As a result, when he finds a connection between two quantities which resembles a connection well-known from mathematics, he will jump at the conclusion that the connection is that discussed in mathematics simply because he does not know of any other similar connection.

Galileo’s rocks fall at the same rates but only provided that they are not unduly flat and light.… Read the rest