Solomonoff Induction, Truth, and Theism

LukeProg of CommonSenseAtheism fame created a bit of a row when he declared that Solomonoff Induction largely rules out theism, going on to expand on the theme:

If I want to pull somebody away from magical thinking, I don’t need to mention atheism. Instead, I teach them Kolmogorov complexity and Bayesian updating. I show them the many ways our minds trick us. I show them the detailed neuroscience of human decision-making. I show them that we can see (in the brain) a behavior being selected up to 10 seconds before a person is consciously aware of ‘making’ that decision. I explain timelessness.

The CSA community got riled up about these statements for several reasons, and the critiques took different forms:

  • The focus on Solomonoff Induction/Kolmogorov Complexity is obscurantist in its use of recondite technical terminology.
  • The author is ignoring deductive arguments that support theist claims.
  • The author has joined a cult.
  • Inductive claims based on Solomonoff/Kolmogorov are no different from Reasoning to the Best Explanation.

I think all of these critiques are partially valid (though I don’t think there are any good reasons for thinking theism is true), and the fourth one, which I contributed, was a personal realization for me. Though I have been fascinated with topics related to Kolmogorov complexity since the early 90s, I don’t think they are directly applicable to the question of theism versus atheism. Whether we are discussing the historical validity of Biblical claims or the logical consistency of extensions to notions of omnipotence or omniscience, I can’t think of a way these highly mathematical concepts apply directly.

But what are we talking about? Solomonoff Induction, Kolmogorov Complexity, Minimum Description Length, Algorithmic Information Theory, and related ideas are formalizations of William of Occam’s (variously Ockham) principle, known as Occam’s Razor, that given multiple explanations of a phenomenon, one should prefer the simpler one.… Read the rest
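The intuition behind these formalizations can be sketched in a few lines of Python. True Kolmogorov complexity is uncomputable, so the sketch below uses zlib compression as a crude, computable upper bound: a string with a short "explanation" (a pattern) compresses far better than one without. Everything here is an illustrative assumption, not part of any of the formal theories named above.

```python
import hashlib
import zlib

def description_length(data: bytes) -> int:
    """Upper-bound the Kolmogorov complexity of data by its zlib-compressed size."""
    return len(zlib.compress(data, 9))

# A highly regular 1000-byte string: a short program ("repeat 'ab' 500 times")
# could reproduce it, and zlib captures that redundancy.
regular = b"ab" * 500

# A deterministic but statistically patternless 1000-byte string built from
# SHA-256 digests; it leaves zlib almost no redundancy to exploit.
irregular = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(32))[:1000]

print(description_length(regular), description_length(irregular))
```

Both strings are the same length, but the patterned one has a much shorter description; in the MDL spirit, a hypothesis that lets you compress your observations this way is the one to prefer.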

Simulated Experimental Morality

I’m deep in Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. It’s also only about the third book I’ve tried to read exclusively on the iPad, but I am finally getting used to the platform. The core thesis of Pinker’s book is something that I have been experimentally testing on people for several years: our moral faculties and decision-making are gradually improving. For Pinker, the thesis is built up elaborately from basic estimates of death rates due to war and homicide in non-state versus state societies. It comes with an uncomfortable inversion of the nobility of the savage mind: primitive people had a lot to fight about and often did.

My first contact with the notion that morality is changing and improving was Richard Dawkins’s observation in The God Delusion that most modern Westerners feel very uncomfortable with the firebombing of Tokyo in World War II, the saturation bombing of Hanoi, nuclear attack against civilian populations, or treating people inhumanely based on race or ethnicity. Yet that wasn’t the case just decades ago. More moral drift can be seen in changing sentiments concerning the rights of gay people to marry. Experimentally, then, I would ask, over dinner or conversation, about simple moral trolley experiments and then move on to ask whether anyone would condone nuclear attack against civilian populations. There is always a first response of “no” to the latter, which reflects a gut moral sentiment, though a few people have agreed that it may be “permissible” (to use the language of these kinds of dilemmas) in response to a similar attack and when there may be “command and control assets” mixed into the attack area.… Read the rest

Evolution, Rationality, and Artificial Intelligence

We now know that our cognitive faculties are not perfectly rational. Indeed, our cultural memory has regularly reflected that fact. But we often thought we might be getting a handle on what it means to be rational by developing models of what good thinking might look like and using them in political, philosophical, and scientific discourse. The models were based on nascent ideas like the logical coherence of arguments, internal consistency, the avoidance of tautologies, and consistency with empirical data.

But an interesting and quite basic question is why we should be able to formulate logical rules and create increasingly impressive systems of theory and observation given our complex evolutionary history. We have big brains, sure, but they evolved to manage social relationships and find resources–not to understand the algebraic topology of prime numbers or the statistical oddities of quantum mechanics–yet they seem well suited to these newer and more abstract tasks.

Alvin Plantinga, a theist and modern philosopher whose work has touched everything from epistemology to philosophy of religion, formulated his Evolutionary Argument Against Naturalism (EAAN) as a kind of complaint that the likelihood of rationality arising from evolutionary processes is very low (really he is most concerned with the probability of “reliability,” by which he means that most of our conclusions and observations are true, but I am substituting rationality for this with an additional Bayesian overlay).
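That Bayesian overlay can be made concrete with a toy update rule. The specific numbers below are illustrative assumptions of mine, not anything Plantinga proposed: they just show how a posterior on “our faculties are reliable” climbs as predictions made by those faculties keep coming true.

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) from P(H), P(E | H), and P(E | not H) via Bayes' theorem."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

p = 0.5  # agnostic prior that our faculties are reliable (illustrative)
for _ in range(10):
    # Assume each successful prediction is more likely if our faculties are
    # reliable (0.9) than if they are not (0.5); both numbers are made up.
    p = bayes_update(p, 0.9, 0.5)

print(round(p, 4))  # posterior after ten successes, well above the 0.5 prior
```

On these assumptions the evidence of repeated predictive success swamps even a skeptical prior, which is one way of framing the naturalist’s reply to the low-probability complaint.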

Plantinga mostly wants to advocate that maybe our faculties are rational because God made them rather than because of a natural process. The response to this from an evolutionary perspective is fairly simple: evolution is an adaptive process, and adapting to a series of niche signals involves not getting those signals wrong. There are technical issues that arise here concerning how specific adaptations can result in more general rational faculties, but we can, at least in principle, imagine (and investigate) bridge rules that extend from complex socialization to the deep complexities of modern morality and the Leviathan state, and from optimized spear throwing to shooting rockets into orbit.… Read the rest