The Churches of Evil

The New York Times continues to mine the dark territory between religious belief and atheism in a series of articles in the opinion section, with the most recent being Gary Gutting’s thoughtful meditation on agnosticism, ways of knowing, and the contributions of religion to individual lives and society. In response, Penn Jillette and others discuss atheism as a religion-like venture.

We can dissect Gutting’s argument while still being generous to his overall thrust. It is certainly true that, aside from the specific knowledge claims of religious people, there are traditions of practice that result in positive outcomes for religious folk. But when we drill into the knowledge dimension, Gutting props up Alvin Plantinga and Richard Swinburne as representing “the role of evidence and argument” in sophisticated religious thinking. He might have done better to restrict the statement to “argument” in this case, because both philosophers focus primarily on argument in their philosophical works; evidence remains elusively private, available only in the eyes of the believer.

Interestingly, many of the arguments of both are simply arguments against a counter-assumption that anticipates a secular universe. For instance, Plantinga argues that the Logical Problem of Evil fails: evil (setting aside “natural evil” for the moment) is not logically incompatible with omnibenevolence, omnipotence, and omniscience. But, and here we get back to Gutting, this does nothing to persuade us that the rapacious cruelty of Yahweh, much less the moral evil expressed in the new concept of Hell in the New Testament, is anything more than logically possible. The human dimension and the appropriate moral outrage remain unabated, and we loop back to Gutting’s generosity towards the religious: shouldn’t we extend equal generosity to the scriptural problem of evil as expressed in everything from the Hebrew Bible through to the Book of Mormon?… Read the rest

Industrial Revolution #4

Paul Krugman at the New York Times consumes Robert Gordon’s analysis of economic growth and the role of technology and comes up more hopeful than Gordon. The kernel of Krugman’s hope is that Big Data analytics can provide a shortcut to intelligent machines by bypassing the specification and programming that were once assumed to be requirements for artificial intelligence. Instead, we don’t specify but use “data-intensive ways” to achieve a better result. And we might get to IR#4, following Gordon’s taxonomy, where IR stands for “industrial revolution”: IR#1 was steam and locomotives; IR#2 was everything up to computers; IR#3 is computers and cell phones and whatnot.
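
A toy sketch of that distinction (mine, not Krugman’s or Gordon’s): contrast a rule an engineer must specify by hand with the same decision inferred from labeled examples. The data, threshold rule, and numbers here are purely illustrative.

```python
# Hand-specified rule vs. the same decision learned from data.

def handcrafted_rule(x):
    # The engineer must choose the threshold in advance.
    return x > 0.5

def learn_threshold(examples):
    """Pick the midpoint between the class means of labeled (value, label) pairs."""
    positives = [x for x, label in examples if label]
    negatives = [x for x, label in examples if not label]
    return (sum(positives) / len(positives) + sum(negatives) / len(negatives)) / 2

data = [(0.1, False), (0.2, False), (0.3, False), (0.7, True), (0.8, True), (0.9, True)]
threshold = learn_threshold(data)

def learned_rule(x):
    # Nothing was specified here; the boundary came from the examples.
    return x > threshold

print(handcrafted_rule(0.6), learned_rule(0.6))
```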

Krugman implies that IR#4 might spur the typical economic consequences of grand technological change, including the massive displacement of workers, but, as in previous revolutions, it is also assumed that economic growth built from new industries will ultimately eclipse the negatives. This is not new, of course. Robert Anton Wilson argued decades ago for the R.I.C.H. Economy (Rising Income through Cybernetic Homeostasis). Wilson may have been on acid, but Krugman wasn’t yet tuned in, man. (A brief aside: the Krugman/Wilson notions probably break down over extraction and agribusiness/land-rights issues. If labor is completely replaced by intelligent machines, the land and the ingredients it contains nevertheless remain a bottleneck for economic growth. Look at the global demand for copper and rare earth materials, for instance.)

But why the particular focus on Big Data technologies? Krugman’s hope teeters on the assumption that data-intensive algorithms possess a fundamentally different scale and capacity than human-engineered approaches. Having risen through the computational linguistics and AI community working on data-driven methods for approaching intelligence, I can certainly sympathize with the motivation, but there are really only modest results to report at this time.… Read the rest

Keep Suspicious and Carry On

I’ve previously argued that it is unlikely that resource-constrained simulations can achieve sufficient fidelity to account for what we observe around us. That argument combined computational irreducibility with assumptions about the complexity of the evolutionary trajectories of living beings. There may also be an argument from the observed contingency of the evolutionary process that cuts against any kind of “intelligent” organizing principle, though not against simulation itself.

Leave it to physicists to envision a test of the Bostrom hypothesis that we are living in a computer simulation. Martin Savage and his colleagues look at quantum chromodynamics (QCD) and current methods for simulating QCD on a lattice. They conclude that if we are, in fact, living in a simulation, then we might observe specific inconsistencies arising from the finite computing power available to the universe as a whole. Those inconsistencies would show up, specifically, in the distribution of cosmic ray energies. Note that if the distribution is not unusual, the universe could either be a simulation (just a sophisticated one) or a truly physical one (free-running and not hosted on another entity’s computational framework). It is only if the distribution is unusual that we would have evidence that it might be a simulation.… Read the rest
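
A back-of-the-envelope sketch of the flavor of such a test (my own gloss, not Savage et al.’s method): draw energies from a smooth power law, impose an artificial high-energy cutoff standing in for a lattice artifact, and ask whether a standard distributional test notices. The exponent and cutoff below are illustrative values, not physical ones.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def sample_power_law(n, alpha=2.7, e_min=1.0):
    """Draw energies from p(E) ~ E^-alpha for E >= e_min via inverse-CDF sampling."""
    u = rng.uniform(size=n)
    return e_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

# "Observed" energies: the same power law with a hard cutoff playing the role of
# the kind of anomaly a finite simulation lattice might impose on the spectrum.
observed = sample_power_law(50_000)
observed = observed[observed < 200.0]   # hypothetical cutoff

# Expected energies under the uncut power law.
expected = sample_power_law(50_000)

# A two-sample Kolmogorov-Smirnov test flags the distorted high-energy tail.
stat, p_value = stats.ks_2samp(observed, expected)
print(f"KS statistic = {stat:.4f}, p = {p_value:.2e}")
```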

Bravery and Restraint

In 1997, shortly after getting married and buying our first house, I was invited to travel to Japan and spend a little over a month researching Japanese-Chinese machine translation under a grant from the Japanese Ministry of Education. It was a disorienting experience, as Japan is for most non-Japanese, and the hours spent studying my translation guide helped me very little. In the mornings I would jog through downtown, around the canals, and past the temples. Days were spent writing and optimizing statistical matching algorithms for lining up runs of characters that I didn’t understand, an early incarnation of the same approach currently used in Google Translate.

I, of course, visited the Peace Memorial Park several times and toured the museum there, ultimately purchasing a slim volume of recollections from the day the bomb fell, written in Japanese and English on facing pages. One thing struck me, and I later asked a Japan expert who worked in the Intelligence Community about it: the narrative presented in the museum was that the Japanese commoner had little understanding of the war effort; they were victims of the emperor and the elite classes. It was a moral distancing that resonated with similar arguments about the German Volk being non-complicit in the Holocaust, and an argument that I found distasteful.

With this background, then, I was intrigued to discover that the father of my new boss had written a memoir about being perhaps the first Westerner to enter Hiroshima following the dropping of the atomic bomb. Kenneth Harrison’s book, The Brave Japanese, was originally published in 1966 and republished in 1982 as The Road to Hiroshima due, in part, to the controversy in Australia over ascribing bravery to the Japanese.… Read the rest

Sparse Grokking

Jeff Hawkins of Palm fame shows up in the New York Times hawking his Grok for Big Data predictions. Interestingly, if one drills down into the details of Grok, we see once again that randomized sparse representations are the core of the system. That is, if we assign symbols sparse random representational vectors, we can sum the vectors for co-occurring symbols and, following J.R. Firth’s pithy “you shall know a word by the company it keeps,” start to develop a theory of meaning that would not offend Wittgenstein.
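
A minimal sketch of that mechanism (mine, closer to plain Random Indexing than to anything Numenta ships): give each word a sparse random index vector, accumulate its neighbors’ index vectors into a context vector, and compare words by cosine similarity. The corpus, dimensions, and window size are toy values.

```python
import numpy as np

rng = np.random.default_rng(42)
DIM, NONZERO = 1000, 10    # vector width and number of +/-1 entries (illustrative)

def index_vector():
    """A sparse ternary random vector: mostly zeros, a few +1/-1 entries."""
    v = np.zeros(DIM)
    idx = rng.choice(DIM, size=NONZERO, replace=False)
    v[idx] = rng.choice([-1.0, 1.0], size=NONZERO)
    return v

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
index = {w: index_vector() for w in vocab}
context = {w: np.zeros(DIM) for w in vocab}

# Accumulate each word's context vector from the index vectors of its neighbors.
window = 2
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            context[w] += index[corpus[j]]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Words that keep similar company ("cat"/"dog") should come out closer than unrelated pairs.
print(cosine(context["cat"], context["dog"]), cosine(context["cat"], context["on"]))
```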

Is there anything new to Hawkins’ effort? For certain types of time-series prediction, the approach parallels artificial neural network designs. Multi-epoch training regimens, in effect, build high-dimensional distances between co-occurring events by gradually moving time-correlated data together and uncorrelated data apart; the sparse-representation approach makes an end-run around all that computational complexity. But then there is Random Indexing, which I’ve previously discussed here. If one restricts Random Indexing to operating on temporal patterns, or on spatial patterns, then the results start to look like Numenta’s offering.

While there is a bit of opportunism in Hawkins’ latching onto Big Data to promote an application of methods he has been working on for years, there are very real opportunities for mining leading indicators to help with everything from e-commerce to research and development. Many flowers will bloom, grok, die, and be reborn.… Read the rest

Bats and Belfries

Thomas Nagel proposes a radical form of skepticism in his new book, Mind and Cosmos, continuing a trajectory through subjective experience and moral realism first begun with bats zigging and zagging among the homunculi of dualism reimagined in the form of qualia. The skepticism involves disputing materialistic explanations and proposing, instead, that teleological ones of an unspecified form will likely apply, for how else could his subtitle, which paints the “Materialist Neo-Darwinian Conception of Nature” as almost certainly false, hold true?

Nagel is searching for a non-religious explanation, of course, because simply enlivening nature by fiat is hardly an explanation at all; any sort of powerful, non-human entelechy could be gaming us and the universe in an incoherent fashion. But what parameters might support his argument? Since he apparently requires a “significant likelihood” argument to hold sway in support of the origins of life, for instance, we might imagine what kind of thinking could take us from inanimate matter to goal-directed behavior while supporting a significant likelihood of that outcome. The parameters might involve the conscious coordination of the events leading towards the emergence of goal-directed life, thus presupposing a consciousness that is not our own. We are back, then, to our non-human entelechy looming like an alien or a strange creator deity (which is not desirable to Nagel). We might also consider the possibility that there are properties of the universe itself that result in self-organization and that we either don’t yet know or are only beginning to understand. Elliott Sober’s critique suggests that the 2nd Law of Thermodynamics results in what I might call “patterned” behavior while not becoming “goal-directed” per se.… Read the rest

Pressing Snobs into Hell

Paul Vitanyi has been a deep advocate for Kolmogorov complexity for many years. His book with Ming Li, An Introduction to Kolmogorov Complexity and Its Applications, remains on my bookshelf (and was a bit of an investment in grad school).

I came across a rather interesting paper by Vitanyi with Rudi Cilibrasi called “Clustering by Compression” that illustrates perhaps more easily and clearly than almost any other recent work the tight connections between meaning, repetition, and informational structure. Rather than describing the paper, however, I wanted to conduct an experiment that demonstrates their results. To do this, I asked the question: are the writings of Dante more similar to other writings of Dante than to Thackeray? And is the same true of Thackeray relative to Dante?
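
A small sketch of the Normalized Compression Distance at the heart of Cilibrasi and Vitanyi’s method, pointed at the Dante/Thackeray question; the file names are placeholders for whatever plain-text excerpts one has on hand, and the compressor choice is mine.

```python
import bz2

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is compressed length; smaller values mean more similar."""
    cx = len(bz2.compress(x))
    cy = len(bz2.compress(y))
    cxy = len(bz2.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

texts = {}
for name in ("dante_inferno.txt", "dante_purgatorio.txt", "thackeray_vanity_fair.txt"):
    with open(name, "rb") as f:
        texts[name] = f.read()

# Pairwise distances: Dante-to-Dante should come out smaller than Dante-to-Thackeray.
names = list(texts)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(f"NCD({a}, {b}) = {ncd(texts[a], texts[b]):.3f}")
```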

Now, we could pursue these questions at many different levels. We might ask scholars, well-versed in the works of each, to compare and contrast the two authors. They might invoke cultural factors, the memes of their respective eras, and their writing styles. Ultimately, though, the scholars would have to get down to some textual analysis, looking at the words on the page. And in so doing, they would draw distinctions by lifting features of the text, comparing and contrasting grammatical choices, word choices, and other basic elements of the prose and poetry on the page. We might very well be able to take part of those experts’ knowledge and distill it into some kind of logical procedure or algorithm that would parse the texts and draw distinctions based on the distributions of words and other structural cues. If asked, we might say that a similar method might work for the so-called language of life, DNA, but that it would require a different kind of background knowledge to build the analysis, much less create an algorithm to perform the same task.… Read the rest

Intelligence versus Motivation

Nick Bostrom adds to the dialog on desire, intelligence, and intentionality with his recent paper, The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. The argument is largely a deconstruction of the general assumption that there is somehow an inexorable linkage between intelligence and moral goodness. Indeed, he even proposes that intelligence and motivation are essentially orthogonal (“The Orthogonality Thesis”), but that there may be a particular subset of possible trajectories towards any goal that are common (self-preservation, etc.). The latter is scoped by his “instrumental convergence thesis,” under which there might be convergence towards central tenets that look an awful lot like the vagaries of human moral sentiments. But they remain vagaries and should not be taken to mean that advanced artificial agents will act in a predictable manner.… Read the rest

Talking Musical Heads

David Byrne gets all scientifical in the most recent Smithsonian, digging into the developmental and evolved neuropsychiatry of musical enjoyment. Now, you may ask yourself, how did DB get so clinical about the emotions of music? And you may ask yourself, how did he get here? And you may ask yourself, how did this music get written?

…one can envision a day when all types of music might be machine-generated. The basic, commonly used patterns that occur in various genres could become the algorithms that guide the manufacture of sounds. One might view much of corporate pop and hip-hop as being machine-made—their formulas are well established, and one need only choose from a variety of available hooks and beats, and an endless recombinant stream of radio-friendly music emerges. Though this industrial approach is often frowned on, its machine-made nature could just as well be a compliment—it returns musical authorship to the ether. All these developments imply that we’ve come full circle: We’ve returned to the idea that our universe might be permeated with music.

It seems fairly obvious that the music I’m listening to right now (Arvo Pärt) could be automated, but just hasn’t been so far. And this gestures at the future world Byrne describes, where we are permeated with music and the contrast with silence is the most sophisticated distinction that can be drawn.… Read the rest
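
A toy sketch (mine, not Byrne’s) of the “commonly used patterns become the algorithm” idea: count which note tends to follow which in a seed melody, then let a first-order Markov chain recombine those patterns into a new line. The seed melody and note names are purely illustrative.

```python
import random
from collections import defaultdict

seed_melody = ["E", "D", "C", "D", "E", "E", "E", "D", "D", "D", "E", "G", "G"]

# Count which note tends to follow which (first-order transitions).
transitions = defaultdict(list)
for current, nxt in zip(seed_melody, seed_melody[1:]):
    transitions[current].append(nxt)

def generate(start, length=16):
    """Walk the transition table, recombining learned patterns into a new line."""
    note, line = start, [start]
    for _ in range(length - 1):
        note = random.choice(transitions[note]) if transitions[note] else random.choice(seed_melody)
        line.append(note)
    return line

print(" ".join(generate("E")))
```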

Hirsi Ali’s Social Evolution

Ayaan Hirsi Ali reminds us of the depressingly anti-freedom recent history of Islam in her Newsweek article, Muslim Rage & The Last Gasp of Islamic Hate. For Hirsi Ali, despite the fatwa on Rushdie, 9/11, and the murder of her friend and collaborator, Theo van Gogh, a kernel of hope is nascent in the democracy movements that emerged from the Arab Spring: when people have to govern themselves they will, ultimately, turn towards freedom of expression, thought, and worship.

But is that hope warranted?

Is there any sense of inevitability to the liberal programme that emerged from industrialization, affluence, and education? Or is the “progress” of the West more contingent than that, built from happenstance? The geographic separation of America from Germany and Japan in World War II, combined with the widespread availability of raw materials on the American continent, led to success in that war and to the growth of American post-war power in an unbombed industrial landscape, which, ironically, led in turn to the defeat of Soviet Communism, itself claiming an inevitability to the flow of history.

Azar Gat raised a parallel question in The Return of Authoritarian Great Powers (Foreign Affairs, 86(4), pp. 59–69) when he asked whether the rise of “Authoritarian Capitalism” in the form of China and present-day Russia constitutes a viable challenge to the claims of liberal democracy. If so, then the notion that there is any sense of inevitability evaporates like the suppositions of dialectical materialism.

The underlying assumptions are taken for granted among most Americans: (1) all people are the same; (2) all people want freedom; (3) authoritarianism is anti-freedom; (4) people will oppose authoritarianism. It’s a nice thought that has some resonance in, say, the history of the Eastern Bloc, where economic limitations combined with cronyism and foreign political control led to (4).… Read the rest