The Obsessive Dreyfus-Hawking Conundrum

I’ve been obsessed lately. I was up at 5 A.M. yesterday and drove to Ruidoso to do some hiking (trails T93 to T92, if interested). The San Augustin Pass was desolate as the sun began breaking over, so I inched up into triple-digit speeds in the M6. Because that is what the machine is made for. Booming across White Sands Missile Range, I recalled watching base police work with National Park Rangers to chase oryx down the highway while early F-117s practiced touch-and-goes at Holloman in the background, and then driving my carpool truck out to the high-energy laser site or desert ship to deliver documents.

I settled into Starbucks an hour and a half later and started writing on ¡Reconquista!, cranking out thousands of words before trying to track down the trailhead and starting on my hike. (I would have run the thing but wanted to go to lunch later and didn’t have access to a shower. Neither restaurants nor diners deserve an après-run moi.) And then I was on the trail, and I kept stopping and taking plot and dialogue notes, revisiting little vignettes and annotating enhancements that I would later salt into the main text over lunch. And I kept rummaging through the development of characters, refining and sifting the facts of their lives through different sets of sieves until they took on both a greater valence within the story arc and, often, more comedic value.

I was obsessed and remain so. It is a joyous thing to be in this state, comparable only to working on large-scale software systems when the hours melt away and meals slip as one cranks through problem after problem, building and modulating the subsystems until the units begin to sing together like a chorus.

The IQ of Machines

Perhaps idiosyncratic to some is my focus in the previous post on the theoretical background to machine learning that derives predominantly from algorithmic information theory and, in particular, Solomonoff’s theory of induction. I do note that there are other theories that can be brought to bear, including Vapnik’s Structural Risk Minimization and Valiant’s PAC-learning theory. Moreover, perceptrons, vector quantization methods, and so forth derive from completely separate principles that can then be cast into more fundamental problems in information geometry and physics.

Artificial General Intelligence (AGI) is then perhaps the hard problem on the horizon, one that I claim has not seen significant progress in the past twenty years or so. That is not to say that I am not an enthusiastic student of the topic and field, just that I don’t see risk levels from intelligent AIs rising to what we should consider a real threat. This topic of how to grade threats deserves deeper treatment, of course, and is at the heart of everything from so-called “nanny state” interventions in food and product safety to how to construct policy around global warming. Luckily, and unlike both those topics, killer AIs don’t threaten us at all quite yet.

But what about simply characterizing what AGIs might look like and how we can even tell when they arise? Mildly interesting is Shane Legg and Joel Veness’ idea of an Artificial Intelligence Quotient, or AIQ, that they expand on in An Approximation of the Universal Intelligence Measure. This measure is derived from, voilà, exactly the kind of algorithmic information theory (AIT) and compression arguments that I led with in the slide deck. Is this the only theory around for AGI? Pretty much, but different perspectives tend to lead to slightly different focuses.
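Legg and Veness estimate their measure by Monte Carlo sampling of environment programs weighted by a 2^-K simplicity prior. As a toy sketch of that compression-weighted idea only (my own construction, not their reference-machine setup), one can enumerate trivial repeating-bit-pattern environments, weight them by a crude simplicity prior, and score an agent:

```python
from itertools import product

def environments(max_len=4):
    """Enumerate toy environments: each is a repeating bit pattern.
    Weight 2^-(2n) gives shorter (simpler) patterns more mass, a crude
    stand-in for the 2^-K(env) prior of algorithmic information theory."""
    for n in range(1, max_len + 1):
        for bits in product((0, 1), repeat=n):
            yield bits, 2.0 ** (-2 * n)

def value(agent, pattern, steps=32):
    """Average reward: 1 whenever the agent's action matches the next bit."""
    history, total = [], 0
    for t in range(steps):
        action = agent(history)
        target = pattern[t % len(pattern)]
        total += 1 if action == target else 0
        history.append(target)  # the agent then observes the true bit
    return total / steps

def aiq(agent):
    """Normalized, complexity-weighted average value over all environments."""
    num = den = 0.0
    for pattern, weight in environments():
        num += weight * value(agent, pattern)
        den += weight
    return num / den
```

A constant agent scores exactly 0.5 by symmetry, while an agent that merely copies the last observed bit exploits the simplest, most heavily weighted environments and scores higher; that ranking-by-weighted-performance is the flavor of the Legg–Veness measure.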

Machine Learning and the Coming Robot Apocalypse

Slides from a talk I gave today on current advances in machine learning are available in PDF, below. The agenda is pretty straightforward: starting with some theory about overfitting based on algorithmic information theory, we proceed through a taxonomy of ML types (not exhaustive), then dip into ensemble learning and deep learning approaches. An analysis of the difficulty of various problems and the performance we get from various algorithms follows. We end with a discussion of whether we should be frightened by the progress we see around us.
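The ensemble idea from the agenda fits in a few lines. As a minimal sketch (toy data and weak learners of my own choosing, not the slide deck’s examples), bagging trains weak learners on bootstrap resamples and lets them vote, which tames the overfitting of any single learner:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy 1-D data: class is 1 when x > 0, with 15% of labels flipped as noise.
X = rng.normal(size=300)
y = (X > 0).astype(int)
flip = rng.random(300) < 0.15
y[flip] = 1 - y[flip]

def fit_stump(X, y):
    """Weak learner: the threshold minimizing training error."""
    best_t, best_err = 0.0, 1.0
    for t in np.linspace(X.min(), X.max(), 50):
        err = np.mean((X > t).astype(int) != y)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_predict(X_train, y_train, x, n_models=25):
    """Bagging: train stumps on bootstrap resamples, majority-vote."""
    votes = 0
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), len(X_train))
        t = fit_stump(X_train[idx], y_train[idx])
        votes += int(x > t)
    return int(votes > n_models / 2)
```

Each bootstrapped stump fits a slightly different threshold; the majority vote averages away the noise any one of them chased.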


Alien Singularities and Great Filters

Nick Bostrom at Oxford’s Future of Humanity Institute takes on Fermi’s question “Where are they?” in a new paper on the possibility of life on other planets. The paper posits probability filters (Great Filters) that may have existed in the past or might be still to come and that limit the likelihood of the outcome that we currently observe: our own, ahem, intelligent life. If a Great Filter existed in our past—say, the event of abiogenesis or the prokaryote-to-eukaryote transition—then we can somewhat explain the lack of alien contact thus far: our existence is of very low probability. Moreover, we can expect not to find life on Mars.

If, however, the Great Filter exists in our future, then we might see life all over the place (including the theme of his paper, Mars). Primitive life is abundant, but the Great Filter is somewhere in our future where we annihilate ourselves, thus explaining why Fermi’s “they” are not here while little strange things thrive on Mars and beyond. It is only advanced life that got squeezed out by the Filter.
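The argument is essentially multiplicative, and a few lines make the asymmetry concrete. This is a sketch with invented step probabilities of my own, not figures from Bostrom’s paper:

```python
from math import prod

# Two scenarios for where the single tiny factor (the Filter) sits.
early_filter = {"abiogenesis": 1e-9, "eukaryotes": 0.5,
                "intelligence": 0.5, "survives_technology": 0.9}
late_filter  = {"abiogenesis": 0.9,  "eukaryotes": 0.9,
                "intelligence": 0.5, "survives_technology": 1e-9}

def p_advanced(steps):
    """Probability a planet yields an advanced, spacefaring civilization:
    the product of all transition probabilities."""
    return prod(steps.values())

# Both scenarios leave advanced life (and hence visitors) equally rare,
# but they disagree wildly about primitive life: finding microbes on Mars
# would shift the Filter's likely position toward our future.
```

Both placements reproduce Fermi’s silence equally well; only evidence of abundant primitive life elsewhere distinguishes them, which is why the Mars question carries the weight it does here.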

Bostrom’s Simulation Hypothesis provides a potential way out of this largely pessimistic perspective. If there is a very high probability that civilizations achieve sufficient simulation capabilities that they can create artificial universes prior to conquering the vast interstellar voids needed to move around and signal with adequate intensity, it is equally possible that their “exit strategy” is a benign incorporation into artificial realities that prevents corporeal destruction by other means. It seems unlikely that every advanced civilization would “give up” physical being under these circumstances (in Teleology there are hold-outs from the singularity though they eventually die out), which would mean that there might remain a sparse subset of active alien contact possibilities.

Spurting into the Undiscovered Country

There was glop on the windows of the International Space Station. Outside. It was algae. How? Now that is unclear, but there is a recent tradition of arguing against abiogenesis here on Earth and arguing for ideas like panspermia where biological material keeps raining down on the planet, carried by comets and meteorites, trapped in crystal matrices. And there may be evidence that some of that may have happened, if only in the local system, between Mars and Earth.

Panspermia includes as a subset the idea of Directed Panspermia whereby some alien intelligence for some reason sends biological material out to deliberately seed worlds with living things. Why? Well, maybe it is a biological prerogative or an ethical stance. Maybe they feel compelled to do so because they are in some dystopian sci-fi narrative where their star is dying. One last gasping hope for alien kind!

Directed Panspermia as an explanation for life on Earth only sets back the problem of abiogenesis to other ancient suns and other times, and implicitly posits that some of the great known achievements of life on Earth, like multicellular forms, are less spectacularly improbable than the initial events of proto-life as we hypothesize it might have been. Still, great minds have spent great mental energy on the topic, to the point that elaborate schemes involving solar sails have been proposed so that we may someday engage in Directed Panspermia as needed. I give you:

Mautner, M.; Matloff, G. (1979). “Directed panspermia: A technical evaluation of seeding nearby solar systems.” J. British Interplanetary Soc. 32: 419.

So we take solar sails and bioengineered lifeforms in tiny capsules. The solar sails are large and thin. They carry the tiny capsules into stellar formations and slow down due to friction.

Inching Towards Shannon’s Oblivion

Following Bill Joy’s concerns over the future world of nanotechnology, biological engineering, and robotics in his 2000 essay Why the Future Doesn’t Need Us, it has become fashionable to worry over “existential threats” to humanity. Nuclear power and weapons used to be dreadful enough, and clearly remain in the top five, but these rapidly developing technologies, asteroids, and global climate change have joined Oppenheimer’s misquoted “destroyer of all things” in portending our doom. Here are Max Tegmark, Stephen Hawking, and others in the Huffington Post warning again about artificial intelligence:

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

I almost always begin my public talks on Big Data and intelligent systems with a presentation on industrial revolutions that progresses through Robert Gordon’s phases and then highlights Paul Krugman’s argument that Big Data and the intelligent systems improvements we are seeing potentially represent a next industrial revolution. I am usually less enthusiastic about the timeline than nonspecialists, but after giving a talk at PASS Business Analytics Friday in San Jose, I stuck around to listen in on a highly technical talk concerning statistical regularization and deep learning, and I found myself enthused about the topic once again. Deep learning uses artificial neural networks to classify information but is distinct from traditional ANNs in that the systems are pre-trained using auto-encoders to acquire general knowledge about the data domain. To be clear, though, most of the problems that have been tackled so far are “subsymbolic” ones, like image recognition and speech.
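The auto-encoder idea is simple to sketch. The following is a minimal illustration of my own (a linear, tied-weight autoencoder on synthetic data), not the stacked, nonlinear pre-training pipelines used in practice: the network is pushed to reconstruct its input through a narrow code layer, and in doing so learns the structure of the data domain before any labels are seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 10-D that actually live on a 3-D linear subspace.
basis = rng.normal(size=(3, 10))
X = rng.normal(size=(200, 3)) @ basis

# Linear autoencoder with a width-3 bottleneck and tied weights:
# encode with W, decode with W.T, minimize mean squared reconstruction error.
W = rng.normal(scale=0.1, size=(10, 3))
lr = 5e-3

def loss(W):
    return np.mean((X @ W @ W.T - X) ** 2)

losses = [loss(W)]
for _ in range(1000):
    E = X @ W @ W.T - X                        # reconstruction residual
    grad = 2 * (X.T @ E @ W + E.T @ X @ W) / X.size
    W -= lr * grad
    losses.append(loss(W))
# reconstruction error falls as the code layer learns the data's subspace
```

Once trained, the 3-D codes `X @ W` serve as features for a supervised layer; that unsupervised-then-supervised ordering is the pre-training trick the talk was about.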

Predicting Black Swans

Nassim Taleb’s 2nd edition of The Black Swan argues—not unpersuasively—that rare, cataclysmic events dominate ordinary statistics. Indeed, he notes that almost all wealth accumulation is based on long-tail distributions where a small number of individuals reap unexpected rewards. The downsides are equally challenging: casinos lose money not in gambling, where the statistics are governed by Gaussians (the house always wins), but instead when tigers attack, when workers sue, and when other external factors intervene.

Black Swan Theory adds an interesting challenge to modern inference theories like Algorithmic Information Theory (AIT) that ascribe predictability to the universe. Even variant coding approaches like Minimum Description Length theory modify the anticipatory model based on relatively smooth error functions rather than high-kurtosis distributions of variable change. And for the most part, for the regular events of life and our sensoriums, that is adequate. It is only when we start to look at rare existential threats that we begin to worry about Black Swans and inference.

How might we modify the typical formulations of AIT and the trade-offs between model complexity and data to accommodate the exceedingly rare? Several approaches are possible. First, if we are combining a predictive model with a resource accumulation criterion, we can simply pad out the model memory by reducing kurtosis risk through additional resource accumulation; any downside is mitigated by the storing of nuts for a rainy day. That is a good strategy for moderately rare events like weather change, droughts, and whatnot. But what about even rarer events like little ice ages and dinosaur-extinction-level meteorite hits? An alternative strategy is to maintain sufficient diversity in the face of radical unknowns that coping becomes a species-level achievement.
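The kurtosis point can be made concrete. As a quick sketch (the two distributions are illustrative choices of mine, not anything from Taleb), compare the fourth-moment behavior of a thin-tailed Gaussian sample with a heavy-tailed Pareto one:

```python
import numpy as np

rng = np.random.default_rng(42)

def excess_kurtosis(x):
    """Sample excess kurtosis: fourth standardized moment minus 3
    (a Gaussian scores roughly 0)."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)

gauss = rng.normal(size=100_000)        # thin tails: the casino regime
heavy = rng.pareto(2.5, size=100_000)   # power-law tail: Black Swan regime
```

The Gaussian’s excess kurtosis sits near zero, while the Pareto sample’s is large and dominated by a handful of extreme draws. Smooth squared-error model updates, of the kind MDL-style formulations lean on, implicitly assume the first regime, which is exactly why the exceedingly rare slips past them.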

Minimizing Existential Toaster Threats

Philosophy in the modern world has strived for a sense of relevance as the sciences (“natural philosophy”) have become dominant. But philosophy may have found a footing in the complicated space between technological advances and defining human virtues with efforts to address and understand change and its impact on human existence. These efforts have included the ethics of biological manipulation and, critically, existential threats to humanity, including climate change, artificial intelligence, and genetic engineering.

I mention all this because I’m really writing about cars but need to fit the discussion somehow into the theme of this blog. So the existential threat of climate change means we need to pollute less and burn less fossil fuel. More tactically, however, my wife and I also needed to buy a new toaster because our five-year-old Oster four-slice unit was failing. There was therefore only one solution to this dilemma: take the brand-new Tesla S 70 miles away to the foothills of the Sierra on a test drive and, yes, to buy a new toaster.

I had taken delivery of our Tesla S Performance 85 with Tech Package a week before but didn’t really have any opportunity to drive it because of work obligations that kept me firmly planted in front of a computer monitor. I had driven it briefly but it mostly sat charging in the garage (at the slowish pace of a 120V circuit; Tesla did not deliver my dual charger station in time and I haven’t had the 100A circuit installed to support it either), so when Saturday came, I realized that it was an opportunity to justify a longish trek to test the drivability of the car and to seek out and use the Tesla “supercharger” stations that promise rapid charging in 30 minutes to an hour.
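The “slowish pace” is easy to quantify. A back-of-the-envelope sketch for the 85 kWh pack, where the outlet amperage, supercharger power, and charging efficiency figures are my assumptions rather than Tesla specifications:

```python
PACK_KWH = 85.0  # nominal pack capacity of the Model S Performance 85

def hours_to_full(power_kw, efficiency=0.85):
    """Hours to fill an empty pack at a given input power,
    with an assumed charging efficiency."""
    return PACK_KWH / (power_kw * efficiency)

wall_120v = 120 * 12 / 1000   # ~1.44 kW from an assumed 12 A household outlet
supercharger = 90.0           # assumed DC fast-charge power in kW
```

Under these assumptions a full charge from a 120V outlet takes on the order of days, while a supercharger manages it in roughly an hour, which is consistent with the garage-trickle frustration and the 30-minutes-to-an-hour promise above.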