Deep Learning with Quantum Decoherence

Getting back to metaphors in science, Wojciech Zurek’s so-called Quantum Darwinism is in the news due to a series of experimental tests. In Quantum Darwinism (QD), the collapse of the wave function (more properly, the “extinction” of states) is a result of decoherence from environmental entanglement. There is a kind of replication in QD, where pointer states are multiplied, and then a kind of environmental selection as well. There is no variation per se, though some might argue that the pointer states imprinted by the environment are variants of the originals. That makes the metaphor a bit thin at the edges, but it is close enough for the core idea to fit most of the floor plan of Darwinism. Indeed, some champion it as part of a more general model for everything. Even selection among viable multiverse bubbles has a similar feel to it: some survive while others perish.

I’ve been simultaneously studying quantum computing and the complexity theories around it, which have become impressively well developed. Richard Cleve’s An Introduction to Quantum Complexity Theory and John Watrous’s Quantum Computational Complexity are notable for bridging traditional computational complexity and this newer world of quantum computing built on qubits, wave functions, gates, and decoherence.

Decoherence sucks for quantum computing in general, but there may be a way to make use of it. For instance, an artificial neural network (ANN) also has some interesting Darwinian-like properties. The initial weights in an ANN are typically random real values, a choice designed to simulate the relative strengths of neural connections. Real neural connections are much more complex than this, exhibiting interesting cyclic behavior, saturating and suppressing based on neurotransmitter availability, and so forth, but assuming a straightforward pattern of connectivity has allowed for significant progress.… Read the rest
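To make that random starting point concrete, here is a minimal sketch of weight initialization for a single dense layer (my own illustration; the He-style scaling is an assumed choice, not something specified above):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    """Random starting weights for a dense layer. He-style scaling
    (std = sqrt(2 / n_in)) is one common choice; the exact scheme is
    an assumption here, not taken from the post."""
    w = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))
    b = np.zeros(n_out)
    return w, b

def forward(x, w, b):
    """A crude stand-in for 'connection strength': weighted sum plus
    a rectifying nonlinearity."""
    return np.maximum(0.0, x @ w + b)

w, b = init_layer(4, 3)
print(forward(rng.normal(size=(2, 4)), w, b))
```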

Free Will and Algorithmic Information Theory

I was recently looking for examples of applications of algorithmic information theory, also commonly called algorithmic information complexity (AIC). After all, for a theory to be sound is one thing, but when it is sound and valuable it moves to another level. So, first, let’s review the broad outline of AIC. AIC begins with the problem of randomness, specifically random strings of 0s and 1s. We can readily see that given any sort of encoding in any base, strings of characters can be reduced to a binary sequence. Likewise integers.
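As a quick sketch of that reduction (mine, with UTF-8 and plain unsigned binary as assumed encodings), a string or an integer flattens to bits in a couple of lines:

```python
def string_to_bits(s: str) -> str:
    """Encode a string as a binary sequence via its UTF-8 bytes."""
    return ''.join(f'{byte:08b}' for byte in s.encode('utf-8'))

def int_to_bits(n: int) -> str:
    """Encode a non-negative integer in plain binary."""
    return bin(n)[2:]

print(string_to_bits("AIC"))   # 010000010100100101000011
print(int_to_bits(2024))       # 11111101000
```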

Now, AIC states that there are often many Turing machines that could generate a given string and, since we can represent those machines as bit sequences too, there is at least one machine with the shortest bit sequence that still produces the target string. In fact, if that shortest machine is as long as the string itself, or a bit longer (given some machine-encoding overhead), then the string is said to be AIC-random. In other words, no compression of the string is possible.
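We can’t compute the shortest machine directly, but an ordinary compressor gives a crude upper bound on it, which is enough to see the intuition. A hedged sketch of my own, using zlib as a stand-in for the machine encoding:

```python
import os
import zlib

def compressed_ratio(data: bytes) -> float:
    """Length of a zlib 'program' for the data relative to the data itself --
    a rough upper bound on its compressibility, and so a proxy for the
    shortest-machine idea."""
    return len(zlib.compress(data, 9)) / len(data)

patterned = b"01" * 5000          # obviously generated by a tiny program
random_ish = os.urandom(10000)    # close to incompressible

print(compressed_ratio(patterned))   # far below 1.0
print(compressed_ratio(random_ish))  # about 1.0, sometimes slightly above with overhead
```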

Moreover, we can generalize this generator-machine idea to claim that, given some set of strings that represent the data of a given phenomenon (let’s say natural occurrences), the smallest generator machine that covers all the data is a “theoretical model” of the data and the underlying phenomenon. An interesting outcome of this theory is that it can be shown that there is, in fact, no algorithm (or meta-machine) that can find the smallest generator for any given sequence. This is related to the halting problem and to Gödel-style incompleteness.
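Here is a toy version of the smallest-generator search, entirely my own sketch: restrict the space of “machines” to programs of the form repeat(pattern, n) and take the first (shortest) one that regenerates the data. The search halts only because the family is trivially small; the theorem above says no such procedure exists over arbitrary Turing machines.

```python
def smallest_repeat_generator(data: str):
    """Enumerate a toy family of 'machines' -- repeat(pattern, n) -- from the
    shortest pattern upward and return the first one that regenerates the data.
    The loop always terminates because the full string trivially matches at
    plen == len(data); no analogous search exists over all Turing machines."""
    for plen in range(1, len(data) + 1):
        pattern = data[:plen]
        if len(data) % plen == 0 and pattern * (len(data) // plen) == data:
            return pattern, len(data) // plen

print(smallest_repeat_generator("abcabcabcabc"))  # ('abc', 4): a short generator exists
print(smallest_repeat_generator("abcaabcb"))      # ('abcaabcb', 1): incompressible here
```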

In terms of applications, Gregory Chaitin, who is one of the originators of the core ideas of AIC, has proposed that the theory sheds light on questions of meta-mathematics and specifically that it demonstrates that mathematics is a quasi-empirical pursuit capable of producing new methods rather than being idealistically derived from analytic first-principles.… Read the rest

The Elusive in Art and Artificial Intelligence

Deep Dream (deepdreamgenerator.com) of my elusive inner Van Gogh.

How exactly deep learning models do what they do is at least elusive. Take image recognition as a task. We know that there are decision-making criteria inferred by the hidden layers of the networks. In Convolutional Neural Networks (CNNs), we have further knowledge that local receptive fields (or their simulated equivalent) provide a collection of filters that emphasize image features in different ways, from edge detection to rotation-invariant reductions, prior to being subjected to a learned categorizer. Yet the dividing lines between a chair and a small loveseat, or between two faces, are hidden within some non-linear equation composed of these field representations, with weights tuned by exemplar presentation.
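To make the filter idea a bit more concrete, here is a hedged sketch (mine, not from the post) of a single hand-picked kernel, a Sobel-style vertical edge detector, applied the way a CNN’s first layer applies its kernels; in a real network the kernel values would be learned rather than fixed:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode sliding-window filter (strictly cross-correlation, which is
    what deep learning frameworks call convolution): weighted sum of the
    receptive field at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge (Sobel-like) kernel; a trained CNN would learn many such filters.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy image: dark left half, bright right half -> strong response along the edge.
image = np.zeros((8, 8))
image[:, 4:] = 1.0
print(conv2d(image, sobel_x))
```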

This elusiveness was at least part of the reason that neural networks and, more generally, machine learning-based approaches have had a complicated position in AI research: if you can’t explain how they work, or even fairly characterize their failure modes, maybe we should work harder to understand the support for those decision criteria rather than just build black boxes to execute them?

So when groups use deep learning to produce visual artworks like the recently auctioned work sold by Christie’s for USD 432K, we can be reassured that the murky issue of aesthetics in art appreciation is at least paired with elusiveness in the production machine.

Or is it?

Let’s take Wittgenstein’s ideas about aesthetics as a perhaps slightly murky point of comparison. In Wittgenstein, we are almost always looking at what are effectively games played between and among people. In language, the rules are shared in a culture, a community, and even between individuals. These are semantic limits, dialogue considerations, standardized usages, linguistic pragmatics, expectations, allusions, and much more.… Read the rest

Running, Ancient Roman Science, Arizona Dive Bars, and Lightning Machine Learning

I just returned from running in Chiricahua National Monument, Sedona, Painted Desert, and Petrified Forest National Park, taking advantage of the late spring before the heat becomes too intense. Even though I made it to Massai Point in Chiricahua through 90+ degree canyons with around a liter of water left, I still had to slow down and walk out after running short of liquid nourishment two-thirds of the way down. There is an eerie, uncertain nausea that hits when hydration runs low under high stress. Cliffs and steep ravines take on a wolfish quality. The mind works to control feet against stumbling and the lips get serrated edges of parched skin that bite off without relieving the dryness.

I would remember that days later as I prepped to overnight with a wilderness permit in Petrified Forest only to discover that my Osprey Exos pack frame had somehow been bent, likely due to excessive manhandling by airport checked baggage weeks earlier. I considered my options and drove eighty miles to Flagstaff to replace the pack, then back again.

I arrived in time to join Dr. Richard Carrier in an unexpected dive bar in Holbrook, Arizona as the sunlight turned to amber and a platoon of Navajo pool sharks descended on the place for billiards and beers. I had read that Dr. Carrier would be stopping there and it was convenient to my next excursion, so I picked up signed copies of his new book, The Scientist in the Early Roman Empire, as well as his classic, On the Historicity of Jesus, which remains part of the controversial samizdat of so-called “Jesus mythicism.”

If there is a distinguishing characteristic of OHJ, it is the application of Bayes’s theorem to the problems of historical method.… Read the rest
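For readers who haven’t met it, the machinery in question is just Bayes’s theorem; how OHJ parameterizes its priors and likelihoods for ancient sources is its own long argument, but the underlying identity is the standard one:

```latex
% Bayes's theorem: the probability of a hypothesis h given evidence e,
% with the denominator expanded over h and its negation.
P(h \mid e) = \frac{P(e \mid h)\,P(h)}{P(e \mid h)\,P(h) + P(e \mid \lnot h)\,P(\lnot h)}
```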

Instrumentality and Terror in the Uncanny Valley

I got an Apple HomePod the other day. I have several Airplay speakers already, two in one house and a third in my separate office. The latter, a Naim Mu-So, combines Airplay with internet radio and bluetooth, but I mostly use it for the streaming radio features (KMozart, KUSC, Capital Public Radio, etc.). The HomePod’s Siri implementation combined with Apple Music allows me to voice control playlists and experiment with music that I wouldn’t generally have bothered to buy and own. I can now sample at my leisure without needing to broadcast via a phone or tablet or computer. Steve Reich, Bill Evans, Thelonious Monk, Bach organ mixes, variations of Tristan and Isolde, and, yesterday, when I asked for “workout music” I was gifted with Springsteen’s Born to Run, which I would never have associated with working out, but now I have dying on the mean streets of New Jersey with Wendy in some absurd drag race conflagration replaying over and over again in my head.

Right after setup, I had a strange experience. I was shooting random play thoughts to Siri, then refining them and testing the limits. There are many, as reviewers have noted. Items easily found in Apple Music occasionally fail for Siri on the HomePod, but simple requests and control of a few HomeKit devices work acceptably. The strange experience was my own trepidation over barking commands at the device, especially when I was repeating myself: “Hey Siri. Stop. Play Bill Evans. Stop. Play Bill Evans’ Peace Piece.” (Oh my, homophony, what will happen? It works.) I found myself treating Siri as a bit of a human being in that I didn’t want to tell her to do a trivial task that I had just asked her to perform.… Read the rest

Black and Gray Boxes with Autonomous Meta-Cognition

Vijay Pande of VC Andreessen Horowitz (who passed on my startups twice but, hey, it’s just business!) has a relevant article in The New York Times concerning fears of the “black box” of deep learning and related methods: is the lack of explainability and limited capacity for interrogation of the underlying decision making a deal-breaker for applications to critical areas like medical diagnosis or parole decisions? His point is simple, and related to the previous post’s suggestion of the potential limitations of our capacity to truly understand many aspects of human cognition. Even the doctor may only be able to point to a nebulous collection of clinical experiences when it comes to certain observational aspects of their job, as in reading images for indicators of cancer. At least the algorithm has been trained on a significantly larger collection of data than the doctor could ever encounter in a professional lifetime.

So the human is almost as much a black box (maybe a gray box?) as the algorithm. One difference that needs to be considered, however, is that the deep learning algorithm might make unexpected errors when confronted with unexpected inputs. The classic example from the early history of artificial neural networks involved a DARPA test of detecting military tanks in photographs. The apocryphal, perhaps legendary, formulation of the story is that there was a difference in the cloud cover between the tank images and the non-tank images. The end result was that the system performed spectacularly on the training and test data sets but then failed miserably on new data that lacked the cloud cover factor. I recently recalled this slightly differently and substituted film grain for the cloudiness. In any case, it became a discussion point about the limits of data-driven learning that showed how radically incorrect solutions could be created without careful understanding of how the systems work.… Read the rest
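A hedged, entirely synthetic sketch of that failure mode (my own toy data, not the DARPA set): the training labels happen to correlate with a “cloud cover” feature, the model learns the shortcut, and accuracy collapses on test data where the correlation is broken.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000

# Training set: "cloud cover" (feature 0) is accidentally correlated with the
# tank label; the genuinely tank-related features (1-4) are weak and noisy.
y_train = rng.integers(0, 2, n)
cloud_train = y_train + rng.normal(0, 0.1, n)           # near-perfect proxy
signal_train = 0.2 * y_train[:, None] + rng.normal(0, 1, (n, 4))
X_train = np.column_stack([cloud_train, signal_train])

# Test set: the cloud-cover correlation is gone; only the weak signal remains.
y_test = rng.integers(0, 2, n)
cloud_test = rng.normal(0, 0.5, n)                      # unrelated to label
signal_test = 0.2 * y_test[:, None] + rng.normal(0, 1, (n, 4))
X_test = np.column_stack([cloud_test, signal_test])

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))   # near perfect, thanks to the shortcut
print("test accuracy:", clf.score(X_test, y_test))      # far worse; the shortcut no longer helps
```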

Deep Simulation in the Southern Hemisphere

I’m unusually behind in my postings due to travel. I’ve been prepping for, and am now deep inside, a fresh pass through New Zealand after two years away. The complexity of the place seems to have a certain draw for me that has lured me back, yet again, to backcountry tramping amongst the volcanoes and glaciers, and to leisurely beachfront restaurants painted with eruptions of summer flowers fueled by the regular rains.

I recently wrote a technical proposal that rounded up a number of the most recent advances in deep learning neural networks. In each case, as with Google’s transformer architecture, there is a modest enhancement based on recognizing a deficit in the performance of one of the two broad network types, recurrent and convolutional.
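As a concrete anchor for what the transformer enhancement actually computes, here is a minimal scaled dot-product self-attention in NumPy (a sketch of the core operation only; the real architecture adds multi-head projections, residual connections, positional encodings, and more):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each position attends to every other position, weighted by query-key
    similarity -- the piece that lets the model skip recurrence entirely."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # (seq, seq) similarity matrix
    weights = softmax(scores, axis=-1)     # each row sums to 1
    return weights @ V                     # weighted mix of value vectors

seq_len, d_model = 5, 8
rng = np.random.default_rng(1)
X = rng.normal(size=(seq_len, d_model))    # toy token embeddings
print(scaled_dot_product_attention(X, X, X).shape)  # (5, 8)
```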

An old question is whether we learn anything about human cognition if we just simulate it using some kind of automatically learning mechanism. That is, if we use a model acquired through some kind of supervised or unsupervised learning, can we say we know anything about the original mind and its processes?

We can at least say that the learning methodology appears to be capable of achieving the technical result we were looking for. But it also might mean something a bit different: that there is not much more interesting going on in the original mind. In this radical corner sits the idea that cognitive processes in people are tactical responses left over from early human evolution. All you can learn from them is that they may be biased and tilted towards that early human condition, but beyond that things just are the way they turned out.

If we take this position, then, we might have to discard certain aspects of the social sciences.… Read the rest

I, Robot and Us

What happens if artificial intelligence (AI) technologies become significant economic players? The topic has come up in various ways for the past thirty years, perhaps longer. One model, the so-called technological singularity, posits that self-improving machines may be capable of a level of knowledge generation and disruption that will eliminate humans from economic participation. How far out this singularity might be is a matter of speculation, but I have my doubts that we really understand intelligence enough to start worrying about the impacts of such radical change.

Barring something essentially unknowable because we lack sufficient priors to make an informed guess, we can use evidence of the impact of mechanization on certain economic sectors, like agribusiness or transportation manufacturing, to try to plot out how mechanization might impact other sectors. Aghion, Jones, and Jones’s Artificial Intelligence and Economic Growth takes a deep dive into the topic. The math is not particularly hard, though the reasons for many of the equations are tied up in macro and microeconomic theory that requires a specialist’s understanding to fully grok.

Of special interest are the potential limiting roles of inputs and organizational competition. For instance, automation speed-ups may be constrained by human limitations within the economic activity. This may extend even further due to fundamental limitations of physics for a given activity. The pointed example is that power plants are limited by thermodynamics; no amount of additional mechanization can change that. Other factors related to inputs or the complexity of a certain stage of production may also drag economic growth to a capped, limiting level.
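My own hedged gloss on that bottleneck point, in the CES form the growth literature typically uses (not a formula lifted from the paper): when tasks are poor substitutes for one another, output is dragged toward the least-improvable task no matter how fast the automated ones get.

```latex
% Output aggregated over tasks x_1, ..., x_n with constant elasticity of
% substitution sigma = 1/(1 - rho). With rho < 0 the tasks are poor
% substitutes, and as rho -> -infinity, Y -> min_i x_i: the bottleneck task
% caps growth regardless of how cheap the automated tasks become.
Y = \left( \sum_{i=1}^{n} x_i^{\rho} \right)^{1/\rho}, \qquad \rho < 0
```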

Organizational competition and intellectual property considerations come into play, as well. While the authors suggest that corporations will remain relevant, they should become more horizontal by eliminating much of the middle tier of management and outsourcing components of their production.… Read the rest

Ambiguously Slobbering Dogs

I was initially dismissive of this note from Google Research on improving machine translation via Deep Learning Networks by adding in a sentence-level network. My goodness, they’ve rediscovered anaphora and co-reference resolution! Next thing they will try is some kind of network-based slot-filler ontology to carry gender metadata. But their goal was to add a framework to their existing recurrent neural network architecture that would support a weak, sentence-level resolution of translational ambiguities while still allowing the TPU/GPU accelerators they have created to function efficiently. It’s a hack, but one that potentially solves yet another corner of the translation problem and might result in a few percentage points of further improvement in translation quality.

But consider the following sentences:

The dog had the ball. It was covered with slobber.

The dog had the ball. It was thinking about lunch while it played.

In these cases, the anaphora gets resolved by semantics, and the resolution seems largely an automatic and subconscious process to us as native speakers. If we had to translate these into a second language, however, we would be able to articulate that there are specific reasons for correctly assigning the “It” to the ball in the first pair of sentences. Well, it might be possible for the dog to be covered with slobber, but we would guess the sentence writer would intentionally avoid that ambiguity. The second pair could conceivably be ambiguous if, in the broader context, the ball was some intelligent entity controlling the dog. Still, when our guesses are limited to the sentence pairs in isolation, we would assign the obvious interpretations. Moreover, we can resolve giant, honking passage-level ambiguities with ease, where the author is showing off in not resolving the co-referents until obscenely late in the text.… Read the rest
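A toy sketch of that resolution process, with the world knowledge hand-coded rather than learned (the feature table and scoring below are my own illustrative assumptions, not anyone’s production system): the pronoun goes to whichever candidate is semantically compatible with the predicate.

```python
# Hand-coded "world knowledge": which predicates require an animate antecedent,
# and which antecedents are animate. A real system would have to learn or look
# this up; here it is just enough to resolve the two example sentence pairs.
ANIMATE = {"the dog": True, "the ball": False}
PREDICATE_NEEDS_ANIMATE = {
    "was covered with slobber": False,   # things get slobbered on
    "was thinking about lunch": True,    # only animate things think
}

def resolve_it(candidates, predicate):
    """Pick the antecedent for 'it' whose animacy matches the predicate's
    requirement; fall back to the first candidate if nothing matches."""
    needs_animate = PREDICATE_NEEDS_ANIMATE[predicate]
    matches = [c for c in candidates if ANIMATE[c] == needs_animate]
    return matches[0] if matches else candidates[0]

cands = ["the dog", "the ball"]
print(resolve_it(cands, "was covered with slobber"))   # the ball (the preferred reading)
print(resolve_it(cands, "was thinking about lunch"))   # the dog
```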

The Obsessive Dreyfus-Hawking Conundrum

I’ve been obsessed lately. I was up at 5 A.M. yesterday and drove to Ruidoso to do some hiking (trails T93 to T92, if interested). The San Augustin Pass was desolate as the sun began breaking over, so I inched up into triple-digit speeds in the M6. Because that is what the machine is made for. Booming across White Sands Missile Range, I recalled watching base police work with National Park Rangers to chase oryx down the highway while early F-117s practiced touch-and-goes at Holloman in the background, and then driving my carpool truck out to the high energy laser site or desert ship to deliver documents.

I settled into Starbucks an hour and a half later and started writing on ¡Reconquista!, cranking out thousands of words before trying to track down the trailhead and starting on my hike. (I would have run the thing but wanted to go to lunch later and didn’t have access to a shower. Neither restaurant nor diners deserve an après-run moi.) And then I was on the trail and I kept stopping and taking plot and dialogue notes, revisiting little vignettes and annotating enhancements that I would later salt into the main text over lunch. And I kept rummaging through the development of characters, refining and sifting the facts of their lives through different sets of sieves until they took on both a greater valence within the story arc and, often, more comedic value.

I was obsessed and remain so. It is a joyous thing to be in this state, comparable only to working on large-scale software systems when the hours melt away and meals slip as one cranks through problem after problem, building and modulating the subsystems until the units begin to sing together like a chorus.… Read the rest