The Elusive in Art and Artificial Intelligence

Deep Dream (deepdreamgenerator.com) of my elusive inner Van Gogh.

How exactly deep learning models do what they do remains, to say the least, elusive. Take image recognition as a task. We know that there are decision-making criteria inferred by the hidden layers of the networks. In Convolutional Neural Networks (CNNs), we have the further knowledge that local receptive fields (or their simulated equivalent) provide a collection of filters that emphasize image features in different ways, from edge detection to rotation-invariant reductions, before the results are handed to a learned categorizer. Yet the dividing lines between a chair and a small loveseat, or between two faces, are hidden within some non-linear equation composed of these field representations, with weights tuned by exemplar presentation.
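To make the filter idea concrete, here is a minimal sketch of a single edge-detecting convolution, the kind of fixed feature detector a CNN's early layers end up learning. It is written in Python with NumPy; the Sobel kernel and the toy two-tone image are illustrative, not anything extracted from a trained network.

import numpy as np

# Slide a kernel across an image and record the filter response at each
# position -- the core operation behind a CNN's local receptive fields.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Sobel kernel: responds to horizontal intensity gradients (vertical edges).
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

# Synthetic image: dark left half, bright right half.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

print(conv2d(img, sobel_x))  # strong responses where the intensity jumps

In a real CNN, of course, the kernel values are not hand-picked like this; they are precisely the weights tuned by exemplar presentation.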

This elusiveness is at least part of the reason that neural networks and, more generally, machine learning-based approaches have had a complicated position in AI research: if you can’t explain how they work, or even fairly characterize their failure modes, shouldn’t we work harder to understand the support for those decision criteria rather than just build black boxes to execute them?

So when groups use deep learning to produce visual artworks like the work recently sold at Christie’s for USD 432K, we can be reassured that the murky issue of aesthetics in art appreciation is at least paired with elusiveness in the production machine.

Or is it?

Let’s take Wittgenstein’s ideas about aesthetics as a perhaps slightly murky point of comparison. In Wittgenstein, we are almost always looking at what are effectively games played between and among people. In language, the rules are shared in a culture, a community, and even between individuals. These are semantic limits, dialogue considerations, standardized usages, linguistic pragmatics, expectations, allusions, and much more.… Read the rest

Indifference and the Cosmos

I am a political independent, though that does not mean that I vote willy-nilly. I have, in fact, been reliably center-left for most of my adult life, save one youthfully rebellious moment when I voted Libertarian, more as a statement than a commitment to the principles of libertarianism per se. I regret that vote now, given additional exposure to the party and the kinds of people it attracts. To me, the extremes of the American political system are built around radical positions, and their increasingly noxious conspiracy theories and unhinged rhetoric are nothing like the cautious, problem-solving utopia that might make me politically happy, or at least wince less.

Some might claim I am indifferent. I would not argue with that. In the face of revolution, I would require a likely impossible proof of a better outcome before committing. How can we possibly see into such a permeable and contingent future, or weigh the goods and harms in the face of the unknown? This idea of indifference, as a tempering of our epistemic insights, serves as the basis for an essential idea in probabilistic reasoning, where it even bears a name: the principle of indifference or, in contradistinction to Leibniz’s principle of sufficient reason, the principle of insufficient reason.

So how does indifference work in probabilistic reasoning? Consider a Bayesian formulation: we inductively guess based on a priori probabilities combined with a posteriori evidence. What is the likelihood of the next word in an English sentence being “is”? Indifference suggests that we treat each word as likely as any other, but we know straight away that “is” occurs much more often than “Manichaeistic” in English texts because we can count words.… Read the rest
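A toy sketch of that contrast, with a miniature corpus invented purely for illustration: indifference assigns every vocabulary word the same probability, while counting gives “is” its empirical edge.

from collections import Counter

# Contrast the principle of indifference (a uniform prior over the
# vocabulary) with an empirical unigram estimate obtained by counting.
corpus = ("the cat is on the mat and the dog is in the yard "
          "the bird is on the fence").split()

counts = Counter(corpus)
vocab = sorted(counts)

p_uniform = 1 / len(vocab)              # indifference: every word equally likely
p_counted = counts["is"] / len(corpus)  # evidence: relative frequency

print(f"P('is') under indifference: {p_uniform:.3f}")
print(f"P('is') from counting:      {p_counted:.3f}")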

Incompressibility and the Mathematics of Ethical Magnetism

One of the most intriguing aspects of the current U.S. border crisis is the way that human rights and American decency get articulated in the public sphere of discourse. An initial pull is raw emotion and empathy; then there are counterweights, where the long-term consequences of existing policies are weighed against their exigent effects; and then there are crackpot theories of “crisis actors” and whatnot as bizarro-world distractions. But if we accept the general thesis that our enlightenment values are carrying us ever forward into increasing rights for all, reduced violence and war, and the closing of the curtain on the long human history of despair, poverty, and hunger, we must also ask more generally how this comes to be. Steven Pinker certainly has rounded up some social theories, but what kind of meta-ethics might be at work that seems to push human civilization towards these positive outcomes?

Per the last post, I take the position that we can potentially formulate meaningful sentences about what “ought” to be done, and that those sentences are meaningful precisely because they are grounded in the semantics we derive from real-world interactions. How does this work? Well, we can invoke the so-called Cornell Realists’ argument that the semantics of a word like “ought” is not as flexible as Moore’s Open Question argument suggests. Indeed, if we instead look at the natural world and the theories that we have built up about it (generally “scientific theories” but also, perhaps, “folk scientific ideas” or “developing scientific theories”), certain concepts take on the character of being so-called “joints of reality.” That is, they are less changeable than other concepts and become referential magnets that have an elite status among the concepts we use for the world.… Read the rest

Running, Ancient Roman Science, Arizona Dive Bars, and Lightning Machine Learning

I just returned from running in Chiricahua National Monument, Sedona, Painted Desert, and Petrified Forest National Park, taking advantage of the late spring before the heat becomes too intense. Even though I made it to Massai Point in Chiricahua through 90+ degree canyons with around a liter of water left, I still had to slow down and walk out after running short of liquid nourishment two-thirds of the way down. There is an eerie, uncertain nausea that hits when hydration runs low under high stress. Cliffs and steep ravines take on a wolfish quality. The mind works to control the feet against stumbling, and the lips get serrated edges of parched skin that bite off without relieving the dryness.

I would remember that days later as I prepped to overnight with a wilderness permit in Petrified Forest only to discover that my Osprey Exos pack frame had somehow been bent, likely due to excessive manhandling by airport checked baggage weeks earlier. I considered my options and drove eighty miles to Flagstaff to replace the pack, then back again.

I arrived in time to join Dr. Richard Carrier in an unexpected dive bar in Holbrook, Arizona as the sunlight turned to amber and a platoon of Navajo pool sharks descended on the place for billiards and beers. I had read that Dr. Carrier would be stopping there, and it was convenient to my next excursion, so I picked up signed copies of his new book, The Scientist in the Early Roman Empire, as well as his classic, On the Historicity of Jesus, which remains part of the controversial samizdat of so-called “Jesus mythicism.”

If there is a distinguishing characteristic of OHJ, it is the application of Bayes’ theorem to the problems of historical method.… Read the rest
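For the unfamiliar, the engine of that method is just Bayes’ theorem, stated here in generic notation (mine, not a quotation from Carrier), with h a historical hypothesis and e the surviving evidence:

\[
P(h \mid e) = \frac{P(e \mid h)\,P(h)}{P(e \mid h)\,P(h) + P(e \mid \lnot h)\,P(\lnot h)}
\]

The priors come from background knowledge about how often hypotheses of that kind pan out, and the likelihoods from how expected the evidence is on each hypothesis.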

Zebras with Machine Guns

I was just rereading some of the literature on Plantinga’s Evolutionary Argument Against Naturalism (EAAN) as a distraction from trying to write too much on ¡Reconquista!, since it looks like I am on a much faster trajectory to finishing the book than I had thought. EAAN is a curious little argument that some have dismissed as a resurgent example of scholastic theology. It has some newer trappings that we see in modern historical method, however, especially in the use of Bayes’ theorem to establish the warrant of beliefs by trying to cast those warrants as probabilities.

A critical part of Plantinga’s argument hinges on the notion that evolutionary processes select for behavior, not necessarily for belief. Therefore, it is plausible that an individual could hold false beliefs that are nonetheless adaptive. For instance, Plantinga gives the example of a man who desires to be eaten by tigers but always feels hopeless when confronted by a given tiger because he doesn’t feel worthy of that particular tiger, so he runs away and looks for another one. This may seem like a strange conjunction of beliefs and actions that happens to result in the man surviving, but we know from modern psychology that people can form elaborate justifications for perceived events and wild metaphysics to coordinate those justifications.
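A throwaway simulation makes the structural point; everything here, from the coin-flip trait distribution to the flee logic, is invented for illustration. If selection sees only whether an agent flees, belief-desire pairs that produce fleeing are equally fit whether or not the belief is true.

import random

random.seed(0)

# Each agent pairs a belief ("tigers are dangerous": True/False) with a
# desire ("wants to be eaten": True/False). Fitness depends only on the
# behavioral output: flee or don't.
def flees(believes_dangerous, wants_to_be_eaten):
    if believes_dangerous and not wants_to_be_eaten:
        return True   # the ordinary true-belief route to fleeing
    if not believes_dangerous and wants_to_be_eaten:
        return True   # Plantinga's man: false belief, perverse desire, same behavior
    return False

population = [(random.random() < 0.5, random.random() < 0.5)
              for _ in range(10_000)]
survivors = [agent for agent in population if flees(*agent)]

false_believers = sum(1 for belief, _ in survivors if not belief)
print(f"{len(survivors)} survive; "
      f"{false_believers / len(survivors):.0%} of them hold the false belief")

Roughly half the surviving population believes falsely, and selection cannot tell the difference.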

If that is the case, for Plantinga, the evolutionary consequence is that we should not trust our reasoning faculties, because the beliefs they produce are effectively arbitrary. There are dozens of responses to this argument that dissect it along many different dimensions. I’ve previously showcased Branden Fitelson and Elliott Sober’s Plantinga’s Probability Arguments Against Evolutionary Naturalism from 1997, which I think is one of the most complete examinations of the structure of the argument.… Read the rest

Apprendre à traduire

Google Translate has always been a useful tool for awkward gists of short texts. The method used was based on building a phrase-based statistical translation model. To do this, you gather up “parallel” texts that are existing human translations. You then “align” them by trying to find the most likely corresponding phrases in each sentence or set of sentences. Often, between languages, fewer or more sentences will be used to express the same ideas. Once you have that collection of phrasal translation candidates, you can guess the most likely translation of a new sentence by looking up the sequence of likely phrase groups that correspond to that sentence. IBM was the progenitor of this approach in the late 1980s.
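Here is a cartoon of the lookup step, with a tiny invented phrase table standing in for what alignment over parallel corpora would actually learn. Real systems also weigh competing segmentations with a language model and reorder phrases, which this greedy sketch skips.

# Greedy phrase-based lookup: prefer the longest known source phrase at
# each position. The French-English table is invented for illustration.
phrase_table = {
    ("le", "chat"): ("the", "cat"),
    ("dort",): ("sleeps",),
    ("sur", "le", "tapis"): ("on", "the", "mat"),
}

def translate(words):
    out, i = [], 0
    while i < len(words):
        for span in range(len(words) - i, 0, -1):
            src = tuple(words[i:i + span])
            if src in phrase_table:
                out.extend(phrase_table[src])
                i += span
                break
        else:
            out.append(words[i])  # unknown word: pass it through untranslated
            i += 1
    return " ".join(out)

print(translate("le chat dort sur le tapis".split()))  # -> the cat sleeps on the mat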

It’s simple and elegant, but it was always criticized for telling us very little about language. Other methods, using techniques like interlingual transfer and parsers, showed a more linguist-friendly face. In these methods, the source language is parsed into a parse tree, and then that parse tree is converted into a generic representation of the meaning of the sentence. Next, a generator uses that representation to create a surface-form rendering in the target language. The interlingua must be something like the deep meaning posited by linguistic theories, though the computer science versions of it tended to look a lot like ontological representations with fixed meanings. Flexibility was never the strong suit of these approaches, but their flaws ran much deeper than that.

For one, nobody was able to build a robust parser for any particular language. Next, the ontology was never vast enough to accommodate the rich productivity of real human language. And generators, being the inverse of parsers, remained only toy projects in the computational linguistics community.… Read the rest

Boredom and Being a Decider

Seth Lloyd and I have rarely converged (read: absolutely never) on a realization, but his remarkable 2013 paper on free will and halting problems does, in fact, converge on a paper I wrote around 1986 for an undergraduate Philosophy of Language course. I was, at the time, very taken by Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter’s poetic excursion around the topic of recursion, vertical structure in ricercars, and various other topics that stormed about in his book. For me, when combined with other musings on halting problems, it led to the conclusion that the halting problem could be probabilistically solved by an observer who decides when the recursion is too repetitive or too deep. That is, it prescribes an overlay algorithm that guesses about the odds of another algorithm halting when subjected to a time or resource constraint. Thus we have a boredom algorithm.
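A sketch of what such an overlay might look like; the step functions and the patience budget below are invented examples. The runner watches another computation step by step and gives up, guessing 'probably never halts,' once a state repeats or its patience is exhausted.

# Run a step function from a starting state, with boredom as the stopping
# criterion: repetition or too many steps. None signals a halt.
def bored_runner(step, state, patience=1000):
    seen = set()
    for _ in range(patience):
        if state is None:
            return "halts"
        if state in seen:
            return "probably loops (state repeated)"
        seen.add(state)
        state = step(state)
    return "probably never halts (patience exhausted)"

# Collatz-style iteration: halts (reaches 1) from 27 after ~111 steps.
collatz = lambda n: None if n == 1 else (n // 2 if n % 2 == 0 else 3 * n + 1)
print(bored_runner(collatz, 27))               # halts
print(bored_runner(lambda n: (n + 1) % 5, 0))  # probably loops (state repeated)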

I thought this was rather brilliant at the time and I ended up having a one-on-one with my prof who scoffed at GEB as a “serious” philosophical work. I had thought it was all psychedelically transcendent and had no deep understanding of more serious philosophical work beyond the papers by Kripke, Quine, and Davidson that we had been tasked to read. So I plead undergraduateness. Nevertheless, he had invited me to a one-on-one and we clashed over the concept of teleology and directedness in evolutionary theory. How we got to that from the original decision trees of halting or non-halting algorithms I don’t recall.

But now we have an argument that essentially recapitulates that original form, though with the help of the Hartmanis-Stearns theorem to support it. Whatever the algorithm that runs in our heads, it needs to simulate possible outcomes and try to determine what the best course of action might be (or the worst course, or just some preference).… Read the rest

Evolving Visions of Chaotic Futures

Most artificial intelligence researchers consider it unlikely that a robot apocalypse or some kind of technological singularity is coming anytime soon. I’ve said as much, too. Guessing about the likelihood of distant futures is fraught with uncertainty; current trends are almost impossible to extrapolate.

But if we must, what are the best ways for guessing about the future? In the late 1950s the Delphi method was developed. Get a group of experts on a given topic and have them answer questions anonymously. Then iteratively publish back the group results and ask for feedback and revisions. Similar methods have been developed for face-to-face group decision making, like Kevin O’Connor’s approach to generating ideas in The Map of Innovation: generate ideas and give participants votes equaling a third of the number of unique ideas. Keep iterating until there is a consensus. More broadly, such methods are called “nominal group techniques.”
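A toy rendering of the Delphi loop, with the initial estimates and the halfway revision weight invented for illustration: collect anonymous estimates, feed back the group median, let each expert revise partway toward it, and repeat until the spread collapses.

import statistics

estimates = [12.0, 40.0, 25.0, 90.0, 33.0]  # round-zero expert answers

for round_no in range(1, 6):
    median = statistics.median(estimates)
    # Each expert moves halfway toward the reported group median.
    estimates = [e + 0.5 * (median - e) for e in estimates]
    spread = max(estimates) - min(estimates)
    print(f"round {round_no}: median={median:.1f}, spread={spread:.1f}")
    if spread < 1.0:
        break

Real panels are messier, of course: experts defend outlier positions, and the feedback includes reasons, not just numbers.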

Most recently, the notion of prediction markets has been applied to internal and external decision making. In prediction markets, a similar voting strategy is used, but based on either fake or real money, forcing participants towards a risk-averse allocation of assets.

Interestingly, we know that optimal inference based on past experience can be codified using algorithmic information theory, but the fundamental problem with any kind of probabilistic argument is that much of the change we observe in society is non-linear with respect to its underlying drivers, and the signals needed for prediction are imperfect. As the mildly misanthropic Nassim Taleb pointed out in The Black Swan, the only place where prediction takes on smooth statistical regularity is in Las Vegas, which is why one shouldn’t bother to gamble.… Read the rest

The Goldilocks Complexity Zone

Since my time in the early 90s at the Santa Fe Institute, I’ve been fascinated by the informational physics of complex systems. What are the requirements of an abstract system that is capable of complex behavior? How do our intuitions about complex behavior or form match up with mathematical approaches to describing complexity? For instance, we might consider a snowflake complex, but it is also regular in its structure, driven by an interaction between crystal growth and the surrounding air. The classic examples of coastlines and fractal self-symmetry also seem complex but are not capable of complex behavior.

So what is a good way of thinking about complexity? There is actually a good range of ideas about how to characterize complexity. Seth Lloyd rounds up many of them here. The intuition that drives many of them is that complexity seems to be associated with distributions of relationships and objects that are somehow juxtaposed between a single state and a uniformly random set of states. Complex things, be they living organisms or computers running algorithms, should exist in a Goldilocks zone when each part is examined and those parts are somehow summed up into a single measure.

We can easily construct a complexity measure that captures some of these intuitions. Let’s look at three strings of characters:

x = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

y = menlqphsfyjubaoitwzrvcgxdkbwohqyxplerz

z = the fox met the hare and the fox saw the hare

Now we would likely all agree that y and z are more complex than x, and I suspect most would agree that y looks like gibberish compared with z. Of course, y could be a sequence of weirdly coded measurements or something, or encrypted such that the message appears random.… Read the rest
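Here is one easy-to-build measure in that spirit, a sketch only: use zlib’s compressed size as a crude stand-in for algorithmic information, normalize it, and weight it so that both the fully regular and the fully random extremes score low. The 4p(1-p) weighting is an illustrative choice, not a canonical definition.

import zlib

OVERHEAD = len(zlib.compress(b""))  # zlib's fixed header/checksum cost

# Compressed payload per character, clamped to [0, 1], as a rough proxy
# for algorithmic information content.
def normalized_info(s):
    payload = len(zlib.compress(s.encode())) - OVERHEAD
    return min(max(payload / len(s), 0.0), 1.0)

def goldilocks_score(s):
    p = normalized_info(s)
    return 4 * p * (1 - p)  # peaks midway between order and randomness

x = "a" * 38  # the all-a string above
y = "menlqphsfyjubaoitwzrvcgxdkbwohqyxplerz"
z = "the fox met the hare and the fox saw the hare"

for name, s in (("x", x), ("y", y), ("z", z)):
    print(f"{name}: info={normalized_info(s):.2f}, score={goldilocks_score(s):.2f}")

On this scoring, x is penalized for being nearly free of information, y for being nearly incompressible, and the patterned-but-not-trivial z should land highest, matching the Goldilocks intuition.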