Motivation, Boredom, and Problem Solving

In The New York Times' The Stone column, James Blachowicz of Loyola challenges the assumption that the scientific method is uniquely distinguishable from other ways of thinking and problem solving we regularly employ. In his example, he lays out how writing poetry involves finding an alignment of words that conforms to the requirements of the poem. Whether actively aware of the process or not, the poet is solving constraint satisfaction problems concerning formal requirements like meter and structure, linguistic problems like parts of speech and grammar, semantic problems concerning meaning, and pragmatic problems like referential extension and symbolism. Scientists do the same kinds of things in fitting a theory to data. And, in Blachowicz's analysis, there is no special distinction between the scientific method and other creative methods like the composition of poetry.

We can easily see how this extends to ideas like musical composition and, indeed, extends with even more constraints that range from formal through to possibly the neuropsychology of sound. I say “possibly” because there remains uncertainty on how much nurture versus nature is involved in the brain’s reaction to sounds and music.

In terms of a computational model of this creative process, if we presume that there is an objective function that governs possible fits to the given problem constraints, then we can clearly optimize towards a maximum fit. For many of the constraints there are, however, discrete parameterizations (which part of speech? which word?) that are not like curve fitting to scientific data. In fairness, discrete parameters occur there, too, especially in meta-analyses of broad theoretical possibilities (loop quantum gravity vs. string theory? What will we tell the children?). The discrete parameterizations blow up the search space with their combinatorics, demonstrating on the one hand why we are so damned amazing, and on the other hand why a controlled randomization method like evolutionary epistemology's blind variation and selective retention gives us potential traction in the face of this curse of dimensionality.… Read the rest
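
That blind-variation-and-selective-retention search can be sketched in a few lines. Everything here — the word slots, the vocabulary, and the toy fitness function standing in for metrical and semantic constraints — is invented purely for illustration:

```python
import random

# Each discrete slot in a "line" has a small set of candidate words; an
# objective function scores a complete choice. All values are invented.
VOCAB = [["pale", "dark", "wild"], ["moon", "sea", "wind"], ["sings", "waits", "burns"]]

def fitness(words):
    # Toy stand-in for formal/semantic constraints: reward longer words
    # and alliteration between the first two slots.
    score = sum(len(w) for w in words)
    if words[0][0] == words[1][0]:
        score += 5
    return score

def blind_variation_selective_retention(generations=200, seed=0):
    rng = random.Random(seed)
    best = [rng.choice(slot) for slot in VOCAB]
    for _ in range(generations):
        # Blind variation: mutate one discrete slot at random.
        candidate = list(best)
        i = rng.randrange(len(VOCAB))
        candidate[i] = rng.choice(VOCAB[i])
        # Selective retention: keep the variant only if it scores at least as well.
        if fitness(candidate) >= fitness(best):
            best = candidate
    return best
```

The point of the sketch is that nothing in the loop understands poetry; random proposal plus retention of what satisfies the constraints is enough to climb the discrete landscape, at least for small combinatorial spaces.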

Soul Optimization

I just did a victory lap around the wooden columns in my kitchen and demanded high-fives all around: Against Superheroes is done. Well, technically it has just cleared the first hurdle. Core writing is complete at 100,801 words. I will now do two editorial passes and then send it to my editor for clean-up. Finally, I'll get some feedback from my wife before sending it out for independent review.

I try to write according to a daily schedule but I have historically been an inconsistent worker. I track everything using a spreadsheet and it doesn’t look pretty:

[chart: daily word counts]

Note the long gaps. The gaps are problematic for several reasons, not the least of which is that I have to go back and read everything again to return to form. The gaps arrive with excuses, then get amplified by more excuses, then get massaged into to-do lists, and then always get resolved by unknown forces. Maybe they are unknowable.

The one consistency that I have found is that I always start strong and finish strong, bursts of enthusiasm for the project arriving with runner’s high on the trail, or while waiting in traffic. The plot thickets open to luxuriant fields. When I’m in the gap periods I distract myself too easily, finding the deep research topics an easy way to justify an additional pause of days, then weeks, sometimes months.

I guess I should resolve to find my triggers and work to overcome these tendencies, but I’m not certain that it matters. There is no rush, and those exuberant starts and ends are perhaps enough of a reward that no deeper optimization of my soul is needed.… Read the rest

Quantum Field Is-Oughts

Sean Carroll’s Oxford lecture on Poetic Naturalism is worth watching (below). In many ways it just reiterates several common themes. First, it reinforces the is-ought barrier between values and observations about the natural world. It does so with particular depth, though, by identifying how coarse-grained theories at different levels of explanation can be equally compatible with quantum field theory. Second, and related, he shows how entropy is an emergent property of atomic theory and the interactions of quantum fields (that we think of as particles much of the time) and, importantly, that we can project the same notion of boundary conditions that give rise to entropy into the future, yielding a kind of effective teleology. That is, there can be some boundary conditions for the evolution of large-scale particle systems that form into configurations we can label purposeful or purposeful-like. I still like the term “teleonomy” to describe this alternative notion, but the language largely doesn’t matter except as an educational tool for distinguishing it from the semantic embeddings of old scholastic monks.

Finally, the poetry aspect resolves in value theories of the world. Many are compatible with descriptive theories, and our resolution of them comes through opinion, reason, communication, and, yes, violence and war. There is no monopoly of policy theories, religious claims, or idealizations that holds sway. Instead we have interests and collective movements, and all of the above, working together to define our moral frontiers.

… Read the rest

Local Minima and Coatimundi

Even given the basic conundrum of how deep learning neural networks might cope with temporal presentations or linear sequences, there is another oddity to deep learning that only seems obvious in hindsight. One of the main enhancements to traditional artificial neural networks is a phase of unsupervised pre-training that forces each layer to try to create a generative model of the input pattern. The deep learning networks then learn a discriminative model after the initial pre-training is done, focusing on the error relative to classification rather than simply recognizing the phrase or image per se.

Why this makes a difference has been the subject of some investigation. In general, there is an interplay between the smoothness of the error function and the ability of the optimization algorithms to cope with local minima. Visualize it this way: for any machine learning problem that needs to be solved, there are answers and better answers. Take visual classification. If the system (or you) gets shown an image of a coatimundi and a label that says coatimundi (heh, I’m running in New Mexico right now…), learning that image-label association involves adjusting weights assigned to different pixels in the presented image down through multiple layers of the network that provide increasing abstractions about the features that define a coatimundi. And, importantly, that define a coatimundi versus all the other animals and non-animals.

These weight choices define an error function that is the optimization target for the network as a whole, and this error function can have many local minima. That is, by enhancing the weights supporting a coati versus a dog or a raccoon, the algorithm inadvertently leans towards a non-optimal assignment for all of them by focusing instead on a balance between them that is predestined by the previous dog and raccoon classifications (or, in general, the order of presentation).… Read the rest
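
A toy one-dimensional error surface makes the trap concrete. The quartic below is an invented function, not any real network's loss; it has a local minimum near w ≈ -0.69 and a global minimum near w ≈ 2.19, and plain gradient descent lands in one or the other depending solely on where it starts:

```python
# Invented toy loss with two minima, to illustrate how the starting point
# (analogous to the order of presentation) determines which minimum is found.

def loss(w):
    # Quartic: local minimum near w = -0.69, global minimum near w = 2.19.
    return w**4 - 2*w**3 - 3*w**2 + 5

def grad(w):
    # Analytic derivative of the loss above.
    return 4*w**3 - 6*w**2 - 6*w

def descend(w, lr=0.01, steps=2000):
    # Plain gradient descent: no momentum, no restarts, no noise.
    for _ in range(steps):
        w -= lr * grad(w)
    return w
```

Starting at w = -2 settles into the poorer local minimum, while starting at w = 3 reaches the global one; in the network case, the earlier dog and raccoon classifications play the same role as the starting point here.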

Dates, Numbers, and Canadian Makings

On the 21st of June, 1997, which was the solstice, my wife and I married. We celebrated that date again today, but it is not the solstice again due to astronomical drift around the calendar. And I also crossed the 100,000-word mark on Against Superheroes, moving towards the resolution of a novel that could, conceivably, have no ending. There are always more mythologies to be explored.

Just last week I was in Banff, Canada, sitting quietly with my bear spray and a little titanium cook pot. I didn’t have to deploy the mace, and was relieved I also didn’t have to endure twelve hours of wolf stalking like this Canadian woman.

And while I was north of the US border, I learned that a Canadian animated film I was involved with was released to Amazon Prime video. I am just an Executive Producer of the film, which means that I had no creative input, but I am really pleased with the film. Ironically, I couldn’t watch this Canadian product while in Canada, just an hour from the studio that produced it. But rest assured that Christmas will be saved in the end!… Read the rest

New Behaviorism and New Cognitivism

Deep Learning now dominates discussions of intelligent systems in Silicon Valley. Jeff Dean’s discussion of its role in the Alphabet product lines and initiatives shows the dominance of the methodology. Pushing the limits of what artificial neural networks can do has been driven by certain algorithmic enhancements and by the ability to run weight-training algorithms at much higher speeds and over much larger data sets. Google even developed specialized hardware to assist.

Broadly, though, we see mostly pattern recognition problems like image classification and automatic speech recognition being impacted by these advances. Natural language parsing has also recently had some improvements from Fernando Pereira’s team. The incremental improvements using these methods should not be minimized but, at the same time, the methods don’t emulate key aspects of what we observe in human cognition. For instance, the networks train incrementally and lack the kinds of rapid transitions that we observe in human learning and thinking.

In a strong sense, the models that Deep Learning uses can be considered Behaviorist in that they rely almost exclusively on feature presentation with a reward signal. The internal details of how modularity or specialization arise within the network layers are interesting but secondary to the broad use of back-propagation or Gibbs sampling combined with autoencoding. This is a critique that goes back to the early days of connectionism, of course, and why it was somewhat sidelined after an initial heyday in the late eighties. Then came statistical NLP, then came hybrid methods, then a resurgence of corpus methods, all the while with image processing getting more and more into the hand-crafted modular space.

But we can see some interesting developments that start to stir more Cognitivism into this stew.… Read the rest

Evolving Visions of Chaotic Futures

Most artificial intelligence researchers consider it unlikely that a robot apocalypse or some kind of technological singularity is coming anytime soon. I’ve said as much, too. Guessing about the likelihood of distant futures is fraught with uncertainty; current trends are almost impossible to extrapolate.

But if we must, what are the best ways for guessing about the future? In the late 1950s the Delphi method was developed. Get a group of experts on a given topic and have them answer questions anonymously. Then iteratively publish back the group results and ask for feedback and revisions. Similar methods have been developed for face-to-face group decision making, like Kevin O’Connor’s approach to generating ideas in The Map of Innovation: generate ideas and give participants votes equaling a third of the number of unique ideas. Keep iterating until there is a consensus. More broadly, such methods are called “nominal group techniques.”

Most recently, the notion of prediction markets has been applied to internal and external decision making. In prediction markets, a similar voting strategy is used but based on either fake or real money, forcing participants towards a risk-averse allocation of assets.
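
One concrete way to run such a market is Hanson's logarithmic market scoring rule, where an automated market maker quotes prices that can be read directly as probabilities. The sketch below is a minimal two-outcome version; the liquidity parameter and trade sizes are invented for illustration:

```python
import math

# Minimal two-outcome prediction market using the logarithmic market
# scoring rule (LMSR). The liquidity parameter b is invented; larger b
# means prices move less per share traded.

def cost(q, b=100.0):
    # Market maker's cost function over outstanding shares q = [q_yes, q_no].
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def price(q, i, b=100.0):
    # Instantaneous price of outcome i, interpretable as its probability.
    exps = [math.exp(qi / b) for qi in q]
    return exps[i] / sum(exps)

def buy(q, i, shares, b=100.0):
    # A trader pays the change in the cost function; buying outcome i
    # pushes its price (probability estimate) upward.
    new_q = list(q)
    new_q[i] += shares
    fee = cost(new_q, b) - cost(q, b)
    return new_q, fee
```

Because traders pay real (or play) money to move the price, overconfident bets are penalized and the quoted prices aggregate the group's beliefs, which is exactly the risk-sensitive allocation pressure described above.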

Interestingly, we know that optimal inference based on past experience can be codified using algorithmic information theory, but the fundamental problem with any kind of probabilistic argument is that much change that we observe in society is non-linear with respect to its underlying drivers and that the signals needed are imperfect. As the mildly misanthropic Nassim Taleb pointed out in The Black Swan, the only place where prediction takes on smooth statistical regularity is in Las Vegas, which is why one shouldn’t bother to gamble.… Read the rest

Purple Flowers

I’ve got Ligeti’s Lontano running—vast sheaves of tones, building and yielding like the striated dark bands in the edge sands of a beach—while the sun sets through clouds and fern pines. The shadows on the windows are swarming minnows rising and falling before the dense mass of new rain like a Great Wall to the west.

I just returned from an overnighter to Monterey, running out from Cannery Row along the complex purple-flowered walls of Pacific Grove, yelping harbor seals cooking along the rocks and thin spits of sand, their voices mixing with the endless baby wails of seabirds as evening set in.

And Prince is dead. I seem to be commemorating dead people these days. I thought he was an interesting oddity in the earliest years of MTV with his soul moves, but didn’t really find a degree of respect for his talents until I was in the Peace Corps and on leave to the capital city of Fiji, Suva, over summer break (December-ish, 1990). I had nothing better to do than to wander the streets, meet up and drink with whoever was available, and go to the air-conditioned movie theater whenever I could afford it. I saw Graffiti Bridge probably four times and, by the second showing, had revised my opinion of Prince to that of a great innovator. The music was both like and unlike anything I had heard, rippling with tempo changes and rapid polyrhythms, sharp synchronization and dense harmonies. The movie was, however, terrible.

Like Bowie’s Quicksand and Velvet Underground’s Candy Says, I memorialize Cream and Kiss, and, perhaps the most impressive, Sinead O’Connor’s rendition of Nothing Compares 2 U. It still brings tears.… Read the rest

Build Up That Wall

No, I’m not endorsing the construction of additional walls between the United States and Mexico. There are plenty of those and they may be of questionable value. Instead, it is Thomas Jefferson’s birthday and I’m quoting from Christopher Hitchens (who shared his birthday with Jefferson) in repurposing and inverting Reagan’s famous request of Gorbachev. Hitch promoted the Jeffersonian ideal of separating out the civic from the religious:

Be it enacted by General Assembly that no man shall be compelled to frequent or support any religious worship, place, or ministry whatsoever, nor shall be enforced, restrained, molested, or burthened in his body or goods, nor shall otherwise suffer on account of his religious opinions or belief, but that all men shall be free to profess, and by argument to maintain, their opinions in matters of Religion, and that the same shall in no wise diminish, enlarge or affect their civil capacities.

from Jefferson’s Virginia Statute for Religious Freedom

A rather remarkable continuation of Enlightenment concepts that derive, typically, from a notion of “natural rights” and, even in the Virginia Statute, from religious concepts: “Whereas, Almighty God hath created the mind free.” The following paragraphs note that human rulers are fallible and have tended to create false religions down through time, apparently regardless of God’s wishes.

Natural rights are an interesting idea that recurs in the Declaration of Independence and that George Mason also championed in the Virginia Declaration of Rights. The notion that natural rights did not extend to slaves was something that Jefferson was conflicted about, according to Hitchens, until the end of his life, with the issue of states’ rights a pragmatic basis for opposition to an institution that he both profited from and found morally repugnant.… Read the rest