Running, Ancient Roman Science, Arizona Dive Bars, and Lightning Machine Learning

I just returned from running in Chiricahua National Monument, Sedona, Painted Desert, and Petrified Forest National Park, taking advantage of the late spring before the heat becomes too intense. Even so, though I made it to Massai Point in Chiricahua through 90+ degree canyons with around a liter of water left, I still had to slow to a walk on the way out after running short of liquid nourishment two-thirds of the way down. There is an eerie, uncertain nausea that hits when hydration runs low under high stress. Cliffs and steep ravines take on a wolfish quality. The mind works to keep the feet from stumbling, and the lips get serrated edges of parched skin that bite off without relieving the dryness.

I would remember that days later as I prepped to overnight with a wilderness permit in Petrified Forest only to discover that my Osprey Exos pack frame had somehow been bent, likely due to excessive manhandling by airport checked baggage weeks earlier. I considered my options and drove eighty miles to Flagstaff to replace the pack, then back again.

I arrived in time to join Dr. Richard Carrier in an unexpected dive bar in Holbrook, Arizona as the sunlight turned to amber and a platoon of Navajo pool sharks descended on the place for billiards and beers. I had read that Dr. Carrier would be stopping there and it was convenient to my next excursion, so I picked up signed copies of his new book, The Scientist in the Early Roman Empire, as well as his classic, On the Historicity of Jesus, which remains part of the controversial samizdat of so-called “Jesus mythicism.”

If there is a distinguishing characteristic of OHJ, it is the application of Bayes’ theorem to the problems of historical method.… Read the rest
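The mechanics of that Bayesian move are simple enough to sketch numerically. Below is a minimal Bayes'-theorem update for a historical hypothesis H given some evidence E; all of the probability values are invented purely for illustration and are not Carrier's estimates:

```python
# Bayes' theorem for a hypothesis H given evidence E:
#   P(H|E) = P(E|H) * P(H) / (P(E|H) * P(H) + P(E|~H) * P(~H))
# Every number below is made up for illustration only.

def posterior(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    """Probability of hypothesis H after observing evidence E."""
    numer = likelihood_e_given_h * prior_h
    denom = numer + likelihood_e_given_not_h * (1.0 - prior_h)
    return numer / denom

# A weak prior for H, but evidence twice as likely if H is true:
p = posterior(prior_h=0.25,
              likelihood_e_given_h=0.8,
              likelihood_e_given_not_h=0.4)
```

The point of the method is less the arithmetic than the discipline: it forces each piece of evidence to be weighed by how much more (or less) expected it is under the hypothesis than under its alternatives.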

Quintessence of Rust

¡Reconquista! is making the rounds looking for representation. This is a first for me. I’ve been strongly attracted to the idea of disintermediating publishing, music, film, transportation, business, and anything else that happens by; my Silicon Valley persona sees disruption as a virtue, for better or worse. But why not learn from the mainstream model for the trajectory of books and ideas?

Meanwhile, nothing sits still. I’m journeying through Steven Pinker’s latest data dump, Enlightenment Now. My bookshelf crawls with his books, including Learnability and Cognition, Language Learnability and Language Development, The Stuff of Thought, Words and Rules, and The Language Instinct, while my Kindle app holds The Better Angels of Our Nature and the new tome. I also have a novel by his wife, Rebecca Newberger Goldstein, though I have never read any of her more scholarly materials on Spinoza.

Enlightenment Now (shouldn’t it have an exclamation point?) gives an upbeat shine to these days of social media anger and the jeremiads of fracturing America. Pinker is a polished writer who should be required reading, if only for his clarity and perseverance in structuring support for mature theses. There is some overlap with Better Angels, but it’s easy to skip past the repetition and still find new insights.

And I have a new science fiction/cyberpunk series under development, tentatively titled Quintessence of Rust. Robots, androids, transhumans, AIs, and a future world where reality, virtual existences, and constructed fictions all vie for political and social relevance. Early experimentation with alternative voices and story arc tuning has shown positive results. There is even some concept art executed with an iPad Pro + Pencil combined with Procreate (I’m amused a firm would choose that name!).… Read the rest

Instrumentality and Terror in the Uncanny Valley

I got an Apple HomePod the other day. I have several AirPlay speakers already, two in one house and a third in my separate office. The latter, a Naim Mu-So, combines AirPlay with internet radio and Bluetooth, but I mostly use it for the streaming radio features (KMozart, KUSC, Capital Public Radio, etc.). The HomePod’s Siri implementation combined with Apple Music allows me to voice control playlists and experiment with music that I wouldn’t generally have bothered to buy and own. I can now sample at my leisure without needing to broadcast via a phone or tablet or computer. Steve Reich, Bill Evans, Thelonious Monk, Bach organ mixes, variations of Tristan und Isolde, and, yesterday, when I asked for “workout music” I was gifted with Springsteen’s Born to Run, which I would never have associated with working out, but now I have dying on the mean streets of New Jersey with Wendy in some absurd drag race conflagration replaying over and over again in my head.

Right after setup, I had a strange experience. I was shooting random play thoughts to Siri, then refining them and testing the limits. There are many, as reviewers have noted. Items easily found in Apple Music occasionally fail for Siri on the HomePod, but simple requests and control of a few HomeKit devices work acceptably. The strange experience was my own trepidation over barking commands at the device, especially when I was repeating myself: “Hey Siri. Stop. Play Bill Evans. Stop. Play Bill Evans’ Peace Piece.” (Oh my, homophony, what will happen? It works.) I found myself treating Siri as a bit of a human being in that I didn’t want to tell her to do a trivial task that I had just asked her to perform.… Read the rest

Black and Gray Boxes with Autonomous Meta-Cognition

Vijay Pande of VC Andreessen Horowitz (who passed on my startups twice but, hey, it’s just business!) has a relevant article in The New York Times concerning fears of the “black box” of deep learning and related methods: is the lack of explainability and limited capacity for interrogation of the underlying decision making a deal-breaker for applications to critical areas like medical diagnosis or parole decisions? His point is simple, and related to the previous post’s suggestion of the potential limitations of our capacity to truly understand many aspects of human cognition. Even the doctor may only be able to point to a nebulous collection of clinical experiences when it comes to certain observational aspects of their jobs, as in reading images for indicators of cancer. At least the algorithm has been trained on a significantly larger collection of data than the doctor could ever encounter in a professional lifetime.

So the human is almost as much a black box (maybe a gray box?) as the algorithm. One difference that needs to be considered, however, is that the deep learning algorithm might make unexpected errors when confronted with unexpected inputs. The classic example from the early history of artificial neural networks involved a DARPA test of detecting military tanks in photographs. The formulation of the story, somewhere between apocryphal and legendary, is that there was a difference in the cloud cover between the tank images and the non-tank images. The end result was that the system performed spectacularly on the training and test data sets but then failed miserably on new data that lacked the cloud cover factor. I recently recalled the story slightly differently, substituting film grain for the cloud cover. In any case, it became a discussion point about the limits of data-driven learning that showed how radically incorrect solutions could be created without careful understanding of how the systems work.… Read the rest
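The tank legend is easy to reproduce in miniature. Here is a toy sketch, with synthetic two-feature "images" invented for illustration: a lazy single-feature classifier trained on data where cloud cover perfectly predicts the label latches onto the sky, then collapses the moment the confound changes:

```python
import random

random.seed(0)

def make_image(has_tank, bright_sky):
    """Tiny synthetic 'image' reduced to two features:
    a noisy tank signal and the sky brightness."""
    tank = (1.0 if has_tank else 0.0) + random.uniform(-0.6, 0.6)
    sky = (0.9 if bright_sky else 0.1) + random.uniform(-0.05, 0.05)
    return (tank, sky)

# The flawed training set: every tank photo shot under dark skies,
# every tank-free photo under bright ones -- the legend's confound.
train = [(make_image(True, False), 1) for _ in range(100)] + \
        [(make_image(False, True), 0) for _ in range(100)]

def accuracy(rule, data):
    f, sign = rule
    return sum((sign * x[f] > sign * 0.5) == bool(y)
               for x, y in data) / len(data)

def fit_threshold(data):
    """Pick the single feature/direction that best separates the classes
    at a 0.5 threshold (ties broken toward the later feature)."""
    best, best_acc = None, -1.0
    for f in (0, 1):
        for sign in (1, -1):
            acc = accuracy((f, sign), data)
            if acc >= best_acc:
                best, best_acc = (f, sign), acc
    return best

rule = fit_threshold(train)        # latches onto sky brightness
train_acc = accuracy(rule, train)  # perfect on the confounded data

# New data without the confound: tanks in sunshine, empty fields at dusk.
test = [(make_image(True, True), 1) for _ in range(100)] + \
       [(make_image(False, False), 0) for _ in range(100)]
test_acc = accuracy(rule, test)    # collapses once the clouds change
```

The noisy tank signal is a worse separator than the spotless confound, so any purely accuracy-driven learner prefers the sky, and nothing in the training metrics warns you.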

Deep Simulation in the Southern Hemisphere

I’m unusually behind in my postings due to travel. I’ve been prepping for and now deep inside a fresh pass through New Zealand after two years away. The complexity of the place seems to have a certain draw for me that has lured me back, yet again, to backcountry tramping amongst the volcanoes and glaciers, and to leisurely beachfront restaurants painted with eruptions of summer flowers fueled by the regular rains.

I recently wrote a technical proposal that rounded up a number of the most recent advances in deep learning neural networks. In each case, as with Google’s transformer architecture, the advance is a modest enhancement based on recognizing a performance deficit in one of the two broad types of networks, recurrent and convolutional.
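The transformer's core departure from recurrence is attention. A minimal sketch of scaled dot-product attention in NumPy, with shapes and random values invented purely for illustration:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal attention: each query position takes a softmax-weighted
    average of the value vectors, weighted by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_queries, n_keys)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))   # 3 query positions, model dimension 8
K = rng.normal(size=(5, 8))   # 5 key/value positions
V = rng.normal(size=(5, 8))
out, w = scaled_dot_product_attention(Q, K, V)
```

Unlike a recurrent cell, every position attends to every other in one parallel step, which is what lets the TPU/GPU-friendly matrix multiplies replace the sequential bottleneck.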

An old question is whether we learn anything about human cognition if we just simulate it using some kind of automatically learning mechanism. That is, if we use a model acquired through some kind of supervised or unsupervised learning, can we say we know anything about the original mind and its processes?

We can at least say that the learning methodology appears to be capable of achieving the technical result we were looking for. But it also might mean something a bit different: that there is not much more interesting going on in the original mind. In this radical corner sits the idea that cognitive processes in people are tactical responses left over from early human evolution. All you can learn from them is that they may be biased and tilted towards that early human condition, but beyond that things just are the way they turned out.

If we take this position, then, we might have to discard certain aspects of the social sciences.… Read the rest

The Universal Roots of Fantasyland

Intellectual history and cultural criticism always teeter on the brink of totalism. So it was when Christopher Hitchens was forced to defend the hyperbolic subtitle of God Is Not Great: How Religion Poisons Everything. The complaint was always the same: everything, really? Or when Neil Postman downplayed the early tremors of the internet in his 1985 Amusing Ourselves to Death. Email couldn’t be anything more than another movement towards entertainment and celebrity. So it is no surprise that Kurt Andersen’s Fantasyland: How America Went Haywire: A 500-Year History is open to similar charges.

Andersen’s thesis is easily digestible: we built a country on fantasies. From the earliest charismatic stirrings of the Puritans to the patent medicines of the 19th century, through to the counterculture of the 1960s, and now with an incoherent insult comedian and showman as president, America has thrived on inventing wild, fantastical narratives that coalesce into movements. Andersen’s detailed analysis is breathtaking as he pulls together everything from linguistic drift to the psychology of magical thinking to justify his thesis.

Yet his thesis might be too narrow. It is not a uniquely American phenomenon. When Andersen mentions cosplay, he fails to identify its Japanese contributions, including the word itself. In the California Gold Rush, he sees economic fantasies driving a generation to unmoor themselves from their merely average lives. Yet the conquistadores had sought to enrich themselves, God, and country while Americans were forming their shining cities on hills. And in mid-19th-century Europe, while the Americans panned in the Sierra, romanticism was throwing off the oppressive yoke of Enlightenment rationality as the West became increasingly exposed to enigmatic Asian cultures. By the 20th century, Weimar Berlin was a hotbed of cultural fantasies that dovetailed with the rise of Nazism and a fantastical theory of race, German volk culture, and Indo-European mysticism.… Read the rest

I, Robot and Us

What happens if artificial intelligence (AI) technologies become significant economic players? The topic has come up in various ways for the past thirty years, perhaps longer. One model, the so-called technological singularity, posits that self-improving machines may be capable of a level of knowledge generation and disruption that will eliminate humans from economic participation. How far out this singularity might be is a matter of speculation, but I have my doubts that we really understand intelligence enough to start worrying about the impacts of such radical change.

Barring something essentially unknowable because we lack sufficient priors to make an informed guess, we can use evidence of the impact of mechanization on certain economic sectors, like agribusiness or transportation manufacturing, to try to plot out how mechanization might impact other sectors. Aghion, Jones, and Jones’ Artificial Intelligence and Economic Growth takes a deep dive into the topic. The math is not particularly hard, though the reasons for many of the equations are tied up in macro and microeconomic theory that requires a specialist’s understanding to fully grok.

Of special interest are the potential limiting roles of inputs and organizational competition. For instance, automation speed-ups may be limited by human limitations within the economic activity. This may extend even further due to fundamental limitations of physics for a given activity. The pointed example is that power plants are limited by thermodynamics; no amount of additional mechanization can change that. Other factors related to inputs or the complexity of a certain stage of production may also drag economic growth to a capped, limiting level.
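That capping behavior falls out of a CES production function when inputs are complements, the Baumol-style bottleneck the paper formalizes. A minimal sketch, with parameter values chosen purely for illustration:

```python
def ces_output(x_auto, x_human, rho=-1.0, share=0.5):
    """CES aggregate of an automatable input and a fixed human input.

    With rho < 0 the inputs are gross complements, so the scarce input
    bottlenecks output no matter how abundant automation becomes."""
    return (share * x_auto**rho + (1.0 - share) * x_human**rho) ** (1.0 / rho)

x_human = 1.0  # the non-automatable input stays fixed
outputs = [ces_output(x, x_human) for x in (1.0, 10.0, 1e3, 1e9)]
# Output rises with automation but approaches a hard cap of
# x_human / (1 - share) = 2.0 rather than growing without bound.
```

Push the automated input a billionfold and output barely moves past the ceiling set by the fixed input, which is the formal version of "the power plant is limited by thermodynamics."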

Organizational competition and intellectual property considerations come into play, as well. While the authors suggest that corporations will remain relevant, they should become more horizontal by eliminating much of the middle tier of management and outsourcing components of their productivity.… Read the rest

Simulator Superputz

The simulation hypothesis is perhaps a bit more interesting than how to add clusters of neural network nodes to do a simple reference resolution task, but it is also less testable. This is the nature of big questions since they would otherwise have been resolved by now. Nevertheless, some theory and experimental analysis have been undertaken for the question of whether or not we are living in a simulation, all based on an assumption that the strangeness of quantum and relativistic realities might be a result of limited computing power in the grand simulator machine. For instance, in a virtual reality game, only the walls that you, as a player, can see need to be calculated and rendered. The other walls that are out of sight exist only as a virtual map in the computer’s memory or persisted to longer-term storage. Likewise, the behavior of virtual microscopic phenomena need not be calculated insofar as the macroscopic results can be rendered, like the fire patterns in a virtual torch.
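The rendering trick game engines use here is just lazy, memoized evaluation: world regions are only computed when an observer looks at them. A toy sketch of the idea, with the region contents derived deterministically from coordinates purely for illustration:

```python
from functools import lru_cache

computed = set()  # track which regions ever consumed any compute

@lru_cache(maxsize=None)
def render_region(x, y):
    """'Render' a region only on first observation, then memoize it."""
    computed.add((x, y))
    return f"region({x},{y}): detail={(x * 31 + y) % 100}"

def observe(player_pos, view_radius=1):
    """Render just the neighborhood the player can actually see."""
    px, py = player_pos
    return [render_region(px + dx, py + dy)
            for dx in range(-view_radius, view_radius + 1)
            for dy in range(-view_radius, view_radius + 1)]

observe((0, 0))  # renders the 3x3 neighborhood around the player;
                 # everything out of sight costs nothing until visited
```

The simulation argument borrows exactly this economy: if our universe ran on such an engine, unobserved microstates would only need to be filled in when measurement forces the issue.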

So one way of explaining physics conundrums like delayed choice quantum erasers, Bell’s inequality, or ER = EPR might be to claim that these sorts of phenomena are the results of a low-fidelity simulation necessitated by the limits of the simulator computer. I think the likelihood that this is true is low, however, because we can imagine that there exists an infinitely large cosmos that merely includes our universe simulation as a mote within it. Low-fidelity simulation constraints might give experimental guidance, but the results could also be supported by just living with the indeterminacy and non-locality as fundamental features of our universe.

It’s worth considering, however, what we should think about the nature of the simulator, given this potentially devious (and poorly coded) little Matrix we find ourselves trapped in.… Read the rest

Ambiguously Slobbering Dogs

I was initially dismissive of this note from Google Research on improving machine translation via deep learning networks by adding in a sentence-level network. My goodness, they’ve rediscovered anaphora and co-reference resolution! Next thing they will try is some kind of network-based slot-filler ontology to carry gender metadata. But their goal was to add a framework to their existing recurrent neural network architecture that would support a weak, sentence-level resolution of translational ambiguities while still allowing the TPU/GPU accelerators they have created to function efficiently. It’s a hack, but one that potentially solves yet another corner of the translation problem and might result in a few percent further improvements in the quality of the translation.

But consider the following sentences:

The dog had the ball. It was covered with slobber.

The dog had the ball. It was thinking about lunch while it played.

In these cases, the anaphora gets resolved by semantics and the resolution seems largely an automatic and subconscious process to us as native speakers. If we had to translate these into a second language, however, we would be able to articulate that there are specific reasons for correctly assigning the “It” to the ball in the first two sentences. Well, it might be possible for the dog to be covered with slobber, but we would guess the sentence writer would intentionally avoid that ambiguity. The second set of sentences could conceivably be ambiguous if, in the broader context, the ball was some intelligent entity controlling the dog. Still, when our guesses are limited to the sentence pairs in isolation we would assign the obvious interpretations. Moreover, we can resolve giant, honking passage-level ambiguities with ease, where the author is showing off in not resolving the co-referents until obscenely late in the text.… Read the rest
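The classical, pre-neural way to articulate those "specific reasons" is selectional preference: score each candidate antecedent by how well its semantic features satisfy what the predicate demands. A toy sketch, with a tiny hand-built feature lexicon standing in for the learned semantics (this is not how Google's system works, just the symbolic version of the same constraint):

```python
# Hand-built feature lexicon and predicate requirements, invented
# purely to illustrate selectional-preference scoring.
FEATURES = {
    "dog":  {"animate", "can_slobber", "can_think"},
    "ball": {"inanimate", "slobber_target"},
}
PREDICATE_NEEDS = {
    "was covered with slobber": {"slobber_target"},
    "was thinking about lunch": {"can_think"},
}

def resolve(pronoun_predicate, candidates):
    """Pick the antecedent whose features best satisfy the predicate."""
    need = PREDICATE_NEEDS[pronoun_predicate]
    return max(candidates, key=lambda c: len(FEATURES[c] & need))
```

Here `resolve("was covered with slobber", ["dog", "ball"])` lands on the ball and the lunch-contemplating "It" lands on the dog. The brittleness is also obvious: the hand-built lexicon can never anticipate the intelligent-ball-controlling-the-dog context, which is precisely why the learned, distributed approach wins at scale.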

Gravity and the Dark Star

I began at 5 AM from the Broomfield Aloft hotel, strategically situated in a sterile “new urban” office park cum apartment complex along the connecting freeway between Denver and Boulder. The whole weekend was fucked in a way: colleges across Colorado were moving in for a Monday start, half of Texas was here already, and most of Colorado planned to head north to the zone of totality. I split off I-25 around Loveland and had success using US 85 northbound through Cheyenne. Continuing up 85 was the original plan, but that fell apart when 85 came to a crawl in the vast prairie lands of Wyoming. I dodged south and east, then, (dodging will be a continuing theme) and entered Nebraska’s panhandle with middling traffic.

I achieved totality on schedule north of Scottsbluff. And it was spectacular. A few fellow adventurers were hanging out along the outflow lane of an RV dump at a state recreation area. One guy flew his drone around a bit. Maybe he wanted B-roll for other purposes. I got out fast, but not fast enough, and dodged my way through lane closures designed to provide access from feeder roads. The Nebraska troopers were great, I should add, always willing to wave to us science and spectacle immigrants. Meanwhile, SiriusXM spewed various Sibelius pieces that had “sun” in their name, while the Grateful Dead channel gave us a half dozen versions of Dark Star, the band’s quintessential jam song, dating to their early, psychedelic era.

Was it worth it? I think so, though one failed dodge that left me in a ten-mile bumper-to-bumper crawl in rural Nebraska with a full bladder tested my faith in the stellar predictability of gravity.… Read the rest