The Hard Problem of the Future

The American zeitgeist is obsessed with decline and a curious sense of ennui. On the progressive left there is the rolling mortal threat of inequality and the destruction of the middle class. Wages don’t keep up with inflation or, more broadly, the cost of living. On the new MAGA right there is an unfocused rage that builds in part on the angst of hollowed-out rural and post-industrial communities, in part on undocumented immigrants cast as scapegoats and symbols of lefty lawlessness, and in part on a tirade against wealthy coastal elites who control the media and the universities and have pushed the Overton window in incremental lurches toward inclusiveness. The populism is mostly half-baked, certainly, and exploited by cynical conservatives to undermine social support while bolstering commercial interests and reducing taxes for the well-to-do. But half-baked is enough for a sensibility; things fully realized are only afterthoughts.

There are other chthonic rumblings and imputations that filter up. The rise of China’s industrial, military, and scientific power is a growing shadow that some see threatening to engulf the world in its umbra. And with it comes the fear of slowing technological might, despite the United States’ domination of the recent technological present. We might be left behind like unhoused, opioid-addicted modern peasants. The crumbling of the cities would be just punishment, even if their loss only compounds the problems of the heartland.

And so as the future keeps getting harder, we turn to mad kings who promise radical change in the face of hard problems. The change can’t possibly be realized, so it is better to just pretend that there are solutions. Annex Greenland, rename the Gulf of Mexico, incorporate Canada, occupy Panama, reach for Mars, acquire territory, all the while cocooned by the complex institutional and international realities that make acting aggressively and alone untenable.…

We Are Weak Chaos

Recent work in deep learning has been largely driven by the capacity of modern computing systems to compute gradient descent over very large networks. We use gaming cards with GPUs, well suited to parallel processing, to perform the matrix multiplications and summations that are the primitive operations central to artificial neural network formalisms. Conceptually, another primary advance is the pre-training of networks as autocorrelators, which smooths later “fine-tuning” over other data. There are additional contributions, notable in impact, that reintroduce the rather old idea of recurrent neural networks: networks whose outputs feed back into their inputs, creating resonant running states within the network. The original motivation for such architectures was to emulate the vast interconnectivity of real neural systems and to capture a more temporal appreciation of data, where past states affect ongoing processing, rather than a pure feed-through architecture. Neural networks are already nonlinear systems, so adding recurrence just ups the complexity of figuring out how to train them. Treating them as black boxes and using evolutionary algorithms was fashionable for me in the ’90s, though the computing capabilities just weren’t up to anything other than small systems, as I found out when chastised for overusing a Cray at Los Alamos.
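The recurrence idea can be sketched in a few lines of Python with NumPy. Everything here is illustrative (the dimensions, weight scales, and names are my own assumptions, not any particular published architecture): the point is only that the hidden state from the previous step is mixed back in with the current input, so past states shape ongoing processing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 3 inputs, 4 hidden units (assumed, not from any paper).
n_in, n_hid = 3, 4
W_in = rng.normal(scale=0.5, size=(n_hid, n_in))    # input -> hidden weights
W_rec = rng.normal(scale=0.5, size=(n_hid, n_hid))  # hidden -> hidden (recurrent) weights
b = np.zeros(n_hid)

def step(x, h_prev):
    """One recurrent update: the new hidden state depends on both the
    current input and the previous hidden state, via the feedback weights."""
    return np.tanh(W_in @ x + W_rec @ h_prev + b)

# Feed a short input sequence through; the state h carries history forward.
xs = [rng.normal(size=n_in) for _ in range(5)]
h = np.zeros(n_hid)
for x in xs:
    h = step(x, h)
print(h.shape)  # (4,)
```

Remove `W_rec` and this collapses to the pure feed-through case the paragraph contrasts against; with it, the same input produces different hidden states depending on what came before, which is exactly what makes training such systems harder.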

But does any of this have anything to do with real brain systems? Perhaps. Here’s Toker et al., “Consciousness is supported by near-critical slow cortical electrodynamics,” in Proceedings of the National Academy of Sciences (with the unenviable acronym PNAS). The researchers and clinicians studied the electrical activity of macaque and human brains in a wide variety of states: epileptics undergoing seizures, macaque monkeys sleeping, people on LSD, people under anesthesia, and people with disorders of consciousness.…