Capping off Friday on the Left Coast with work in Big Data analytics (check out my article mildly crucified by editing in Cloud Computing News), segueing to researching Çatalhöyük, Saturn’s link to the Etruscan Satre, and ending listening to Ravel while reviewing a new cover art option:
Author: Mark Davis
Cover art sample: Against Superheroes
New sample cover art for Against Superheroes, available December 2015:
Evolutionary Optimization and Environmental Coupling
Carl Shulman and Nick Bostrom take up anthropic reasoning in “How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects” (Journal of Consciousness Studies, 2012, 19:7-8), focusing on how arguments that human-level intelligence should be easy to automate rest on a foundation of assumptions about what “easy” means, assumptions distorted by observational bias (we assume we are intelligent, so the emergence of intelligence seems likely).
Yet the analysis of this presumption is blocked by a prior consideration: given that we are intelligent, we should be able to achieve artificial, simulated intelligence. If this is not, in fact, true, then determining whether the assumption of our own intelligence being highly probable is warranted becomes irrelevant, because we may not be able to demonstrate that artificial intelligence is achievable anyway. On this point, the authors are dismissive of any requirement for simulating the environment against which organisms and species are optimized:
In the limiting case, if complete microphysical accuracy were insisted upon, the computational requirements would balloon to utterly infeasible proportions. However, such extreme pessimism seems unlikely to be well founded; it seems unlikely that the best environment for evolving intelligence is one that mimics nature as closely as possible. It is, on the contrary, plausible that it would be more efficient to use an artificial selection environment, one quite unlike that of our ancestors, an environment specifically designed to promote adaptations that increase the type of intelligence we are seeking to evolve (say, abstract reasoning and general problem-solving skills as opposed to maximally fast instinctual reactions or a highly optimized visual system).
Why is this “unlikely”? The argument is that there are classes of mental function that can be compartmentalized away from the broader, known evolutionary provocateurs.… Read the rest
The Rise and Triumph of the Bayesian Toolshed
In Asimov’s Foundation, psychohistory is the mathematical treatment of history, sociology, and psychology to predict the future of human populations. Asimov was inspired by Gibbon’s Decline and Fall of the Roman Empire, which postulated that Roman society was weakened by Christianity’s focus on the afterlife and its lack of the pagan attachment to Rome as an ideal that needed defending. Psychohistory detects seeds of ideas and social movements that are predictive of the end of the galactic empire, creating foundations to preserve human knowledge against a coming Dark Age.
Applying statistics and mathematical analysis to human choices is a core feature of economics, but Richard Carrier’s massive tome, On the Historicity of Jesus: Why We Might Have Reason for Doubt, may be one of the first comprehensive applications to historical analysis (following his other related work). Amusingly, Carrier’s thesis dovetails with Gibbon’s own suggestion, though there is a certain irony to a civilization dying because of a fictional being.
Carrier’s methods use Bayesian analysis to approach a complex historical problem that has a remarkably impoverished collection of source material. First-century A.D. (C.E. if you like; I agree with Carrier that any baggage about the convention is irrelevant) sources are simply non-existent or sufficiently contradictory that the background knowledge of paradoxography (tall tales), rampant messianism, and the general political happenings of the time leads to a likelihood that Jesus was made up. Carrier constructs the argument around equivalence classes of prior events that then reduce or strengthen the evidential materials (a posteriori). And he does this without ablating the richness of the background information. Indeed, his presentation and analysis of works like Inanna’s Descent into the Underworld and its relationship to the Ascension of Isaiah are both didactic and beautiful in capturing the way ancient minds seem to have worked.… Read the rest
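The mechanics of this kind of updating are simple enough to sketch. Here is a toy illustration of Bayesian updating on successive bodies of evidence; the prior and likelihood numbers are entirely hypothetical, not Carrier’s actual estimates, and serve only to show the shape of the calculation:

```python
# Toy illustration of Bayesian updating on historical evidence.
# All numeric values below are invented for illustration.
def posterior_probability(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(h|e) = P(e|h)P(h) / (P(e|h)P(h) + P(e|~h)P(~h))."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

# Start from a prior set by a reference class of comparable figures,
# then update on each body of evidence in turn (likelihoods invented).
p = 0.33
evidence = [(0.5, 0.8), (0.9, 0.6)]  # (P(e|hypothesis), P(e|alternative)) pairs
for p_e_h, p_e_not_h in evidence:
    p = posterior_probability(p, p_e_h, p_e_not_h)
print(round(p, 3))
```

The point of the equivalence-class approach is that each likelihood pair is estimated against a background class of comparable cases rather than judged in isolation.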
Active Deep Learning
Deep Learning methods that use auto-associative neural networks for pre-training (with bottlenecking to ensure generalization) have recently been shown to perform as well as, and sometimes better than, human beings at certain tasks like image categorization. But what is missing from the proposed methods? There seems to be a range of challenges revolving around temporal novelty and sequential activation/classification problems like those that occur in natural language understanding. The most recent achievements are oriented around relatively static data presentations.
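The auto-associative idea itself is compact: train a network to reproduce its own input through a hidden layer narrower than the input, so that it must learn a compressed code. A minimal sketch, with all sizes, data, and the learning rate being arbitrary toy choices:

```python
import numpy as np

# Minimal auto-associative (autoencoder) network with a bottleneck:
# the hidden layer is smaller than the input, forcing a compressed,
# generalizing representation. Toy data and dimensions throughout.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))             # toy data: 200 samples, 16 features
W1 = rng.normal(scale=0.1, size=(16, 4))   # encoder: 16 -> 4 (the bottleneck)
W2 = rng.normal(scale=0.1, size=(4, 16))   # decoder: 4 -> 16
lr = 0.01

def reconstruction_loss():
    return np.mean((np.tanh(X @ W1) @ W2 - X) ** 2)

init_loss = reconstruction_loss()
for _ in range(500):
    H = np.tanh(X @ W1)                    # bottleneck code
    err = H @ W2 - X                       # reconstruction error
    dW2 = H.T @ err / len(X)               # gradient w.r.t. decoder weights
    dH = err @ W2.T * (1.0 - H ** 2)       # backprop through tanh
    dW1 = X.T @ dH / len(X)                # gradient w.r.t. encoder weights
    W1 -= lr * dW1
    W2 -= lr * dW2

final_loss = reconstruction_loss()
```

In deep pre-training, stacks of such layers are trained one at a time and the learned codes become the input to the next layer, before fine-tuning on the supervised task.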
Jürgen Schmidhuber revisits the history of connectionist research (dating to the 1800s!) in his October 2014 technical report, Deep Learning in Neural Networks: An Overview. This is one comprehensive effort at documenting the history of this reinvigorated area of AI research. What is old is new again, enhanced by achievements in computing that allow for larger and larger scale simulation.
The conclusions section has an interesting suggestion: what is missing so far is the sensorimotor activity loop that allows for active interrogation of the data source. Human vision roams over images, while DL systems ingest the entire scene at once. Real neural systems also operate under energy constraints that suppress neural function away from the active neural clusters.
The Great Crustacean
David Foster Wallace’s Joseph Frank’s Dostoevsky in Consider the Lobster is worth reading if for nothing else than the following two paragraphs:
The big thing that makes Dostoevsky invaluable for American readers and writers is that he appears to possess degrees of passion, conviction, and engagement with deep moral issues that we—here, today—cannot or do not permit ourselves. Joseph Frank does an admirable job of tracing out the interplay of factors that made this engagement possible—[Dostoevsky]’s own beliefs and talents, the ideological and aesthetic climates of his day, etc. Upon his finishing Frank’s books, though, I think that any serious American reader/writer will find himself driven to think hard about what exactly it is that makes many of the novelists of our own place and time look so thematically shallow and lightweight, so morally impoverished, in comparison to Gogol or Dostoevsky (or even to lesser lights like Lermontov and Turgenev). Frank’s bio prompts us to ask ourselves why we seem to require of our art an ironic distance from deep convictions or desperate questions, so that contemporary writers have to either make jokes of them or else try to work them in under cover of some formal trick like intertextual quotation or incongruous juxtaposition, sticking the really urgent stuff inside asterisks as part of some multivalent defamiliarization-flourish or some such shit.
Part of the explanation for our own lit’s thematic poverty obviously includes our century and situation. The good old modernists, among their other accomplishments, elevated aesthetics to the level of ethics—maybe even metaphysics—and Serious Novels after Joyce tend to be valued and studied mainly for their formal ingenuity. Such is the modernist legacy that we now presume as a matter of course that “serious” literature will be aesthetically distanced from real lived life.
On Killing Kids
Mark S. Smith’s The Early History of God is a remarkable piece of scholarship. I was recently asked what I read for fun and had to admit that I have been on a trajectory towards reading books that have, on average, more footnotes than text. J.P. Mallory’s In Search of the Indo-Europeans kindly moves the notes to the end of the volume. Smith’s Chapter 5, Yahwistic Cult Practices, and particularly Section 3, The mlk sacrifice, are illuminating on the widespread belief that killing children could propitiate the gods. This practice was likely common among the Western Semitic peoples, including the Israelites and Canaanites (Smith prefers “Western Semitic” to lump the two together ca. 1200 BC because they appear to have been culturally the same, possibly becoming distinct only after the compilation of the OT following the Exile).
I recently argued with some young street preachers about violence and horror done in Yahweh’s name and by His command while waiting outside a rock shop in Old Sacramento. Human sacrifice came up, too, with the apologetics being that, although everyone was bad back then, the Chosen People did not perform human sacrifice and were therefore marginally better than the other peoples around them. They passed quickly over the topic of slavery, which was wise for rhetorical purposes, because slavery was widespread and acceptable. I didn’t remember the particulars of the examples of human sacrifice in the OT, but recalled them broadly; they responded that there were translation and interpretation errors with “burnt offering” and “fire offerings of first borns” that, of course, immediately contradicted their assertion of the accuracy and perfection of the scriptures.
More interesting, though, is the question of why human sacrifice might be so pervasive, whether among Yahwists and Carthaginians or Aztecs.… Read the rest
STEM Scholarships for Young Scholars
I’m pleased to announce the availability of the James Davis and Wirt Atmar Memorial Scholarship at New Mexico State University. My wife and I are providing full scholarships for undergraduate and graduate students in STEM (Science, Technology, Engineering, and Mathematics) who are residents of New Mexico or El Paso County, Texas, for the Spring 2015 semester and beyond.
Dr. James Davis (Jim), my birth father, received his Ph.D. in Electrical Engineering and Astrophysics from the University of Wisconsin–Madison in 1969. His involvement in gamma ray astronomy led him to take a professorship at New Mexico State in 1973 after post-docs at Oregon State, the University of Colorado, and the Naval Observatory in Washington, D.C. At NMSU, he met the unusual character Wirt Atmar (Sc.D. 1976, Electrical Engineering and Biology), who was involved in early work on evolutionary simulation and later developed new models for thinking about the evolution of sex as well as species nesting using information theory. When Jim became ill and later succumbed to an unknown kidney disorder following a transplant, Wirt and his wife (Ph.D., biochemistry) became a new family for me, and I spent my teen years in an elaborate bohemian world of academic and computer technologies, merged. Wirt passed in 2009 after an unexpected heart attack associated with other medical problems.
We hope that new and continuing students will benefit from this scholarship (six should be awarded each year initially), and Jim and Wirt’s commitment to science and technology will impact a new generation of students.… Read the rest
Inequality and Big Data Revolutions
I had some interesting new talking points in my Rock Stars of Big Data talk this week. On the same day, MIT Technology Review published Technology and Inequality by David Rotman, which surveys the link between a growing wealth divide and technological change. Part of my motivating argument for Big Data, borrowed from Paul Krugman of Nobel Prize and New York Times fame, is that intelligent systems are likely the next industrial revolution. Krugman builds on Robert Gordon’s analysis of past industrial revolutions, which reached some dire conclusions about slowing economic growth in America. The consequences of intelligent systems for everyday life will be enormous and will disrupt everything from low-wage workers through to knowledge workers. And how does Big Data lead to that disruption?
Krugman’s optimism is built on the presumption that the brittleness of intelligent systems so far can be overcome with more and more data. There are some examples where we are seeing incremental improvements due to data volumes. For instance, having larger sample corpora for modeling spoken language enhances automatic speech recognition. Google Translate builds on work that I had the privilege to be involved with in the 1990s, which used “parallel texts” (essentially line-by-line translations) to build automatic translation systems based on phrasal lookup. The more examples of how things are translated, the better the system gets. But what else improves with Big Data? Maybe instrumenting many cars and crowdsourcing driving behaviors through city streets would provide the best data-driven approach to self-driving cars. Maybe instrumenting individuals will help us with some of the things we do effortlessly that are strangely difficult to automate, like folding towels and understanding complex visual scenes.
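The parallel-text idea can be sketched in a few lines. The toy below counts which target words co-occur with each source word across aligned sentence pairs and translates word-by-word using the best-scoring pairing; the sentence pairs are invented, and real systems align multi-word phrases and add language models on top:

```python
from collections import Counter, defaultdict

# Toy phrasal-lookup translation from "parallel texts": invented
# sentence pairs, word-level co-occurrence counts only.
parallel = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the dog eats", "le chien mange"),
    ("a cat eats", "un chat mange"),
]

cooc = defaultdict(Counter)   # source word -> counts of co-occurring target words
tgt_totals = Counter()        # overall target-word frequencies
for src, tgt in parallel:
    tgt_words = tgt.split()
    tgt_totals.update(tgt_words)
    for s in src.split():
        for t in tgt_words:
            cooc[s][t] += 1

def translate_word(word):
    """Pick the target word with the highest smoothed co-occurrence ratio."""
    if word not in cooc:
        return word  # pass unknown words through untranslated
    # Add-one smoothing in the denominator down-weights rare target words.
    return max(cooc[word], key=lambda t: cooc[word][t] / (tgt_totals[t] + 1))

print([translate_word(w) for w in "the cat sleeps".split()])
```

Even at this scale the data-volume effect is visible: each added sentence pair sharpens the co-occurrence ratios and disambiguates more word pairings.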
But regardless of the methods, the consequences need to be considered.… Read the rest
Rocking Big Data in San Jose Today
Catch me at IEEE’s Rock Stars of Big Data Analytics today. My talk is at 3:50 in the San Jose Civic Auditorium. Hmmm, music as semi-structured data…… Read the rest