The Rubbery Road from Original Position to Metaphysical Naturalism

From complaints about student protests over Israel in Gaza, to the morality of new House Speaker Johnson, and even to the reality and consequences of economic inequality, there is a dynamic conversation in the media over what is morally right and, importantly, why it should be considered right. It’s perfectly normal for those discussions and considered monologues to present ideas, make cases, and weigh the consequences for American life, power, and the well-being of people around the world. That also demonstrates why ideas like divine command theory become irrelevant for most if not all of these discussions: they still require secular analysis and resolution. Contributions from the Abrahamic faiths (and similarly from Hindu nationalism) are largely objectionable moral ideas (“The Chosen People,” jihad, anti-woman doctrines, etc.) that are inherently preferential and exclusionary.

Indeed, this public dialogue perhaps best shows how modern people build ethical systems. It looks mostly like Rawls’s concept of “reflective equilibrium,” with dashes of utilitarianism and occasional influences from religious tradition and sentiment. And reflective equilibrium has few foundational ideas beyond a basic commitment to justice as fairness, using the “original position” as its starting point. That is, if we had to design a society with no advance knowledge of what our role and position within it might be (a veil of ignorance), our best choice would be to create an equal, fair, and just society.
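As a rough illustration of that original-position reasoning, here is a minimal sketch in Python. The two toy societies and their payoff numbers are invented for illustration, and the maximin rule stands in for Rawls’s argument that a chooser behind the veil of ignorance guards against ending up in the worst-off position.

```python
# A hedged sketch of the original-position argument: an agent must pick a
# society's income distribution before knowing which slot it will occupy.
# Under a veil of ignorance, Rawlsian caution (judge by the worst case)
# favors the more equal society. Societies and payoffs are invented.
societies = {
    "unequal": [1, 2, 5, 40],   # a few do very well, one does badly
    "equal":   [10, 11, 12, 13],
}

def worst_case(payoffs):
    """Maximin rule: judge a society by its worst-off position."""
    return min(payoffs)

best = max(societies, key=lambda name: worst_case(societies[name]))
print(best)  # "equal" -- the safer choice when you don't know your slot
```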

So ethics is cognitively rubbery, with changing attachments and valences as we process options into a coherent whole. We might justify civilian deaths for a greater good when we have few options, imprecise weapons, and existential fear (say, the atom bomb in World War II).…

One Shot, Few Shot, Radical Shot

Exunoplura is back up after a sad excursion through the challenges of hosting providers. To be blunt, they mostly suck. Between systems that just don’t work right (SSL certificate provisioning in this case) and support experiences that range from bad to counterproductive, it’s enough to make one want to host the site oneself. But hosting is mostly, as they say of war, long boring stretches punctuated by moments of terror as things go frustratingly sideways. In any case, we are back up again after two hosting-provider side-trips!

Honestly, I’d like to see an AI agent effectively navigate these technological challenges. When even human performance is halting and imperfect, the notion that an AI could learn to deal with the uncertain corners of the process strikes me as currently unthinkable. But there are some interesting recent developments worth noting and discussing on the journey toward what is called “general AI”: a framework as flexible as people are, rather than narrowly tied to a specific task like visually inspecting welds or answering a few questions about weather, music, and so forth.

First, there is the work by the OpenAI folks on massive language models being tested against one-shot or few-shot learning problems. In these problems, the number of presentations of the training cases is limited, rather than presenting huge numbers of exemplars and “fine-tuning” the model’s responses. What is a language model? Well, it varies across approaches, but it is typically a collection of weighted word contexts of varying length, with the weights reflecting the probabilities of words appearing in those contexts across a massive collection of text corpora. For the OpenAI model, GPT-3, the total number of parameters (the learned weights that encode those word-and-context relationships) is an astonishing 175 billion, trained on some 45 TB of text.…
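As a rough illustration of the kind of weighted word-context model described above, here is a minimal sketch in Python: a bigram table whose weights are just conditional probabilities estimated from a tiny invented corpus. GPT-3’s 175 billion parameters are learned neural-network weights rather than literal counts, so this is only a toy analogue of the underlying idea, not of GPT-3 itself.

```python
# A toy "language model" in the sense described above: for each one-word
# context, store the probability of each word that follows it, estimated
# from counts over a (here, tiny and invented) corpus.
from collections import Counter, defaultdict

def train_bigram_model(tokens):
    """Count how often each word follows each single-word context."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    # Convert raw counts into conditional probabilities P(next | context).
    model = {}
    for context, nexts in counts.items():
        total = sum(nexts.values())
        model[context] = {w: c / total for w, c in nexts.items()}
    return model

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram_model(corpus)
print(model["the"])  # {'cat': 0.666..., 'mat': 0.333...}
```

A real large language model replaces this literal lookup table with learned weights over much longer contexts, which is what lets a prompt containing only one or a few worked examples steer its behavior without any fine-tuning.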