Vaguely Effective Altruism

In “Killing John Galt” in my new collection, Entanglements, the first-person narrator muses:

Reaching the moon was easy but conquering poverty was impossible. Watching Sonya’s animated hopefulness—perfection!—almost made me want to call Winborn and recommend that he just pay more taxes with the same money. Let the organizations and bureaucrats build institutions that could chisel away at the edifice, slowly and steadily; look at giant statistical outcomes to guide changes in policy over time. It would convert the problem from individual heroism into a technocratic game. I could play that game, running regression models and factor analytic comparisons to tease out what was and what was not working effectively. Social change then became policy management.

With the plunge of Effective Altruism (EA) from its hubristic trajectory across the sun of cryptocurrency, how and why wealthy people should do good has become a renewed topic of discussion. At the New York Times, for instance, we have the regularly vague Ross Douthat complaining that if every oil magnate wants their money applied just to saving kids from malaria, we will have fewer quaint state parks. Perhaps more interesting at the same publication is Ezra Klein’s discussion of the goals and limits of EA, as well as the philosophical underpinnings of the movement. There is plenty of room for a spectrum of responses to the basic problem of how to give away money, but the key concept of “effectiveness” is what forces EA and EA-adjacent proponents to analyze their approaches and goals for making the world a better place. Historically, much large-scale giving was intended to create a legacy for industrialist families (Carnegie Mellon University, the Rockefeller Foundation, … ahem, the Sackler Institute and related organizations). The name was attached to an institution created to carry forward the general advance of the arts, sciences, learning, and even charitable giving. In EA, however, existential threats that can be partially quantified, and the number of lives that can be saved per invested dollar, are more pressing than this vague, general approach.

The submovement within EA concerned with existential threats has a large following terrified of self-modifying artificial intelligence. There are research groups that build mathematical models to quantify the risk and mechanized rules to stop AIs from killing us all and taking over the universe. I am largely dismissive of this branch of the movement, since I have yet to see anything that looks like artificial general intelligence. Given that we have no idea how actual human intelligence works, there is a very long road ahead of us before that particular threat deserves much concern. Weaponized biomachines and global climate catastrophe are perhaps more pressing.

But let’s consider a value system somewhat more aligned with the institutional giving arm. If we accept that the modern world is a truly remarkable place where life expectancy has grown fantastically, global poverty has been reduced remarkably, war and violence are rarer than at almost any point in history, and massive populations of our fellows live lives that are more self-determined and free than ever before—that is, if we are passionately optimistic about what the historical record shows—then we should support enhancing the fundamental drivers that got us to this point, in the hope that more now translates into better tomorrows. This clearly includes saving lives and improving drinking water around the world, but it also requires more investment in essential undertakings in science, the arts, and the humanities.

Can we somehow measure the potential of a positive scientific advance against the lives of children at EA’s atomized analytical level? For instance, could we argue that diverting part of the money going to mosquito netting into curing malaria would be a better investment because more lives would ultimately be saved? We certainly can, but it does drift into a distasteful coldness about individual suffering, and it relies on substituting possible future outcomes for immediate realities. Still, we could develop a calculus that suggests how to subdivide the giving pot, balancing discounted future outcomes against current needs; a toy version is sketched below. One way to estimate the future outcomes is to look at the rate of scientific advances in a given field and use it as a template for what might come. We can’t be certain the way we can about the reduction in here-and-now deaths, but we can be sure that knowledge advances got us here and are key to moving forward, whether in creating or reducing new existential risks or in eliminating current barriers to human flourishing.
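As a thought experiment only, here is a minimal sketch, in Python, of how such a split might be computed. Every number in it is an invented assumption (a hypothetical $10M pot, an illustrative cost per life for direct aid, a made-up research success curve, a 3% discount rate on future lives), not real charity data; the point is the shape of the tradeoff, not the answer.

```python
import math

def discounted_payoff(annual_lives, start_year, end_year, rate):
    """Present value, in lives, of annual_lives saved per year over
    [start_year, end_year), discounted at `rate` per year."""
    return sum(annual_lives / (1 + rate) ** t
               for t in range(start_year, end_year))

def research_lives(dollars, scale, payoff_lives):
    """Expected discounted lives from research funding, assuming
    diminishing returns: success probability 1 - exp(-dollars/scale)."""
    return (1 - math.exp(-dollars / scale)) * payoff_lives

def direct_lives(dollars, cost_per_life):
    """Lives saved immediately by direct aid at a fixed cost per life."""
    return dollars / cost_per_life

# --- illustrative assumptions, not real charity data ---
POT = 10_000_000        # total giving pot, dollars
COST_PER_LIFE = 5500    # direct aid: dollars per life saved now
PAYOFF = discounted_payoff(annual_lives=2000, start_year=10,
                           end_year=40, rate=0.03)
SCALE = 2_000_000       # research dollars for ~63% odds of a cure

# grid-search the split that maximizes total expected lives
best_split, best_lives = max(
    ((r, research_lives(r, SCALE, PAYOFF) +
          direct_lives(POT - r, COST_PER_LIFE))
     for r in range(0, POT + 1, 100_000)),
    key=lambda pair: pair[1])

print(f"put ${best_split:,} into research and ${POT - best_split:,} "
      f"into direct aid, for ~{best_lives:,.0f} expected discounted lives")
```

The one substantive design choice is giving research diminishing returns (the 1 − exp(−dollars/scale) curve). With two linear options, the best split is always all-or-nothing; the concavity is what lets the search settle on an interior balance between future and present lives.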

So give vaguely this holiday season. Whether you support the arts, educational institutions, research and development, or highly targeted EA charitable organizations, you are enhancing the future.
