The last link provided in the previous post leads down a rabbit hole. The author translates a Chinese report and then renders the data as geospatial visualizations and pie charts, sure, but he also begins very rapidly to layer on his ideological biases. He is part of the “AltRight” movement with a focus on human biodiversity. The memes of the AltRight are largely racially charged, if not outright racist, defined around an interpretation of Darwinism that anoints difference and worships a kind of biological determinism. The thought cycles are large, elliptical constructs that play with sociobiology and evolutionary psychology to explain why inequities exist in the human world. Fair enough, though we can quibble over whether any scientisms rise far enough out of the dark waters of data to move policy more than a hair either way. And we can also argue against the interpretations of biology that nurtured the claims, especially the ever-present meme that inter-human competition is discernible as Darwinian at all. That is the realm of the Social Darwinists and Fascists, and the realm of evil given the most basic assumptions about others. It also begs for an explanation of cooperation at a higher level than the superficial observation that kin selection might have a role in primitive tribal behavior. To be fair, of course, it has parallels in attempts to tie Freudian roles to capitalism and desire, or in the deeper contours of Marxist ideology.
But this war of ideologies, of intellectual histories, of grasping at ever-deeper ways of reinterpreting the goals and desires of political actors, might be coming to an end in a kind of bloodless, technocratic way. Specifically, surveillance, monitoring, and data analysis can potentially erode the theologies of policy into refined understandings of how groups react to changes in laws, regulations, incentives, taxes, and entitlements.
How will this work?
Let’s take gerrymandering as an example. Redistricting is, at bottom, an uncomplicated competition for power, and the mapmaking itself can be handled fairly easily by algorithms that remove human decision making from the process (see the Wikipedia article on gerrymandering for examples of splitline algorithms and isoperimetric quotients). A similar approach that uses experimentation and non-ideological mechanisms can be applied to many (though not all) divisive political problems:
- Global warming controversial? Apply cap-and-trade or other CO2 reductions at half the strength that proponents argue is optimal. Surveil outcomes and establish a decision criterion for next steps.
- Health care reform unappetizing? Create smaller-scale laboratories to identify what works and what doesn’t (say, like Massachusetts). Identify the social goods and bads and expand where appropriate.
- Welfare systems under the microscope? Reform and reimplement using state and community block grants to test alternative ways of solving the problem. Leave existing system intact until the data is in.
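Returning to the gerrymandering example: the isoperimetric quotient (often called the Polsby-Popper score) is one of the simplest non-ideological compactness measures, and it takes only a few lines to compute. A minimal sketch in Python, where the district polygons are made-up illustrations rather than real map data:

```python
import math

def polygon_area_perimeter(points):
    """Shoelace area and perimeter of a simple polygon given as (x, y) vertices."""
    area = 0.0
    perim = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1            # shoelace cross-term
        perim += math.hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0, perim

def isoperimetric_quotient(points):
    """4*pi*A / P^2: equals 1.0 for a circle, and falls toward 0
    for the sprawling, tentacled shapes typical of gerrymanders."""
    area, perim = polygon_area_perimeter(points)
    return 4 * math.pi * area / perim ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]          # compact district
sliver = [(0, 0), (10, 0), (10, 0.1), (0, 0.1)]    # long, thin district
print(round(isoperimetric_quotient(square), 3))    # 0.785 (= pi/4)
print(round(isoperimetric_quotient(sliver), 4))    # 0.0308
```

A splitline-style procedure would then draw the district lines, and a score like this one would audit the result, with no human hand on the map.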
Behavioral economics somewhat foreshadows this future of outcome- and data-driven policy development. I’ve coined the term “technonomy” based on Pittendrigh’s notion of “teleonomy” to capture this idea of basing policy decisions on experimental and data-driven methodologies, and to distinguish it from technocracy (it also appears to have other meanings already that involve “synergies” and other vacuous crap). If the AltRight want to deny a specific government action based on racial theories, or if the Very Left want to spend more to correct for a perceived injustice, or if Libertarians want a return to a gold standard, then all that is required is for the groups to design a policy laboratory that controls the variables of interest well enough that the theory can be tested. It will require enormous creativity that goes beyond conspiracy theories and mere kvetching, but would certainly be more informative than the current guerrilla wars of partisan intellectual rage.
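The "decision criterion" such a policy laboratory needs can be as plain as a pre-registered statistical test on the measured outcomes. A minimal sketch, assuming a pilot program in some regions and matched control regions; the outcome numbers here are entirely invented for illustration:

```python
import random

def permutation_test(treatment, control, trials=10000, seed=0):
    """Two-sample permutation test: p-value for the observed difference
    in means under the null that group labels are exchangeable."""
    rng = random.Random(seed)
    observed = sum(treatment) / len(treatment) - sum(control) / len(control)
    pooled = list(treatment) + list(control)
    n = len(treatment)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n)
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / trials

# Hypothetical outcome metric (say, change in per-capita emissions)
# in pilot regions versus control regions.
pilot   = [-4.1, -3.8, -5.0, -2.9, -4.4]
control = [-0.6, -1.2, 0.3, -0.9, -0.2]
p = permutation_test(pilot, control)
# Expand the policy only if the effect clears the pre-set criterion.
print(p < 0.05)
```

The criterion (here, p < 0.05) is agreed on before the experiment runs, which is exactly what moves the argument out of the realm of ideology and into the realm of measurement.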