Continuing with this theme of an ethics of emergence, can we formulate something interesting that does better than merely asserting that freedom and coordination are inherent virtues in this new scheme? And what does that mean anyway, in the dirty details? We certainly see natural, emergent systems that exhibit tight regulatory control, where stability, equilibrium, and homeostasis prevent dissipation, like those hoped-for fascist organismic states. There is not much that is free about these lower-level systems, but we think that though they are necessary, they are insufficient for the higher-order challenges of a statistically uncertain world. And that uncertainty is what drives the emergence of control systems in the first place. The control breaks out at some level, though, in a kind of teleomatic inspiration, and applies stochastic exploration of the adaptive landscape. Freedom then arises as an additional control level, itself emergent.
We also have this lurking possibility that emergent systems may not be explainable in the manner we have come to expect scientific theories to work. Being highly contingent, they may only be explained by recounting the specifics of their contingent emergence, not by the elegant little explanatory theories we now have in fields like physics. Stephen Wolfram, and the Santa Fe Institute researchers as well, have investigated this idea, but it has so far remained inconclusive in its predictive power, though that may be changing.
There is an interesting alternative use for deep learning models and, more generally, for enormous simulation systems: when emergent complexity is daunting, use simulation to uncover the spectrum of relationships that govern complex system behavior.
Can we apply that to this ethics or virtue system and gain insights from it? Perhaps. One way this might happen is by creating simulations of human interactions at sufficient granularity that, when we inject specific changes in choices or attitudes, we can examine millions of possible futures and gather bundles of outcomes that show differences in how people thrive and in how we relate to our environments. This is like Asimov's psychohistory, but quite different in how it works: it is not a mathematical or analytical insight. It is ensembles of potentialities that can be scored in terms of outcomes.
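To make the idea concrete, here is a minimal sketch of what scoring ensembles of potentialities might look like. Everything in it is a hypothetical toy: agents with a single "cooperativeness" propensity interact in many stochastic rollouts, an injected attitude change shifts that propensity, and each future is scored by mean welfare penalized by inequality. The model, the parameters, and the scoring rule are all illustrative assumptions, not a real social simulation.

```python
import random
import statistics

def simulate_future(n_agents, coop_rate, steps, rng):
    """One stochastic rollout (toy model): pairs of agents meet; with
    probability coop_rate they cooperate (both gain), otherwise one
    gains at the other's expense."""
    wealth = [1.0] * n_agents
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)
        if rng.random() < coop_rate:
            wealth[i] += 1.0
            wealth[j] += 1.0
        else:
            wealth[i] += 1.5
            wealth[j] -= 0.5
    return wealth

def score(wealth):
    """Score a single future: mean welfare penalized by inequality
    (population standard deviation as a crude inequality measure)."""
    return statistics.mean(wealth) - statistics.pstdev(wealth)

def ensemble(coop_rate, runs=500, seed=0):
    """Gather a bundle of scored futures under one attitude setting."""
    rng = random.Random(seed)
    return [score(simulate_future(50, coop_rate, 500, rng))
            for _ in range(runs)]

baseline = ensemble(coop_rate=0.5)      # world as it is
shifted = ensemble(coop_rate=0.7)       # injected attitude change
print(f"baseline: {statistics.mean(baseline):.2f}  "
      f"shifted: {statistics.mean(shifted):.2f}")
```

The point is not any single rollout but the comparison of score distributions across the two bundles; that comparison, rather than an analytical formula, is what stands in for psychohistory here.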
There is a weird uncertainty built into these kinds of systems, though. I have previously argued, in the context of the "simulation hypothesis," that to reach the accuracy needed not merely to hallucinate about future outcomes like some deep learning system, but to be a good enough simulation that its predictions are in fact predictive, we need both to unwind all the contingency that has been built into us here and now and to compute down through all the layers that determine our choices. We likely don't need to reductively bottom out at something like quantum indeterminacy, but we do need to reach the layer that is invariant with respect to the simulation.
And the unlikelihood of achieving that fidelity may simply be a blocker. The predictive indeterminacy keeps the future inherently uncertain, forever blurred and out of reach, and our ethics has to fall back on the nostrums of freedom and coordination, raw and semantically mutable, always mentionable but rarely agreed to in fact if not in principle.