Causally Emergent vs. Divine Spark Murder Otherwises

One might claim that a metaphysical commitment to strong determinism is porous only to quantum indeterminacy and atomic indeterminacy (radioactive decay behavior, for instance). The two can be lumped together and simply called subatomic indeterminacy. Everything else is conceptually derivative of deterministic state evolution. So does that mean my model for R fails unless I can invoke one of these two candidates? My suggestion of amplifying thermodynamic noise doesn't really cut the mustard (an amusing semantic drift from "pass muster," perhaps) because such noise only appears random, being characterizable solely by macroscopic variables like pressure and temperature; the underlying molecular swirl is not actually random.

But I can substitute an atomic decay counter for my thermodynamic amplifier, or use a quantum random number generator based on laser measurements of vacuum fluctuations. There, I've righted the ship, though I've jettisoned my previous claim that randomness is not necessary for R's otherwises. Now it is necessary, but it is still not sufficient: we also need a device like the generative subsystem that uses the randomness in a non-arbitrary way to revise decisions. We do encounter a difficulty in porting subatomic indeterminacy into a human analog, of course, though some have given it a try.
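Here is a minimal sketch of that division of labor in Python, assuming nothing more than the outline above: the secrets module stands in for a hardware entropy source (a decay counter or vacuum-fluctuation QRNG), while a deterministic evaluator, my stand-in for the generative subsystem, decides whether any random proposal actually displaces the current decision.

```python
import secrets

def propose(options):
    # Stand-in for a hardware indeterminacy source (decay counter or
    # vacuum-fluctuation QRNG); the OS entropy pool is only a surrogate.
    return options[secrets.randbelow(len(options))]

def revise(current, options, score, attempts=100):
    # Non-arbitrary use of randomness: indeterminacy generates the
    # otherwises, but a deterministic evaluator decides whether any
    # proposal actually replaces the current decision.
    best = current
    for _ in range(attempts):
        candidate = propose(options)
        if score(candidate) > score(best):
            best = candidate
    return best
```

The random source supplies genuine otherwises; the evaluator keeps their use non-arbitrary.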

But there is some new mathematics of causal emergence that fits well with my model. In causal emergence, ideas like necessity and sufficiency in causal explanations can be shown to have properties at the macroscale that are not present at microscales. The model used is a simple Markov chain that flips between two states, and information theory is applied to examine a range of conceptual structures for causation, running from David Hume's train of repeating objects (when one damn thing comes after another, and then again and again, we may have a cause) up through David Lewis's notion of counterfactuals in alternative probabilistic universes (could it have happened that way in all possible worlds?)…
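The measure at the heart of this work (I am assuming the effective-information formalism here) can be computed directly for toy systems. In the sketch below the transition matrices are my own illustrative choices, not drawn from the papers: a four-state microscale chain whose return transitions are noisy coarse-grains into a deterministic two-state flip, and the macroscale explanation scores strictly higher.

```python
import math

def effective_information(tpm):
    """Effective information of a Markov chain: intervene uniformly on
    all states and measure the mutual information between cause and
    effect -- the average KL divergence of each row from the mean row."""
    n = len(tpm)
    mean_row = [sum(row[j] for row in tpm) / n for j in range(n)]
    ei = 0.0
    for row in tpm:
        for p, q in zip(row, mean_row):
            if p > 0:
                ei += p * math.log2(p / q) / n
    return ei

# Microscale: states 0-2 jump deterministically to state 3, while
# state 3 returns to a uniformly random member of {0, 1, 2}.
micro = [
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [1/3, 1/3, 1/3, 0],
]

# Macroscale: coarse-grain {0, 1, 2} -> A and {3} -> B. The macro
# chain deterministically flips A <-> B.
macro = [
    [0, 1],
    [1, 0],
]

print(f"EI at microscale: {effective_information(micro):.3f} bits")  # ~0.811
print(f"EI at macroscale: {effective_information(macro):.3f} bits")  # 1.000
```

The coarse-grained description carries more causal information than the microscale one, which is the signature of causal emergence.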

Free Will and Algorithmic Information Theory

I was recently looking for examples of applications of algorithmic information theory, also commonly called algorithmic information complexity (AIC). After all, it is one thing for a theory to be sound; when it is sound and useful, it moves to another level. So, first, let's review the broad outline of AIC. AIC begins with the problem of randomness, specifically random strings of 0s and 1s. Given any sort of encoding in any base, a string of characters can readily be reduced to a binary sequence; likewise integers.
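A throwaway illustration in Python:

```python
# Any character string is already a binary sequence once encoded,
# and integers reduce to binary just as readily.
bits = "".join(f"{byte:08b}" for byte in "cause".encode("utf-8"))
print(bits)       # 0110001101100001011101010111001101100101
print(f"{42:b}")  # 101010
```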

Now, AIC observes that there are often many Turing machines that could generate a given string and, since those machines can themselves be represented as bit sequences, there is at least one machine with the shortest bit sequence that still produces the target string. If that shortest machine is as long as the string itself, or a bit longer (allowing for some machine-encoding overhead), then the string is said to be AIC random. In other words, no compression of the string is possible.
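A real compressor can only give an upper bound on the length of the shortest generator, but that bound is enough to see the contrast. A sketch using Python's zlib:

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    # Length of a zlib encoding: a computable upper bound on the
    # (uncomputable) shortest-program length for the data.
    return len(zlib.compress(data, 9))

patterned = b"01" * 500   # 1,000 bytes with an obvious short generator
noisy = os.urandom(1000)  # 1,000 bytes of OS entropy

print(compressed_size(patterned))  # a few dozen bytes: the pattern compresses away
print(compressed_size(noisy))      # ~1,000 bytes or more: effectively AIC random
```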

Moreover, we can generalize this generator-machine idea: given some set of strings that represent the data of a given phenomenon (natural occurrences, say), the smallest generator machine that covers all the data is a "theoretical model" of the data and the underlying phenomenon. An interesting outcome of this theory is that there is provably no algorithm (or meta-machine) that can find the smallest generator for an arbitrary sequence. This is related to the undecidability of the halting problem and, through Chaitin's work, to Gödel incompleteness: if such a meta-machine existed, a short program could use it to hunt for "the first string whose smallest generator is longer than this program," a contradiction in the style of the Berry paradox.

In terms of applications, Gregory Chaitin, one of the originators of the core ideas of AIC, has proposed that the theory sheds light on questions of meta-mathematics, and specifically that it demonstrates that mathematics is a quasi-empirical pursuit capable of producing new methods, rather than being idealistically derived from analytic first principles…