Causally Emergent vs. Divine Spark Murder Otherwises

One might claim that a metaphysical commitment to strong determinism is porous only to quantum indeterminacy or atomic indeterminacy (radioactive decay behavior, for instance). Those two can be lumped together and simply called subatomic indeterminacy. Everything else is conceptually derivative of state evolution and therefore deterministic. So does that mean that my model for R fails unless I can invoke these two candidates? My suggestion of amplifying thermodynamic noise doesn’t really cut the mustard (an amusing semantic drift from “pass muster,” perhaps) because the noise only appears random, being characterizable solely by macroscopic variables like pressure and temperature, while the underlying molecular swirl is not actually random at all.

But I can substitute an atomic decay counter for my thermodynamic amplifier, or use a quantum random number generator based on laser measurements of vacuum fluctuations. There, I’ve righted the ship, though I’ve jettisoned my previous claim that randomness is not necessary for R’s otherwises. Now it is necessary, but it is still not sufficient, because we also need a device like the generative subsystem that uses randomness in a non-arbitrary way to revise decisions. We do encounter a difficulty in porting subatomic indeterminacy into a human analog, of course, though some have given it a try.

But there is some new mathematics for causal emergence that fits well with my model. In causal emergence, ideas like necessity and sufficiency for causal explanations can be shown to have properties in macroscale explanations that are not present at microscales. The model used is a simple Markov chain that flips between two states, and information theory is applied to examine a range of conceptual structures for causation, running from David Hume’s train of repeating objects (when one damn thing comes after another, and then again and again, we may have a cause) up through David Lewis’s notion of counterfactuals in alternative probabilistic universes (could it have happened that way in all possible worlds?),… Read the rest
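The macro-beats-micro claim can be made concrete with the causal-emergence literature’s “effective information” measure: the mutual information between a system’s state and its next state when the current state is set by a uniform intervention. The sketch below is my own minimal rendering of that idea (the function name and the example matrices are illustrative choices, not taken from the paper):

```python
import numpy as np

def effective_information(T):
    """Effective information of a Markov chain with transition matrix T:
    mutual information between X_t and X_t+1 under a uniform intervention
    on X_t. Equivalently, the mean KL divergence (in bits) of each row
    from the average row."""
    T = np.asarray(T, dtype=float)
    n = T.shape[0]
    avg = T.mean(axis=0)  # effect distribution under the uniform intervention
    ei = 0.0
    for row in T:
        mask = row > 0
        ei += np.sum(row[mask] * np.log2(row[mask] / avg[mask])) / n
    return ei

# Noisy 4-state microscale: every state transitions indifferently
micro = np.array([[0.25, 0.25, 0.25, 0.25]] * 4)
# Deterministic 2-state macroscale obtained by grouping microstates
macro = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

print(effective_information(micro))  # 0.0 bits: no causal constraint
print(effective_information(macro))  # 1.0 bit: the macro does more causal work
```

The deterministic two-state flipper carries a full bit of causal constraint, while the noisy microscale it coarse-grains carries none, which is the sense in which the macroscale explanation does causal work the microscale lacks.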

Uncertainty, Murder, and Emergent Free Will

I’ll jump directly into my main argument without stating more than the basic premise: if determinism holds, none of our actions could have been otherwise and there is no “libertarian” free will.

Let’s construct a robot (R) that has a decision-making apparatus (DM), some sensors (S) for collecting impressions about our world, and a memory (M) of all those impressions and of DM’s past decisions. DM is pretty much an IF-THEN arrangement but has a unique feature: it has subroutines that generate new IF-THENs by taking existing rules and randomly recombining them, with variation. This might be done by simply snipping rules apart at their logical operations (“blue AND wings AND small => bluejay at 75%” can be pulled apart into “blue AND wings” and “wings AND small,” and those two recombined with other such fragments). This generative subroutine (GS) then scores the novel IF-THENs by comparing them to the recorded history contained in M as well as to current sensory impressions, and keeps the new rule that scores best, or the top few if they score closely. The scoring methodology might combine coverage of, and fidelity to, the remembered impressions and/or recalled action/impression pairs.
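The GS loop above can be sketched in a few dozen lines. This is a toy rendering under my own representational assumptions (rules as feature-set/conclusion pairs, a coverage-times-fidelity score), not a definitive implementation:

```python
import random

# A rule maps a conjunction of features to a conclusion, e.g.
# (frozenset({"blue", "wings", "small"}), "bluejay").

def snip(conds):
    """Pull a conjunction apart at its AND joints into overlapping pairs."""
    c = sorted(conds)
    return [frozenset(c[i:i + 2]) for i in range(len(c) - 1)] or [frozenset(c)]

def score(rule, memory):
    """Coverage x fidelity against remembered (features, outcome) impressions."""
    conds, concl = rule
    matches = [(f, o) for f, o in memory if conds <= f]
    if not matches:
        return 0.0
    coverage = len(matches) / len(memory)
    fidelity = sum(o == concl for _, o in matches) / len(matches)
    return coverage * fidelity

def generate(rules, memory, keep=3, trials=50, rng=random):
    """GS: randomly recombine snipped fragments of existing rules,
    score the novel rules against memory M, and keep the top few."""
    fragments = [(frag, concl) for conds, concl in rules for frag in snip(conds)]
    candidates = set()
    for _ in range(trials):
        (f1, c1), (f2, _) = rng.sample(fragments, 2)
        candidates.add((f1 | f2, c1))  # recombined conjunction, parent's conclusion
    ranked = sorted(candidates, key=lambda r: score(r, memory), reverse=True)
    return ranked[:keep]
```

Swapping `rng.sample` for an exhaustive pass over `itertools.combinations(fragments, 2)` recovers the fully deterministic variant that enumerates and scores every possible reorganization.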

Now this is all quite deterministic. I mentioned randomness, but we can produce pseudo-random number generators that are good enough, or even rely on a small electronic circuit that amplifies thermodynamic noise to get something “truly” random. But really we could just substitute an algorithm that checks every possible reorganization and scores them all, shelving the randomness component altogether and alleviating any concern that we are smuggling randomness into our later construct of free agency.

Now let’s add a rule to DM that when R perceives it has been treated unfairly it might murder the human being who treated it that way.… Read the rest

Bereitschaftspotential and the Rehabilitation of Free Will

The question of whether we, as people, have free will or not is both abstract and occasionally deeply relevant. We certainly act as if we have something like libertarian free will, and we have built entire systems of justice around this idea, where people are responsible for choices they make that result in harms to others. But that may be somewhat illusory for several reasons. First, if we take a hard deterministic view of the universe as a clockwork-like collection of physical interactions, our wills are just a mindless outcome of a calculation of sorts, driven by a wetware calculator with a state completely determined by molecular history. Second, there has been, until very recently, some experimental evidence that our decision-making occurs before we achieve a conscious realization of the decision itself.

But this latter claim appears to be without merit, as reported in this Atlantic article. Instead, what was previously believed to be signals of brain activity that were related to choice (Bereitschaftspotential) may just be associated with general waves of neural activity. The new experimental evidence puts the timing of action in line with conscious awareness of the decision. More experimental work is needed—as always—but the tentative result suggests a more tightly coupled pairing of conscious awareness with decision making.

Indeed, this newer experimental result gets closer to my suggested model of how modular systems, combined with perceptual and environmental uncertainty, can produce what is effectively free will (or at least a functional model for a compatibilist position). Jettisoning the Chaitin-Kolmogorov complexity part of that argument and focusing just on the minimal requirements for decision making in the face of uncertainty, we know we need a thresholding apparatus that fires various responses given a multivariate statistical topology.… Read the rest
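A minimal version of such a thresholding apparatus is easy to state: each candidate response projects the multivariate evidence onto a weight vector and fires when its threshold is crossed. Everything here (function name, weights, thresholds) is an illustrative sketch of the idea, not a claim about neural implementation:

```python
import numpy as np

def threshold_fire(evidence, weights, thresholds):
    """A minimal thresholding apparatus: each candidate response has a
    weight vector over a multivariate evidence space and fires when its
    weighted evidence crosses its threshold; the strongest firing
    response wins. Noisy, near-threshold evidence is where perceptual
    uncertainty enters the decision."""
    activations = weights @ evidence                  # project evidence onto responses
    fired = np.flatnonzero(activations >= thresholds)
    if fired.size == 0:
        return None                                   # withhold action; keep accumulating
    return int(fired[np.argmax(activations[fired])])  # index of winning response

# Three candidate responses over a 2-D evidence space (numbers illustrative)
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.7, 0.7]])
theta = np.array([0.8, 0.8, 1.0])

print(threshold_fire(np.array([0.9, 0.1]), W, theta))  # 0: first response fires
print(threshold_fire(np.array([0.3, 0.2]), W, theta))  # None: below every threshold
```

With evidence hovering near the thresholds, tiny perturbations in the input flip which response fires first, which is the uncertainty-driven looseness the argument leans on.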

Free Will and Algorithmic Information Theory

I was recently looking for examples of applications of algorithmic information theory, also commonly called algorithmic information complexity (AIC). After all, for a theory to be sound is one thing, but when it is sound and valuable it moves to another level. So, first, let’s review the broad outline of AIC. AIC begins with the problem of randomness, specifically random strings of 0s and 1s. We can readily see that given any sort of encoding in any base, strings of characters can be reduced to a binary sequence. Likewise integers.
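The reduction to binary is mechanical, and a two-line sketch makes it concrete (the helper name is mine):

```python
def to_bits(s: str) -> str:
    """Any character string reduces to a binary sequence via its byte encoding."""
    return "".join(f"{b:08b}" for b in s.encode("utf-8"))

print(to_bits("AIC"))  # 010000010100100101000011
print(f"{42:b}")       # 101010: integers reduce to binary likewise
```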

Now, AIC states that there are often many Turing machines that could generate a given string and, since we can represent those machines also as a bit sequence, there is at least one machine that has the shortest bit sequence while still producing the target string. In fact, if the shortest machine is as long as the string itself or a bit longer (given some machine encoding overhead), then the string is said to be AIC-random. In other words, no compression of the string is possible.
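True AIC is uncomputable, but an off-the-shelf compressor gives a crude, computable upper bound that illustrates the incompressibility criterion. Here zlib stands in for the shortest generator machine; this is only a proxy for the real measure:

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of a zlib-compressed encoding: a crude, computable upper
    bound on the (uncomputable) algorithmic information content."""
    return len(zlib.compress(data, level=9))

patterned = b"01" * 5000        # a tiny "machine" (repeat '01') generates this
random_ish = os.urandom(10000)  # high-entropy bytes: nearly incompressible

print(compressed_size(patterned))   # a few dozen bytes: highly compressible
print(compressed_size(random_ish))  # close to (or above) 10000: AIC-random-like
```

The patterned string collapses to a description not much longer than “repeat ‘01’ five thousand times,” while the high-entropy string resists any shortening, which is exactly the AIC-random condition.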

Moreover, we can generalize this generator-machine idea to claim that, given some set of strings that represent the data of a given phenomenon (let’s say natural occurrences), the smallest generator machine that covers all the data is a “theoretical model” of the data and the underlying phenomenon. An interesting outcome of this theory is that it can be shown that there is, in fact, no algorithm (or meta-machine) that can find the smallest generator for an arbitrary sequence. This is related to the undecidability of Turing’s halting problem.

In terms of applications, Gregory Chaitin, one of the originators of the core ideas of AIC, has proposed that the theory sheds light on questions of meta-mathematics, and specifically that it demonstrates that mathematics is a quasi-empirical pursuit capable of producing new methods rather than one idealistically derived from analytic first principles.… Read the rest