Causally Emergent vs. Divine Spark Murder Otherwises

One might claim that a metaphysical commitment to strong determinism is only porous to quantum indeterminacy or atomic indeterminacy (decay behavior, for instance). Those two can be lumped together and simply called subatomic indeterminacy. Everything else is conceptually derivative of state evolution and therefore deterministic. So does that mean that my model for R fails unless I can invoke one of these two candidates? My suggestion of amplifying thermodynamic noise doesn’t really cut the mustard (an amusing semantic drift from “pass muster,” perhaps), because the noise only appears random when characterized solely by macroscopic variables like pressure and temperature; it is not actually random down in the molecular swirl.

But I can substitute an atomic decay counter for my thermodynamic amplifier, or use a quantum random number generator based on laser measurements of vacuum fluctuations. There, I’ve righted the ship, though I’ve jettisoned my previous claim that randomness is not necessary for R’s otherwises. Now it is necessary, but it is still not sufficient: we also need a device like the generative subsystem that uses the randomness in a non-arbitrary way to revise decisions. We do encounter a difficulty in porting subatomic indeterminacy into a human analog, of course, though some have given it a try.
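To make that division of labor concrete, here is a minimal sketch in which an entropy source proposes alternatives and a deterministic evaluator decides whether to adopt them. The candidate list and scoring function are placeholders of my own, and Python’s secrets module stands in for an atomic decay counter or quantum generator; this illustrates the pattern, not the model itself.

```python
import secrets  # OS entropy: a software stand-in for a decay counter or QRNG


def revise(current, candidates, score):
    """One revision step: the indeterministic source proposes an
    alternative; a deterministic evaluator decides whether to adopt it.
    Randomness supplies the otherwise; the evaluator keeps its use
    non-arbitrary."""
    proposal = candidates[secrets.randbelow(len(candidates))]
    return proposal if score(proposal) >= score(current) else current


# Toy usage: string length as a placeholder utility function.
options = ["stay", "swerve left", "swerve right"]
choice = "stay"
for _ in range(10):
    choice = revise(choice, options, score=len)
print(choice)  # converges toward the highest-scoring option
```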

But there is some new mathematics for causal emergence that fits well with my model. In causal emergence, ideas like necessity and sufficiency for causal explanations can be shown to have properties in macroscale explanations that are not present at microscales. The model used is a simple Markov chain that flips between two states, and information theory is applied to examine a range of conceptual structures for causation, running from David Hume’s train of repeating objects (when one damn thing comes after another, and then again and again, we may have a cause) up through David Lewis’s notion of counterfactuals in alternative probabilistic universes (could it have happened that way in all possible worlds?), …
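For readers who want the information-theoretic side made explicit, here is a minimal sketch of effective information, the measure used in the causal emergence literature: the mutual information between a uniform intervention distribution over states and the resulting next-state distribution. The transition matrices are my own toy example, chosen so that coarse-graining four micro states into two macro states raises the effective information.

```python
import numpy as np


def effective_information(tpm):
    """EI of a Markov chain: mutual information between a uniform
    intervention distribution over states and the next-state distribution."""
    n = tpm.shape[0]
    avg = tpm.mean(axis=0)  # effect distribution under uniform interventions
    kl_terms = [
        np.sum(row[row > 0] * np.log2(row[row > 0] / avg[row > 0]))
        for row in tpm
    ]
    return sum(kl_terms) / n


# Micro scale: states 0-2 hop uniformly among themselves; state 3 is absorbing.
micro = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Macro scale: group {0, 1, 2} into A and {3} into B; both self-loop.
macro = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
])

print(effective_information(micro))  # ~0.81 bits
print(effective_information(macro))  # 1.0 bits: the macroscale explanation wins
```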

Evolving Ought Beyond Is

The is-ought barrier has been a regularly visited topic since its initial formulation by Hume. It certainly seems unassailable, in that a syllogism designed to claim that what ought to be done is predicated on what is (observable and natural) must always fail. The reason is that the ought framework (call it ethics) can be formulated in any particular way to ascribe the good and the bad. A serial killer might believe that killing certain people eliminates demons from the world and is therefore good, regardless of a general prohibition that killing others is bad. In this case, we might argue that the killer is simply mistaken in her beliefs and that a lack of accurate information is misguiding her. But even the claim that there is an “is” in this case (killing people results in a worse society, people are entitled to be free from murder, etc.) doesn’t really stay on the factual side of the barrier. The is evaporates into an ought at the very outset.

There are efforts to establish some type of naturalistic underpinning for moral reasoning, like Sam Harris’s The Moral Landscape, which postulates an adaptive topography where the consequences of individual and group actions result in improvements or harm to humanity as a whole. The end result is a kind of abstract consequentialism beneath local observables, animated by some brain science. Here’s an example: (1) better knowledge of the biological origins of disease can result in behavior that reduces disease harm; (2) it is therefore moral to improve education about biology; (3) disease harm is reduced, resulting in reduced suffering. This doesn’t quite make it across the barrier, though, because it presupposes an ought for humans that overrides the imperatives of the disease itself (what about its thriving?), …