In the modern American political climate, I constantly find myself at sea trying to unravel the motivations and thought processes of the Republican Party. The best summation I can arrive at involves the obvious manipulation of the electorate (hardly new in itself) combined with a persistent avoidance of evidence and facts.
In my day job, I research a range of topics, trying to get a firm enough grasp on what we do and do not know to form a plan that innovates from the known facts toward the unknown. Here are a few recent investigations:
- What is the state of thinking about the origins of logic? Logical rules fall into broad classes that range from the uncontroversial (modus tollens, propositional logic, predicate calculus) to the speculative (multivalued and fuzzy logic, or quantum logic, for instance). In most cases we assume, on the strength of linguistic convention, that they are true and then demonstrate their extensions, despite the observation that they are tautological (a quick brute-force check of that claim for modus tollens appears after this list). Synthetic knowledge has no similar limitation but is assumed to be girded by the logical basics.
- What were the early Christian heresies, how did they arise, and what was their influence? Marcion of Sinope is perhaps the most interesting of these, in parallel with the Gnostics, asserting that the cruel tribal god of the Old Testament was distinct from the New Testament Father, and proclaiming perhaps (see various discussions) a docetic Jesus figure. The leading “mythicists” like Robert Price are invaluable in this analysis (ignore the first 15 minutes of nonsense). The thin braid of early Christian history, and the constant humanity that shows in the morphing of the faith before it settled down after Nicaea (well, and then after Martin Luther), remind us that abstractions and faith have a remarkable persistence in the face of cultural change.
- How do mathematical machines take on so many forms while achieving the same abstract goals? Machine learning, as a reification of human-like learning processes, can imitate neural networks (or rather an extreme sketch and caricature of what we know about real neural systems), or can be just a parameter-slicing machine like Support Vector Machines or ID3, or can be a Bayesian network or a mixture model of parameters. We call them generative or non-generative, we categorize them by discrete or continuous decision surfaces, and we label them in a range of useful ways. But why should they all achieve similar outcomes with similar ranges of error? A small empirical comparison after this list makes the point. Indeed, Random Forests were the belle of the ball until Deep Learning took the tiara.
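To make the "tautological" observation in the first bullet concrete, here is a minimal sketch in Python (the helper name `implies` is mine, purely for illustration): modus tollens comes out true under every possible truth assignment, which is exactly what separates it from any synthetic claim about the world.

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

# Modus tollens says: from (P -> Q) and not-Q, conclude not-P.
# Checking every truth assignment shows the whole conditional is a tautology:
# it never comes out false, regardless of what P and Q actually are.
for p, q in product([True, False], repeat=2):
    premise = implies(p, q) and (not q)
    assert implies(premise, not p), (p, q)

print("modus tollens holds under all four truth assignments")
```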
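And to put a toy number on the last bullet's puzzle, the sketch below (assuming scikit-learn is available; the synthetic dataset and hyperparameters are arbitrary choices, not a benchmark) runs four very different machines over the same task and prints their cross-validated accuracies, which tend to land within a few points of one another.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# A synthetic binary classification task; the parameters are arbitrary.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)

# Very different families of "mathematical machine": a kernel method, a
# decision tree (in the ID3 lineage), an ensemble of trees, and a small
# neural network.
models = {
    "SVM (RBF kernel)": SVC(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Neural net (MLP)": MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000,
                                      random_state=0),
}

# Despite their different inner workings, the cross-validated accuracies
# usually cluster within a few points of one another on a task like this.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:20s} mean accuracy = {scores.mean():.3f}")
```

None of this explains why the outcomes converge, of course; it only makes the convergence easy to see for yourself.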
In each case, I try to work my way, as carefully as possible, through the thicket of historical and intellectual concerns that provide point and counterpoint to the ideas. It feels ethically wrong to make a short, fast judgment about any such topics. I can’t imagine doing anything less with a topic as fraught as the US health care system. It’s complex, indeed, Mr. President.
So, I tracked down a foundational paper on this intersection of ethics and epistemology. William Clifford's 1877 essay, The Ethics of Belief, provides a grounding for why and when we should believe anything, tracking multiple lines of argumentation and the consequences of believing on insufficient evidence. Even tentative belief comes with moral risk, as Clifford shows in his thought experiments (most famously, the shipowner who talks himself into trusting an unseaworthy ship).
In summary, though, there is no more important statement than Clifford's concluding assertion that it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence. It's that simple. And it's even more wrong to act on those beliefs.
Thanks for turning me on to Clifford’s paper. 🙂
de nada… it’s less dated than it could be.