The Pregnant Machinery of Metaphor

Sylvia Plath has some thoughts about pregnancy in “Metaphors”:

I’m a riddle in nine syllables,
An elephant, a ponderous house,
A melon strolling on two tendrils.
O red fruit, ivory, fine timbers!
This loaf’s big with its yeasty rising.
Money’s new-minted in this fat purse.
I’m a means, a stage, a cow in calf.
I’ve eaten a bag of green apples,
Boarded the train there’s no getting off.

It seems, at first blush, that metaphors have some creative substitutive similarity to the concepts they are replacing. We can imagine Plath laboring over this child in nine lines, fitting the pieces together, cutting out syllables like dangling umbilicals, finding each part in a bulging conception, until it was finally born, alive and kicking.

OK, sorry, I’ve gone too far, fallen over a cliff, tumbled down through a ravine, dropped into the foaming sea, where I now bob, like your uncle. Stop that!

Let’s assume that much human creativity occurs through a process of metaphor or analogy making. This certainly seems to be the case in aspects of physics when dealing with difficult-to-understand new realms of micro- and macroscopic phenomena, as I’ve noted here. Some creative fields claim a similar basis for their work, with poetry being explicit about the hidden or secret meaning of poems. Moreover, I will suppose that a similar system operates in creating the networks of semantics by which we understand the meaning of words and their relationships to phenomena external to us, as well as to our own ideas. In other words, semantics are a creative puzzle for us.

What do we know about this system and how can we create abstract machines that implement aspects of it?… Read the rest

See my setup in Cult of Mac!

Just noticed that Cult of Mac used my compute and sound rigs in their Setups category.

Check it out, here!

A few added notes/corrections to CoM bit:

  1. Raspberry Pi (not Pie).
  2. Topping D90 MQA DAC, which adds hardware MQA decoding. MQA is largely considered a “snake oil” standard for high quality digital audio, but I of course had to try it.
  3. A DAC (digital-to-analog converter) does not “ultra-filter.” A DAC just converts digitally-encoded music information into analog for playback. At its best, a DAC makes no changes to the encoded signal. The Topping D90 MQA is at the high end of the best measuring DACs, meaning that it preserves the original signal extremely well. Eh, there is a roll-off filter that keeps stray high frequencies out, but that’s just standard stuff.
  4. Timing of my analytics platform sale was garbled.
  5. I don’t do Zoom, but do use the webcam for Skype, Slack, and FaceTime.
Read the rest

Architects, Farmers, and Patterns

The distinction between writing code and writing prose is not as great as some might imagine. I recently read an article that compared novelists’ metaphors concerning writing. The distinctions included the “architect” who meticulously plans out the structure of the novel. Plots, characters, sets, chapter structure—everything—are diagrammed and refined prior to beginning writing. All that remains is word choice, dialogue, and the gritty details of putting it all on a page. Compare this to the “farmer” approach where a seed is planted in the form of a key idea or plot development. The writer begins with that seed and nurtures it in a continuous process of development. When the tree grows lopsided, there is pruning. When a branch withers, there is watering and attention. The balanced whole builds organically and the architecture is an emergent property.

Coding is similar. We generally know the architecture in advance, though there are exceptions in greenfield work. Full-stack development involves decoupled database back ends, front-end load balancers and servers, and middleware of some stripe. Machine learning involves data acquisition, cleaning, training, and evaluation. User experience components rely on “patterns” or mini-architectures like Model-View-Controller, and similar ideas pop up in the depths of the model and controller: “factory” patterns that produce objects, flyweights, adapters, iterators, and so forth. In the modern world of agile methodologies, the day-to-day development of code is driven by “stories,” short descriptions of the goals and outcomes of the coding, which draws us back to the analogy with prose development. The patterns are little different from choosing dialogue or epistolary approaches to convey parts of a tale.
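To make the pattern talk a bit more concrete, here is a minimal factory sketch in Python. The names (View, ChartView, TableView, view_factory) are hypothetical, just enough to show a caller asking for an object without naming its concrete class:

```python
from abc import ABC, abstractmethod


class View(ABC):
    """Abstract product: some user-facing component."""

    @abstractmethod
    def render(self) -> str:
        ...


class ChartView(View):
    def render(self) -> str:
        return "<chart/>"


class TableView(View):
    def render(self) -> str:
        return "<table/>"


def view_factory(kind: str) -> View:
    """Factory: callers ask for a kind of view without naming the concrete class."""
    registry = {"chart": ChartView, "table": TableView}
    if kind not in registry:
        raise ValueError(f"unknown view kind: {kind}")
    return registry[kind]()


# The calling code reads the same whichever concrete view it gets back.
print(view_factory("chart").render())  # <chart/>
```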

I do all of the above when it comes to writing code or novels.… Read the rest

One Shot, Few Shot, Radical Shot

Exunoplura is back up after a sad excursion through the challenges of hosting providers. To be blunt, they mostly suck. Between systems that just don’t work right (SSL certificate provisioning in this case) and bad-to-counterproductive support experiences, it’s enough to make one want to host it oneself. But hosting is mostly, as they say of war, long boring periods punctuated by moments of terror as things go frustratingly sideways. In any case, we are back up again after two hosting-provider side trips!

Honestly, I’d like to see an AI agent effectively navigate through these technological challenges. Where even human performance is fleeting and imperfect, the notion that an AI could learn how to deal with the uncertain corners of the process strikes me as currently unthinkable. But there are some interesting recent developments worth noting and discussing in the journey towards what is called “general AI”: a framework that is as flexible as people can be, rather than one narrowly tied to a specific task like visually inspecting welds or answering a few questions about weather, music, and so forth.

First, there is the work by the OpenAI folks on massive language models being tested against one-shot or few-shot learning problems. In each of these learning problems, the number of presentations of the training cases is limited, rather than presenting huge numbers of exemplars and “fine-tuning” the response of the model. What is a language model? Well, it varies across different approaches, but it is typically a weighted context of words of varying length, with the weights reflecting the probabilities of those words in those contexts over a massive collection of text corpora. For the OpenAI model, GPT-3, the total number of parameters (the learned weights over words and their contexts) is an astonishing 175 billion, trained on some 45 TB of text.… Read the rest
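For a flavor of what “probabilities of those words in those contexts” means in the simplest possible case, here is a toy bigram model in Python. It is nothing like GPT-3’s architecture or scale, just the counting idea behind a weighted context of words; the corpus is invented:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each one-word context.
context_counts = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    context_counts[prev][word] += 1


def p(word: str, context: str) -> float:
    """P(word | context) estimated from raw counts (no smoothing)."""
    counts = context_counts[context]
    total = sum(counts.values())
    return counts[word] / total if total else 0.0


print(p("sat", "cat"))  # 1.0: "cat" is always followed by "sat" in this corpus
print(p("cat", "the"))  # 0.25: "the" precedes cat, mat, dog, and rug equally
```

Scale the corpus up by a few billion words, lengthen the contexts, and replace the raw counts with learned weights, and you have the rough shape of the thing.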

Ensembles Against Abominables

It seems obvious to me that when we face existential threats we should make the best possible decisions. I do this with respect to investment decisions, as well. I don’t rely on “guts” or feelings or luck or hope or faith or hunches or trends. All of those ideas are proxies for some sense of incompleteness in our understanding of probabilities and future outcomes.

So how can we cope with those kinds of uncertainties given existential threats? The core methodology is based on ensembles of predictions. We don’t actually want to trust an expert per se, but want instead to trust a basket of expert opinions—an ensemble of predictions. Ideally, those experts who have been more effective in the past should be given greater weight than those who have made poorer predictions. We most certainly should not rely on gut calls by abominable narcissists in what Chauncey Devega at Salon disturbingly characterizes as a “pathological kakistocracy.”
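A minimal sketch of that kind of performance-weighted basket, with every number invented purely for illustration:

```python
# Each expert's forecast for some quantity, plus a track-record weight
# (say, proportional to inverse past error). All values are made up.
forecasts = {"expert_a": 2.1, "expert_b": 3.0, "expert_c": 2.4}
weights = {"expert_a": 0.5, "expert_b": 0.2, "expert_c": 0.3}

# The ensemble prediction is the weighted average of the basket of opinions.
ensemble = sum(weights[e] * forecasts[e] for e in forecasts) / sum(weights.values())
print(ensemble)  # 2.37
```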

Investment decision-making takes exactly this form when carried out rationally. Index funds adjust their security holdings in relation to an index like the S&P 500. Because stock markets have risen since their inceptions, with setbacks along the way of course, an index is a reliable ensemble approach to growth. Ensembles smooth predictions and smooth out brittleness.

Ensemble methods are also core to predictive improvements in machine learning. While a single decision tree trained on data may overweight portions of the data set, an ensemble of trees (which we call a forest, of course) smooths the decision-making by having each tree contribute only a part of the final vote for a prediction. The training of the individual trees is based on a randomized subset of the data, allowing for specialization of stands of trees while preserving the overall effectiveness of the system.… Read the rest
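For the forest-of-trees version, here is a quick sketch using scikit-learn (assuming it is installed); the data are synthetic stand-ins:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data standing in for real observations.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each of the 100 trees is trained on a bootstrap sample of the training
# data; the forest's prediction is a vote across the trees.
forest = RandomForestClassifier(n_estimators=100, bootstrap=True, random_state=0)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))  # held-out accuracy of the ensemble vote
```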

A Personal Computing Revolution

I’m writing this on a 2018 iPad Pro (11” with 512GB storage and LTE). I’m also using an external Apple Magic Keyboard 2 and Magic Trackpad 2. The iPad is plugged into an LG USB-C monitor at my sit-stand desk overlooking a forested canyon in Sedona. And it is, well, almost perfect. Almost, but there are remaining limitations (I’ll get to them), though they are well-balanced by the capabilities and I suspect will be remedied soon.

Overall, though, it feels like a compute revolution where a small, extremely light (1 pound or so) device is all I need to occupy much of my day. I’ll point out that I am not by nature an Apple fanboi. I have an HP laptop that dual-boots Ubuntu Linux and Windows, in addition to a MacBook Pro with Parallels hosting two Linux distributions for testing and continuing education purposes. I know I can live my online life in Chrome on Linux well enough, using Microsoft Office 365, Google Mail, 1Password, Qobuz, Netflix, etc., while still being able to build enterprise and startup software ecosystems via the Eclipse IDE, Java/J2EE, Python, MySQL, AWS, Azure, etc. Did I forget anyone in there? Oh, of course there are Bitbucket, Git, Maven, Confluence, and all those helpers. All are just about perfect on Linux once you fight your way through the package managers and the occasional consult of Stack Overflow. I think I first installed Linux on a laptop in 1993, and it remains not for the weak of geek, but it is constantly improving.

But what are the positives of the iPad Pro? First is the lightness and more-than-sufficient power. Photo editing via Affinity Photo is actually faster than on my MacBook Pro (2016), and video editing works well, though without quite the professional completeness of Final Cut.… Read the rest

Forever Uncanny

Quanta has a fair roundup of recent advances in deep learning. Most interesting is the recent performance on natural language understanding tests, which comes close to or exceeds mean human performance. Inevitably, John Searle’s Chinese Room argument is brought up, though the author of the Quanta article suggests that inferring the Chinese translational rule book from the data itself is slightly different from the original thought experiment. In the Chinese Room there is a person who knows no Chinese but has a collection of translational reference books. She receives texts through a slot, dutifully looks up the translation of the text, and passes out the result. “Is this intelligence?” is the question, and it serves as a challenge to the Strong AI hypothesis. With statistical machine translation methods (and their alternative mechanistic implementation, deep learning), the rule books have been inferred by looking at translated texts (“parallel” texts, as we say in the field). By looking at a large enough corpus of parallel texts, greater coverage of translated variants is achieved, as well as some inference of pragmatic issues and corner cases in translation.
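Real systems use alignment models (and now deep networks), but a toy co-occurrence table over a few invented sentence pairs gives the flavor of inferring the rule book from parallel texts:

```python
from collections import Counter, defaultdict

# A tiny invented parallel corpus of (source, target) sentence pairs.
parallel = [
    ("the house", "la maison"),
    ("the cat", "le chat"),
    ("the small house", "la petite maison"),
]

# Count which target words co-occur with each source word.
cooc = defaultdict(Counter)
for src, tgt in parallel:
    for s in src.split():
        for t in tgt.split():
            cooc[s][t] += 1

print(dict(cooc["house"]))  # {'la': 2, 'maison': 2, 'petite': 1}
# Raw co-occurrence can't yet separate "maison" from the article "la";
# classic alignment models (IBM Model 1 and friends) use EM to do exactly that.
```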

As a practical matter, it should be noted that modern, professional translators often use translation memory systems that contain idiomatic—or just challenging—phrases that they can reference when translating new texts. The understanding resides in the original translator’s head, we suppose, and in the correct application of the rule to the new text by checking for applicability according to, well, some other criteria that the translator brings to bear on the task.

In the General Language Understanding Evaluation (GLUE) tests described in the Quanta article, the systems are inferring how to answer Wh-style queries (who, what, where, when, and how) as well as how to identify similar texts.… Read the rest

Deep Learning with Quantum Decoherence

Getting back to metaphors in science, Wojciech Zurek’s so-called Quantum Darwinism is in the news due to a series of experimental tests. In Quantum Darwinism (QD), the collapse of the wave function (more properly, the “extinction” of states) is a result of decoherence from environmental entanglement. There is a kind of replication in QD, where pointer states are multiplied, and then a kind of environmental selection as well. There is no variation per se, however, though some might argue that the pointer states imprinted by the environment are variants of the originals. That makes the metaphor a bit thin at the edges, but it is close enough for the core idea to fit most of the floor plan of Darwinism. Indeed, some champion it as part of a more general model for everything. Even selection among viable multiverse bubbles has a similar feel to it: some survive while others perish.

I’ve been simultaneously studying quantum computing and complexity theories that are getting impressively well developed. Richard Cleve’s An Introduction to Quantum Complexity Theory and John Watrous’s Quantum Computational Complexity are notable in their bridging from traditional computational complexity to this newer world of quantum computing using qubits, wave functions, and even decoherence gates.

Decoherence sucks for quantum computing in general, but there may be a way to make use of it. For instance, an artificial neural network (ANN) also has some interesting Darwinian-like properties to it. The initial weights in an ANN are typically random real values, designed to simulate the relative strengths of neural connections. Real neural connections are much more complex than this, exhibiting interesting cyclic behavior, saturating and suppressing based on neurotransmitter availability, and so forth, but assuming just a straightforward pattern of connectivity has allowed for significant progress.… Read the rest
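For concreteness, that random initialization looks something like this in NumPy; the scaled-Gaussian scheme and the layer sizes are just common, arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(seed=42)


def init_layer(n_in: int, n_out: int) -> np.ndarray:
    """Random real-valued weights standing in for connection strengths.

    Scaling by 1/sqrt(n_in) is a common heuristic so that signals neither
    explode nor vanish as they pass through the layer.
    """
    return rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))


# A toy two-layer network: 784 inputs -> 128 hidden units -> 10 outputs.
w1 = init_layer(784, 128)
w2 = init_layer(128, 10)
print(w1.shape, w1.std())  # (784, 128), roughly 1/sqrt(784) ≈ 0.036
```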

Metaphors as Bridges to the Future

David Lewis’s (I’m coming to accept this new convention with s-ending possessives!) solution to Putnam’s semantic indeterminacy is that we have a network of concepts that interrelate in a manner that is consistent under probing. We know from cognitive psychology that, as we read, texts that bridge unfamiliar concepts from paragraph to paragraph help us settle those ideas into the network, sometimes tentatively, and sometimes with some theoretical reorganization needed as we learn more. Then there are some concepts that have special referential magnetism and serve as piers for the bridges.

You can see these same kinds of bridging semantics being applied in the quest to solve some of our most difficult and unresolved scientific conundrums. Quantum physics has presented strangeness from its very beginning, and the various interpretations of that strangeness and the efforts to reconcile it with our everyday logic remain incomplete. So it is not surprising that efforts to unravel the strange in quantum physics often appeal to Einstein’s descriptive approach to deciphering the puzzles of electromagnetic wave propagation that ultimately led to Special and then General Relativity.

Two recent approaches that borrow from the Einstein model are Carlo Rovelli’s Relational Quantum Mechanics and David Albert’s How to Teach Quantum Mechanics. Both are quite explicit in drawing comparisons to the relativity approach: Einstein, in merging space and time, and in realizing that inertial and gravitational frames of reference were indistinguishable, introduced an explanation that defied our expectations of ordinary, Newtonian physical interactions. Time was no longer a fixed universal but became locked to observers and their relative motion, and to space itself.

Yet the two quantum approaches are decidedly different, as well. For Rovelli, there is no observer-independent state to quantum affairs.… Read the rest

Theoretical Reorganization

Sean Carroll of Caltech takes on the philosophy of science in his paper, Beyond Falsifiability: Normal Science in a Multiverse, as part of a larger conversation on modern theoretical physics and experimental methods. Carroll breaks down the problems of Popper’s falsification criterion and arrives at a more pedestrian Bayesian formulation for how to view science. Theories arise, their priors get amplified or deflated, that prior support changes (often, for Carroll, for reasons of coherence with other theories and considerations), and, in the best case, the posterior support improves with better experimental data.
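As a bare-bones illustration of that Bayesian picture, take two rival theories and one experiment, with every number invented:

```python
# Prior support for two rival theories, and the likelihood each assigns
# to an observed experimental result. All numbers are invented.
priors = {"theory_A": 0.7, "theory_B": 0.3}
likelihoods = {"theory_A": 0.2, "theory_B": 0.6}  # P(data | theory)

# Bayes' rule: posterior is proportional to prior times likelihood,
# normalized over the competing theories.
evidence = sum(priors[t] * likelihoods[t] for t in priors)
posteriors = {t: priors[t] * likelihoods[t] / evidence for t in priors}
print(posteriors)  # {'theory_A': 0.4375, 'theory_B': 0.5625}
```

The data deflate theory_A’s support and amplify theory_B’s, which is the whole of the pedestrian story; the coherence question is what the rest of the post worries about.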

Continuing with the previous posts’ work on expanding Bayes via considerations from algorithmic information theory (AIT), the non-continuous changes to a group of scientific theories that arrive with new theories or data require some better model than just adjusting priors. How exactly does coherence play a part in theory formation? If we treat each theory as a binary string that encodes a Turing machine, then the best theory, inductively, is the shortest machine that accepts the data. But we know that no machine can compute that shortest machine, so there needs to be an algorithm that searches through the state space to try to locate the minimal machine. Meanwhile, the data may be varying, and the machine may need to incorporate other machines that improve the coverage of the original machine or are driven by other factors, as Carroll points out:

We use our taste, lessons from experience, and what we know about the rest of physics to help guide us in hopefully productive directions.

The search algorithm is clearly not just a brute-force examination of every micro-variation in the consequences of changing bits in the machine. Instead, large reusable blocks of subroutines get reparameterized or reused with variation.… Read the rest
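In that spirit, and only as a toy of my own rather than anything from Carroll’s paper, here is a sketch that trades fit against description length over a tiny family of models (polynomials standing in for machines), since the true shortest-machine quantity is uncomputable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data generated from a quadratic "theory" plus noise.
x = np.linspace(-1, 1, 40)
y = 2.0 * x**2 - x + rng.normal(0, 0.1, size=x.size)


def description_length(degree: int) -> float:
    """Crude two-part code: bits to state the model plus bits for the misfit.

    A stand-in for "shortest machine that accepts the data"; Kolmogorov
    complexity is uncomputable, so we search a restricted family instead.
    """
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    model_bits = 32.0 * (degree + 1)  # cost of stating the theory's parameters
    data_bits = 0.5 * x.size * np.log2(residuals.var() + 1e-12)  # relative cost of misfit
    return model_bits + data_bits


best = min(range(1, 10), key=description_length)
print(best)  # the quadratic wins once extra terms stop paying for themselves
```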