Triangulation Machinery, Poetry, and Politics

I was reading Muriel Rukeyser’s poetry and marveling at some of the lucid yet novel constructions she employs. I was trying to avoid the grueling work of comparing and contrasting Biden’s speech on the anniversary of January 6th, 2021 with the responses from various Republican defenders of Trump. Both pulled into focus the effect of semantic and pragmatic framing as part of the poetic and political processes, respectively. Sorry, Muriel, I just compared your work to the slow boil of democracy.

Reaching in interlaced gods, animals, and men.
There is no background. The figures hold their peace
In a web of movement. There is no frustration,
Every gesture is taken, everything yields connections.

There is a theory about how language works that I’ve discussed here before. In this theory, from Donald Davidson primarily, the meaning of words and phrases is tied directly to a shared interrogation of what each person is trying to convey. Imagine a child observing a dog while a parent says “dog” and is fairly consistent with that usage across several different breeds that are presented to the child. The child may overuse the word, calling a cat a dog at some point, at which point the parent corrects the child with “cat,” and the child proceeds along through this interrogatory process, triangulating in on the meaning of dog versus cat. Triangulation is Davidson’s term, reflecting three parties: two people plus the thing or idea they are discussing. In the case of human children, we also know that there are some innate preferences the child will apply during the triangulation process, like preferring “whole object” semantics to atomized ones, and assuming different words mean different things even when applied to the same object: so “canine” and “dog” must refer to the same object in slightly different ways since they are differing words, and indeed they do: dog IS-A canine but not vice-versa.
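That correction loop can be sketched in code. This is a toy illustration only: the `Learner` class, its feature sets, and the overlap-based matching rule are invented for this post and stand in for a far richer cognitive process, but they show the shape of triangulation as iterative hypothesis-and-correction.

```python
# Toy sketch of Davidson-style triangulation: a learner guesses words for
# objects, and a speaker's corrections narrow the learner's hypotheses.
# All names and features here are invented for illustration.

class Learner:
    def __init__(self):
        self.lexicon = {}  # frozenset of features -> word

    def guess(self, features):
        # Whole-object bias: pick the word for the most similar known object.
        best_word, best_overlap = None, -1
        for known, word in self.lexicon.items():
            overlap = len(known & features)
            if overlap > best_overlap:
                best_word, best_overlap = word, overlap
        return best_word

    def correct(self, features, word):
        # A correction from the other party adds a triangulated data point.
        self.lexicon[frozenset(features)] = word

learner = Learner()
learner.correct({"four-legged", "barks", "furry"}, "dog")
print(learner.guess({"four-legged", "furry", "meows"}))  # overgeneralizes: dog
learner.correct({"four-legged", "furry", "meows"}, "cat")
print(learner.guess({"four-legged", "furry", "meows"}))  # now: cat
```

The overgeneralization step mirrors the child calling a cat a dog: the nearest known hypothesis wins until a correction splits the space.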

An objection to this triangulation notion of meaning is that it is possible and even likely that when interrogating and building this triangulated model, one party may never “get it.” That is, when one party describes, say, the atom as a series of quantized probabilistic fields working from electron shells inwards to mysterious quark bags, the other party may never achieve the background understanding through interrogation and triangulation needed to reach the same level of meaning attachment as the first party. They get some of it, but it never comes into focus. This is also the classic Quine “gavagai” thought experiment, where a tribe member who speaks an unknown language seems to refer to a rabbit as a “gavagai” but is always thinking in rather mysteriously unrelated ways compared to the anthropologist/linguist interviewing her. Are the two ever sharing the same meaning?

These problems pull at the question of certainty with regard to translation and meaning, and demolish the notion of perfection. But they also point out that we can nonetheless achieve an imperfect, operational meaning. For instance, if there is a spiritual-symbolic meaning of gavagai that transcends merely the physical form of the rabbit, we hope that the anthropologist can unravel it through additional study, so the triangulation process was merely incomplete. It is always an ongoing optimization process that arrives at a shared network of meaning, integrated at each end with the interlocutors’ background knowledge obtained by other triangulation events.

If we had to come up, then, with a cognitive model for performing that triangulation and integration, we would want it to have several critical features. First, it should be “effective” in the sense that it should have mechanisms for performing the functions needed to triangulate meaning, with no hand-waving. Second, it should be mostly optimizing. That is, we expect learning to work often enough that these parties can communicate with one another, but errors happen, and massive misunderstandings are not uncommon either, so “mostly” is the operative word. Given that, we just need a computing machine that can create hypotheses and then reorder or optimize its model to guide it toward a closer approximation of what the other party means. And even then, the other party’s understanding may change as the interrogation proceeds, causing revisions for both as the triangulation zeros in on a closer representation of the shared meaning.
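That mutual revision, where both parties adjust as the interrogation proceeds, reduces to a very small numeric sketch. Everything here is invented for illustration: a single “attachment strength” number per party and a fixed revision rate. The point is only that two parties who each nudge toward the other’s usage settle on an operational shared value, even though neither started out “correct.”

```python
# Two parties iteratively revise toward each other's usage of a word.
# The single number and the revision rate are illustrative inventions.
a = {"dog": 0.2}   # Party A's attachment strength for some feature
b = {"dog": 0.9}   # Party B's attachment strength for the same feature
rate = 0.5

for _ in range(10):
    # Each round of interrogation, each party revises toward the other.
    a["dog"] += rate * (b["dog"] - a["dog"])
    b["dog"] += rate * (a["dog"] - b["dog"])

print(round(a["dog"], 3), round(b["dog"], 3))  # -> 0.667 0.667
```

Note that the shared value is neither party’s starting point; the “meaning” they converge on is a joint product of the exchange, which is the heart of the triangulation picture.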

A few years ago I would have argued against artificial neural networks as a good candidate for such a model, since small versions of them tend to be very gradualistic in their optimization behavior. General “gradient descent” optimization is like that: we encode the possible attachment language for the word that we are trying to triangulate on, and we are done when Party A’s attachments minimize the error versus Party B’s, bouncing along through local optima of mistaken conceptions. Recent work on one-shot learning in large deep learning networks suggests ANNs may not be quite as constrained in their performance on these matters. Depending on the approach, this may have to do with attention nodes that can switch between large sub-networks and therefore provide a less gradual traversal of the state space. It might also be possible to hybridize the networks as modular controllers for other architectures that involve registers and state counters, allowing for more rapid switching between potential interpretations.
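To make “bouncing along through local optima of mistaken conceptions” concrete, here is a hedged one-dimensional illustration. The error surface is invented purely for this example; the mechanics, plain gradient descent with a small step size, are the standard technique. Starting in the wrong basin, the learner settles into a mistaken conception and never finds the better interpretation.

```python
# Plain gradient descent on a bumpy "meaning error" surface (invented
# for illustration). The descent stalls in a local minimum near x = -0.3
# and never reaches the better minimum near x = 2.4.
import math

def error(x):
    return (x - 3) ** 2 / 10 + 0.4 * math.sin(2 * x) + 0.4

def grad(x, h=1e-6):
    # Numerical derivative via central differences.
    return (error(x + h) - error(x - h)) / (2 * h)

x = 0.0  # start in the wrong basin
for _ in range(200):
    x -= 0.05 * grad(x)

print(round(x, 2))  # -> -0.3 : stuck, though error(2.4) is far lower
```

A one-shot or attention-driven learner, by contrast, can jump between basins rather than sliding downhill within one, which is the intuition behind the “less gradual traversal of the state space” above.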

There is also a “constrained creativity” problem associated with the choice of triangulating questions. Clearly the space we want to examine is predetermined by the expectations we already have from previous matches in this language game, going all the way back to childhood. And the innate biases like the “whole object” preference play a role as well. Moreover, we are grounded and epistemologically “piered” to core naive physics about ordinary lives, and naive psychology about people (“naive” in the sense of everyday or commonplace, not in the sense of lacking experience). How do we create this model such that it is generative, creative, and yet constrained, all in the pursuit of triangulating meanings for us? Trying to encode this as a meta-goal of a machine, say an ANN, requires training it to seek the resolution of ambiguity, which is likely possible but has no clear alignment with current methods.
