Be Persistent and Evolve

If we think about the evolution of living things, we generally start from the idea that evolution requires replicators, variation, and selection. But what if we loosened that up to the more everyday semantics of the word “evolution,” as when we talk about the evolution of galaxies or of societies or of crystals? Each changes, grows, contracts, and has some kind of persistence that is mediated by a range of internal and external forces. For crystals, the availability of heat and access to the necessary chemicals are key. For galaxies, elements and gravity and nuclear forces are paramount. In societies, technological invention and social revolution overlay the human replicators and their biological evolution. Should we make a leap and just declare that there is some kind of impetus or law to the universe such that, when there are composable subsystems and composition constraints, there will be an exploration of the allowed state space for composition? Does this add to our understanding of the universe?

Wong et al. say exactly that in “On the roles of function and selection in evolving systems” in PNAS. The paper reminds me of the various efforts to explain genetic information growth given raw conceptions of entropy and, indeed, some of those papers appear in the citations. It was once considered an intriguing problem how organisms become increasingly complex in the face of, well, the grinding dissolution of entropy. It wasn’t really that hard for most scientists: Earth receives an enormous load of solar energy that supports the push of informational systems towards negentropy. But, to the earlier point about composability and constraints, the energy is in a proportion that supports the persistence of systems that are complex.…

Intelligent Borrowing

There has been a continuous bleed of biological, philosophical, linguistic, and psychological concepts into computer science since the 1950s. Artificial neural networks were inspired by real ones. Simulated evolution was designed around metaphorical patterns of natural evolution. Philosophical, linguistic, and psychological ideas were transferred into knowledge representation schemes and grammars, both natural and formal.

Since computer science is a uniquely synthetic kind of science and not quite a natural one, borrowing and applying metaphors seems to be part of the normal mode of advancement in this field. There is a purely mathematical component to the field in the fundamental questions around classes of algorithms and what is computable, but there are also highly synthetic issues that arise from architectures that are contingent on physical realizations. Finally, the application to simulating intelligent behavior relies largely on three separate modes of operation:

  1. Hypothesize about how intelligent beings perform such tasks
  2. Import metaphors based on those hypotheses
  3. Given initial success, use considerations of statistical features and their mappings to improve on the imported metaphors (and, rarely, improve with additional biological insights)

So, for instance, we import a simplified model of neural networks as connected sets of weights representing variable activation or inhibition potentials, combined with sudden synaptic firing. Abstractly, we already have an interesting kind of transfer function: one that takes a set of input variables and maps them nonlinearly to the output variables. It’s interesting because nonlinearity means the network can potentially compute very difficult relationships between the inputs and outputs.
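That abstraction is small enough to sketch directly. Below is a minimal, illustrative version in Python with NumPy: a single layer as a weighted sum pushed through a nonlinear squashing function. The particular weights and the sigmoid choice are my own assumptions for illustration, not anything from the paper or a specific library.

```python
import numpy as np

def sigmoid(x):
    # Nonlinear transfer function: squashes any real input into (0, 1),
    # a smooth stand-in for a neuron's firing threshold
    return 1.0 / (1.0 + np.exp(-x))

def layer(inputs, weights, bias):
    # Weighted sum of inputs (activation/inhibition potentials),
    # followed by the nonlinear transfer function
    return sigmoid(weights @ inputs + bias)

# Hypothetical toy weights: 3 inputs mapped to 2 outputs
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))
b = rng.normal(size=2)
x = np.array([1.0, 0.5, -0.25])

y = layer(x, W, b)
print(y)  # two activations, each strictly between 0 and 1
```

Nothing here learns anything; it just shows the shape of the transfer: inputs in, nonlinear combination out.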

But we see limitations immediately, and these are observed in the history of the field. For instance, a single layer of these simulated neurons isn’t expressive enough to compute functions that are not linearly separable, so we add a few layers, and then more and more.…
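The classic illustration of that single-layer limit is XOR: no single threshold unit can compute it, but two layers can. Here is a sketch with hand-chosen weights (an assumption for illustration, not learned values) using hard threshold units:

```python
def step(x):
    # Hard threshold: the "sudden synaptic firing" abstraction
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: one unit computes OR, the other AND,
    # each a linear weighted sum pushed through the threshold
    h_or = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # Output layer: fires when OR is on but AND is off, i.e. XOR
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

The second layer is doing the work a single layer cannot: it recombines the hidden units’ linearly separable pieces into a function that is not itself linearly separable.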