Darwin Was Wrong

Originally published on LiveJournal, 7.23.06

Darwin was wrong and Butler was right.

Butler was furiously opposed to the idea that selection depended on random (“stochastic” in the language of evolution) mutations in the genotype. Remember, this was before genetic theory, so they could think of this only in general terms. Butler argued that these mutations simply could not, logically, be random; somehow there must be a very “deep cunning” in the genotype that produces mutations which only appear random to us dumb observers. But how could the genotype “know” which mutations to produce? Butler surmised that there must exist some kind of information circuit from the phenotype to the genotype that goes beyond the simple selection of mates.

For Darwin and others at the time, the only logical direction of Butler’s thinking was Lamarckism: the theory that changes in the phenotype lead directly to changes in the genotype, meaning that proto-giraffes kept stretching their necks to reach leaves, and this stretching somehow caused a change in their genotype, and thus their children had longer necks. Lamarckism was supposedly proven without doubt to be entirely wrong, and thus Butler must have been wrong, too. Right? Not so fast, says Bateson.

Bateson says that there could be phenomena in evolution that simulate Lamarckian change, through indirect communication between phenotype and genotype. He used cybernetic theory to model both genotype and phenotype as parts of a very broad ecological system. First, he assumed that the phenotype is immensely complex, and that the genotype may respond to signals routed through the phenotype that originate even outside it: the phenotype can act as a chemical conduit. Also, it’s a well-known fact that the genotype can affect the phenotype’s preferences in selection, going beyond the hard-nosed “survivalism” espoused by Darwinists.

For example (straight out of Bateson): giraffes stretching their necks might tire themselves, so that ordinary Darwinian selection favors larger hearts that pump more blood and keep them from getting so tired all the time. What the genotype did, then, was create a heart that could get tired, so that there would be a second level of learning: what Bateson calls deutero-learning, or Learning II. The bigger-heart gene might be linked to a longer-neck gene (and to other sets of genes, too); we know for a fact that genes work together. The result looks like Lamarckian change, but the circuit is longer, and it takes quite a few generations to see it as a whole. Actually, it requires more than just looking at the population: you have to take into account that the trees grew taller, forcing the giraffes to stretch their necks in the first place.

Could the genotype “know” about trees? Not directly, but there could be higher levels of learning here: Learning III, Learning IV, and so on. There could be systems in place by which organisms change the environment in order to force later generations to change themselves. In short, in the very structure of chromosomes there might be something that allows for the eventual development of human-like intelligence. In which case, we may say that chromosomes are structurally intelligent.
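To make the shape of that longer circuit concrete, here is a minimal toy simulation of my own sketching (this is not Bateson’s model; the population size, fitness functions, mutation sizes, and the linkage coefficient are all invented for illustration). Mutation is purely random and selection is plain Darwinian, yet because the neck and heart genes are linked and the trees slowly grow taller, the population’s necks end up tracking the trees:

```python
import random

# Toy sketch: random mutation + ordinary selection, with two linked
# traits and a drifting environment, produces Lamarckian-LOOKING change.
# All numbers below are illustrative assumptions, not measured values.

POP, GENERATIONS = 200, 300
LINKAGE = 0.6  # assumed correlated mutation between neck and heart genes

def fitness(neck, heart, tree_height):
    feeding = -abs(neck - tree_height)   # neck should reach the leaves
    fatigue = -abs(heart - 0.5 * neck)   # a stretched neck tires a small heart
    return feeding + fatigue

def mutate(neck, heart):
    # Purely stochastic mutation; the shared term models gene linkage.
    shared = random.gauss(0, 0.05)
    return (neck + random.gauss(0, 0.05) + LINKAGE * shared,
            heart + random.gauss(0, 0.05) + LINKAGE * shared)

pop = [(random.gauss(2.0, 0.1), random.gauss(1.0, 0.1)) for _ in range(POP)]
tree_height = 2.0

for gen in range(GENERATIONS):
    tree_height += 0.01  # the environment itself drifts: trees grow taller
    pop.sort(key=lambda g: fitness(*g, tree_height), reverse=True)
    parents = pop[:POP // 2]  # truncation selection on the fittest half
    pop = [mutate(*random.choice(parents)) for _ in range(POP)]
    if gen % 50 == 0:
        mean_neck = sum(n for n, _ in pop) / POP
        mean_heart = sum(h for _, h in pop) / POP
        print(f"gen {gen:3d}: trees {tree_height:.2f}  "
              f"neck {mean_neck:.2f}  heart {mean_heart:.2f}")
```

The point of the sketch is only that necks appear to lengthen “because” the trees grew, as a Lamarckian would say, while the actual mechanism is a longer feedback circuit of random mutation, gene linkage, and selection spread over many generations; no direct phenotype-to-genotype message is ever passed.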

There’s something much more complex going on in evolution than high-school textbooks give credit for, and the genotype can, indeed, be said to “know” something about how the phenotype behaves and chooses mates. There very well may be a very deep cunning going on. Butler wasn’t using Bateson’s terms, but his instinct was on the mark. Most remarkably, he rejected Darwin’s notion of “heredity.” He called genotypic transformation “memory.” He furthermore argued that such memory was identical to human memory. A true pioneer of cybernetics.

This is very good news for genetic research today, because finally, with supercomputers, we have tools that can analyze very complex information systems and understand genetic diseases from a more “ecological” (cybernetic) perspective. It means, though, letting go of Darwin, and in a way embracing people like Butler who argued for “intelligent design”. It is also good news for those hoping for sentient life on other planets: within the structures of amino acids (already found plentifully in meteorites) there may be the structural ability for Learning XXXII or whatever, something that can evolve quite easily (in millions of years) into human-like sentience.

There’s more to this, perhaps even more important for us right now.

It makes sense, then, that Bateson argued publicly against fellow scientists who said that intelligent design should not be taught in the classroom. Bateson didn’t think that there was a god controlling evolution, but he was very depressed by how Darwinism turned into an obsessive orthodoxy that denied more subtle thinking. He felt that Darwin’s theory had set science back, and he thought that teaching more options in schools would create more flexible scientists for the future.

I have to say, I agree, and I was always upset at how closed-minded this anti-intelligent-design movement made science appear. If evolution is a better theory, it should be shown to be so in class. As is usual with students, they wouldn’t necessarily accept the entire argument, and thinking things through for themselves would open up new avenues to better thinking. It could create someone like Gregory Bateson, whose father, William Bateson, was exactly the kind of oddball scientist who didn’t agree with the other scientists of his time, and wasn’t bothered by the fact that he couldn’t offer an alternative solution. Sometimes it’s good just to show the holes in someone’s theory. (The same goes for Marx, who uncovered capitalism’s inherent logical flaws without really offering any plan for what socialism would look like.)

Anyway. The real lesson here is not so much about Darwin and Butler as about what science should look like. And when scientists start burning books, even intelligent design books, we should all be a bit worried about the future of science.