Chapter 1: Computers, cybernetics, and information theory

The body is the only vehicle for the soul.

The mind inhabits the brain just as the brain inhabits the body.

So long as the body exhibits a certain complexity of interactions with its environment, the central nervous system will manifest an accompanying field property known as *will*: an attentional drive with its own hierarchy of motivations and its own energy source. We attend to the world, and intend things to happen in it. As long as our interactions are sufficiently complex, we exhibit consciousness, that rich mixture of attending, intending, and experiencing.

When the body pays attention to, and reacts to, the world around it, the brain produces mind. Mind is a difference engine, a thing (or rather a process) that compares discriminations in the world. We consider/compare/make a ratio of different things in the world. In the mind, information is any difference which makes a difference. Anything you can draw a distinction between in the world (red or orange, warm or cool) becomes a distinct idea: Perception drives Categorization. The mind maps categories on the world, just as the brain maps actions in the world. This is an "Information Theoretical" view of mind, as given to us by the late, great anthropologist Gregory Bateson.

A note on the use of the word *maps*... I am using the words "map," "maps," and "mapping" in the very specialized sense used by Nobel Laureate biologist Gerald Edelman. In his Theory of Neuronal Group Selection, Dr. Edelman posits maps as the fundamental unit of selection in the brain. A map can connect visual-spatial awareness with representations of the outside world in much the same way that we think of road maps or topographical maps of geography, but the term means much more than that. Maps also refer to how visual inputs connect to motor outputs. Thus, both genetic impulse and adaptive environmental circumstance map the perception of a breast onto the suckling instinct in a newborn.

So the bold assertion here is that consciousness is an emergent property of the complexity of neurophysiology. We're going to have to build up quite a bit of supporting evidence in order to make this clear.

Just what "emergence" is, for instance, is a mighty complex idea in its own right, so let's start with the notion of complexity. Complexity is a very simple mathematical idea. Something with more parts is more complex than something with fewer parts. And something with more types of parts is much more complicated than something with fewer types of parts. And, most important of all, something with more types of relationships between its parts is vastly more complex than something with a simpler internal logic.
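The three ingredients of complexity just described (more parts, more types of parts, more types of relationships between parts) can be made concrete with a toy tally. The systems and the counting scheme below are invented purely for illustration, not a real measure from complexity science:

```python
# A toy tally of the three ingredients of complexity described above:
# the number of parts, the number of part *types*, and the number of
# *relationship types* between parts. Illustrative only.

def complexity_profile(parts, relationships):
    """Return (part count, part-type count, relationship-type count)."""
    return (len(parts), len(set(parts)), len(set(relationships)))

# A single-celled amoeba: one part, one type, no internal relationships.
amoeba = complexity_profile(["cell"], [])

# A toy "tadpole": a few parts of several types, connected by several
# different *kinds* of relationship.
tadpole = complexity_profile(
    ["neuron", "neuron", "muscle", "skin", "gut"],
    [("neuron", "signals", "muscle"),
     ("muscle", "moves", "skin"),
     ("gut", "feeds", "neuron")],
)

print(amoeba)   # (1, 1, 0)
print(tadpole)  # (5, 4, 3)
```

On every axis the tadpole outscores the amoeba, and the gap widens fastest on the last axis, which is the point of the paragraph above.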

So a hypothetical amoeba of a hundred trillion identical cells would be more complicated than an amoeba with only one cell, but even a tiny little tadpole or guppy with just a few thousand cells *composed of several dozen different cell types* is quite a bit more complicated than either. And we should expect our little amphibian friend to exhibit much more interesting heat and electromagnetic signatures, and to be much more unpredictable in its behaviors. That interest, and that unpredictability, are not accidental, but in fact are key ingredients in what it means for something to be complicated.

But what does it mean to say that something "emerges from" that complexity? Let's use the example of ice. Frozen water is composed of hydrogen and oxygen, just like water in gas or liquid form. But the crystalline solid we call ice, with all its familiar properties, exists in neither element alone: from the complexity of the hydrogen-oxygen relationship, the possibility of this particular phase transition into solid matter emerges. The language of chaos theory (the study of dynamical systems) is a little confusing, but it may make more sense when we revisit it later in the context of brains and consciousness.

For a lot of how brains work, and how their particularly complex processes interact, it may be useful to begin with the example of computers. Computers are at their essence, after all, tools to help us with thinking. It is perhaps unsurprising, then, that thinking about computers helps you think about thinking. Understanding how a computer goes about allocating resources for the various tasks that it accomplishes is very useful in understanding how the human brain serves the mind and body. In the coming pages, we will consider how computers keep track of themselves and the little parts of the world they monitor, including how they interact with us, all with an eye towards how this will help us understand the dynamic systems of brains and minds.

the curious power of metaphor, the insidious danger of analogy

Before we enter into this perilous metaphor of "central nervous system as computer," why use a metaphor at all? Isn't it just a literary device? By its very nature, isn't the explanatory power of a metaphor quite limited as it is using one thing to describe another? But if that were the case, why is the high school teaching metaphor of "picture the atom as a tiny solar system" so pervasive?

A better question might be: why is it so very effective?

Analogies and metaphors are so very effective because that's how sympathetic networks resonate, how neighboring neural circuits communicate. In other words, relating this thing to that in rather fuzzy, imperfect comparison is just how the brain does a great deal of its business. You might say that metaphor is the lingua franca of mental comparisons. And performing comparisons, individual acts of perceptual or rational discrimination, is the essential job function of a rational mind.

((use the image of the differential potential (from Korzybski) to illustrate Edelman's degenerate networks. fuzzy, non-one-to-one networks of association rather than strongly distinct, Venn-diagram-style Aristotelian categories.)) --this will come in later chapters

((analogy between phase transitions and stagelike human development,
chaos theory and Piaget...)) --this too will come in later chapters

how our thinking machines work

The analogy of computer systems for brain systems is a rather fruitful one, and not for accidental reasons. Let us examine the roots of the word technology. (The history of a word is frequently quite revealing: etymology can provide the excavation of a particular thought.) The ancient Greek root techne evolved over time from meaning art and skill toward the modern engineering sense of the word we have today. It signifies the use of skill and design intention to fashion things in the real world. Techne is the engineering of intelligence; in other words, "building smarts." Tools are cognitive artifacts, extensions of our minds' attempts to manipulate the world.

Don't think for a moment that I'm saying that computers are actually intelligent: in fact they are almost as stupid as they are fast and literal. The important point is that computers are a natural extension of our impulse to manipulate and refine the world around us, like bees' honeycombs, or birds' nests or beavers' dams.

Marshall McLuhan once contended that all technology functions as extensions of our five senses: optics the extension of the eye, the wheel an extension of the foot (and the motor system in general), clothing an extension of the skin... In that vein, it is rather clear that computers--much like typewriters and the Gutenberg press before them--are extensions of our language faculties.
Since language, in Edelman's quite sensible and defensible view, is the scaffolding out of which our higher-order consciousness (self-awareness and rationality) is built, it makes good sense for us to explore the parallels between brains and computers a bit further.

First off, how does a computer work? A computer is an electrical piece of equipment (much like the human body, but typically much less wet) which has inputs, processes, and outputs. It can be defined functionally as the relationship between those three things. This can be called a cybernetic view of computers, an analysis which concerns itself with control and communication in animal and machine systems. There is a power supply, which connects to a series of circuits, switches, and peripheral devices. It is telling that we tend to think of the computer as the collection of those peripheral devices: principally the monitor or screen, but also the keyboard, mouse, and various hard disk and optical drives.

This would be a good time to reiterate: it is of utmost importance not to push these metaphors too far, as that leads to overly mechanistic (and quite erroneous) views of what the brain actually is. Thinking of a computer's RAM and hard disk drives as short- and long-term memory, respectively, turns out to be rather misleading. Even worse, a single switch in computer memory is in almost all major regards NOT like a single neuron. This last point is missed so frequently that it deserves a little more elucidation.

While it is true that a neuron has a quite famous all-or-none firing principle (it either fires or it doesn't, never at greater or lesser intensity) and a switch has a similarly binary yes-or-no state, that is where the similarities end. Neurons use frequency of firing to communicate important information about intensity, whereas computers emphatically do not use variations in timing as any sort of information signal. Computers are also notoriously serial in their communications: a switch in memory (not to be confused with a networking switch, like a hub or router) is typically lined up in series, with one other switch on either side, and even a processor that handles 32 or 64 bits at once is operating on only a few dozen channels in parallel. By contrast, the average neuron takes inputs from somewhere in the neighborhood of five hundred to ten thousand other neurons, and often signals out to just as many. Furthermore, the cell body of the neuron reacts with a graded potential to incoming neural signals: in extremely analog fashion, signals arriving closer to the "axon hillock" (where the electrical all-or-none signal moves forth from the neuron) weigh much more strongly in the neuron's decision whether or not to fire than signals more distal from the axon. The neuron is, for these reasons alone, massively parallel and extremely analog. Considering further the chemical nature of neuronal signaling makes it even clearer that a neuron is in almost no way like a single switch in computer memory.
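The contrast between a memory switch and a neuron can be sketched in a few lines of code. This is a deliberately crude caricature (the class names, weighting rule, and threshold are all invented for illustration, not a real neuron model): the bit simply holds one binary state, while even a toy neuron sums thousands of analog inputs at once, weights synapses near the axon hillock more heavily, and only then makes its single all-or-none decision:

```python
import random

# A memory switch: one bare binary state, written and read serially.
class Bit:
    def __init__(self):
        self.state = 0

    def set(self, value):
        # One input line, one stored state. No weighting, no summation.
        self.state = 1 if value else 0

# A toy neuron (illustrative caricature, not a biophysical model):
# thousands of parallel analog inputs, each weighted by its distance
# from the axon hillock, summed into a graded potential, then fired
# all-or-none against a threshold.
class ToyNeuron:
    def __init__(self, n_inputs=5000, threshold=0.5):
        # Closer synapses (smaller distance) get larger weights.
        self.weights = [1.0 / (1 + distance) for distance in range(n_inputs)]
        self.threshold = threshold

    def fire(self, inputs):
        # Graded potential: an analog weighted sum over ALL inputs at once.
        potential = sum(w * x for w, x in zip(self.weights, inputs))
        # All-or-none output: the only digital step in the whole process.
        return 1 if potential >= self.threshold else 0

neuron = ToyNeuron()
signals = [random.random() for _ in range(5000)]
print(neuron.fire(signals))  # always 1 or 0 -- never "half a spike"
```

Everything interesting about the neuron happens before the binary step: the fan-in, the distance-dependent weighting, the analog summation. The bit has none of that, which is the point of the paragraph above.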

In the context of our consideration of the human brain, then, the computer is best conceived of in cybernetic terms: namely, input, process, and output. This is also the grand picture of the human nervous system, in which the sensory nervous system is input, the central nervous system is processing, and the motor nervous system is output. The "magic" of computers happens in the processing stage, and that is true of us as well. We are able to store data about how the world works in fuzzy little containers, these things called words, which bear more than a passing resemblance to computer variables and constants.
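The cybernetic scheme of input, process, and output can be sketched as three tiny functions. The thermostat scenario and every name in it are invented for illustration; the point is only that the system is defined by the relationship between the three stages, not by any one of them:

```python
# A minimal sketch of the cybernetic view: a system defined purely by
# the relationship between input, process, and output. The thermostat
# example is an invented stand-in for illustration.

def sense(world):
    """Input: the sensory nervous system, or a computer's input devices."""
    return world["temperature"]

def process(reading, setpoint=20.0):
    """Process: the central nervous system, or the CPU.
    Compare a discrimination (too cold / warm enough) and decide."""
    return "heat" if reading < setpoint else "idle"

def act(decision):
    """Output: the motor nervous system, or a computer's output devices."""
    return {"heat": "furnace on", "idle": "furnace off"}[decision]

world = {"temperature": 15.0}
print(act(process(sense(world))))  # furnace on
```

Note that the "magic" lives entirely in `process`: sensing and acting are just the interfaces through which the comparison meets the world.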