View Full Version : [Interesting] 10 Important Differences Between Brains and Computers

03-31-2008, 11:58 AM
Great article by Chris Chatham (http://rintintin.colorado.edu/%7Echathach/) over on ScienceBlogs.com

10 Important Differences Between Brains and Computers (http://scienceblogs.com/developingintelligence/2007/03/why_the_brain_is_not_like_a_co.php)

Although the brain-computer metaphor has served cognitive psychology well, research in cognitive neuroscience has revealed many important differences between brains and computers. Appreciating these differences may be crucial to understanding the mechanisms of neural information processing, and ultimately for the creation of artificial intelligence. Below, I review the most important of these differences (and the consequences to cognitive psychology of failing to recognize them): similar ground is covered in this excellent (though lengthy) lecture (http://www.msri.org/cgi-bin/real.cgi?realhost=real.msri.org&realfile=/hosted/pmmb/2002/mumford/1).

Difference # 1: Brains are analogue; computers are digital
It's easy to think that neurons are essentially binary, given that they fire an action potential if they reach a certain threshold, and otherwise do not fire. This superficial similarity to digital "1's and 0's" belies a wide variety of continuous and non-linear processes that directly influence neuronal processing.
For example, one of the primary mechanisms of information transmission appears to be the rate at which neurons fire - an essentially continuous variable. Similarly, networks of neurons can fire in relative synchrony or in relative disarray; this coherence affects the strength of the signals received by downstream neurons. Finally, inside each and every neuron is a leaky integrator circuit, composed of a variety of ion channels and continuously fluctuating membrane potentials.
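The "leaky integrator" the paragraph mentions can be sketched in a few lines. This is a minimal leaky integrate-and-fire model with made-up constants (tau, threshold, reset are illustrative, not measured values); it shows how binary spikes nonetheless carry a continuous rate code:

```python
# Minimal sketch of a leaky integrate-and-fire neuron. Constants are
# illustrative, chosen only to make the rate-coding point visible.
def simulate_lif(input_current, dt=1.0, tau=10.0, threshold=1.0, reset=0.0):
    """Return spike times; the membrane potential leaks between inputs."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: potential decays toward rest, driven by input.
        v += dt * (-v / tau + i_in)
        if v >= threshold:          # all-or-none spike once threshold is crossed
            spikes.append(t)
            v = reset               # potential resets after firing
    return spikes

# A stronger steady input produces a higher firing rate - the continuous
# variable riding on top of the binary spikes.
weak = simulate_lif([0.12] * 100)
strong = simulate_lif([0.30] * 100)
```

Even though each spike is all-or-none, the spike *rate* varies smoothly with the input, which is the analogue signal the article is pointing at.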
Failure to recognize these important subtleties may have contributed to Minsky & Papert's infamous mischaracterization of perceptrons, neural networks without an intermediate layer between input and output. In linear networks, any function computed by a 3-layer network can also be computed by a suitably rearranged 2-layer network. In other words, combinations of multiple linear functions can be modeled precisely by just a single linear function. Since their simple 2-layer networks could not solve many important problems, Minsky & Papert reasoned that larger networks also could not. In contrast, the computations performed by more realistic (i.e., nonlinear) networks are highly dependent on the number of layers - thus, "perceptrons" grossly underestimate the computational power of neural networks.
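The "layers collapse" point is easy to verify directly. A small sketch with arbitrary toy matrices: stacking two linear layers (no nonlinearity between them) computes exactly the same function as one combined linear layer:

```python
# Sketch of the "linear layers collapse" argument: two stacked linear maps
# (W2 applied after W1) equal a single linear map (W2 @ W1).
# The 2x2 matrices are arbitrary toy values.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def matvec(m, v):
    return [sum(m[i][k] * v[k] for k in range(len(v))) for i in range(len(m))]

W1 = [[1.0, 2.0], [0.5, -1.0]]   # input -> hidden (no nonlinearity)
W2 = [[3.0, 0.0], [1.0, 1.0]]    # hidden -> output

x = [2.0, -1.0]
two_layer = matvec(W2, matvec(W1, x))     # pass through both layers
one_layer = matvec(matmul(W2, W1), x)     # single equivalent layer

assert two_layer == one_layer  # depth buys nothing without nonlinearity
```

With a nonlinearity (e.g. a threshold) between the layers, the equivalence breaks down, which is exactly why multilayer nonlinear networks are more powerful than Minsky & Papert's analysis suggested.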

Difference # 2: The brain uses content-addressable memory
In computers, information in memory is accessed by polling its precise memory address. This is known as byte-addressable memory. In contrast, the brain uses content-addressable memory, such that information can be accessed in memory through "spreading activation" from closely related concepts. For example, thinking of the word "fox" may automatically spread activation to memories related to other clever animals, fox-hunting horseback riders, or attractive members of the opposite sex.
The end result is that your brain has a kind of "built-in Google," in which just a few cues (key words) are enough to cause a full memory to be retrieved. Of course, similar things can be done in computers, mostly by building massive indices of stored data, which then also need to be stored and searched through for the relevant information (incidentally, this is pretty much what Google does, with a few twists).
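The "spreading activation" idea above can be caricatured in a few lines. This is a toy sketch, with made-up associations, in which a couple of partial cues activate related concepts and the most strongly activated stored memory wins:

```python
# Toy sketch of content-addressable retrieval via "spreading activation".
# The association lists are invented examples, not real data.

ASSOCIATIONS = {
    "fox": ["animal", "clever", "hunt", "red"],
    "hound": ["animal", "hunt", "dog"],
    "raven": ["animal", "clever", "black"],
}

def recall(cues):
    """Return the stored concept whose associations best overlap the cues."""
    scores = {}
    for concept, features in ASSOCIATIONS.items():
        # Each matching cue adds activation to the concept.
        scores[concept] = sum(1 for c in cues if c in features)
    return max(scores, key=scores.get)

# A few partial cues are enough to retrieve the whole memory.
print(recall(["clever", "hunt"]))
```

There is no address lookup anywhere: retrieval is driven entirely by overlap between the cue content and the stored content, which is the "built-in Google" behavior the article describes.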
Although this may seem like a rather minor difference between computers and brains, it has profound effects on neural computation. For example, a lasting debate in cognitive psychology concerned whether information is lost from memory because of simple decay or because of interference from other information. In retrospect, this debate is partially based on the false assumption that these two possibilities are dissociable, as they can be in computers. Many are now realizing that this debate represents a false dichotomy (http://act-r.psy.cmu.edu/papers/365/ema_cds_2002_a.pdf).

Difference # 3: The brain is a massively parallel machine; computers are modular and serial
An unfortunate legacy of the brain-computer metaphor is the tendency for cognitive psychologists to seek out modularity in the brain. For example, the idea that computers require memory has led some to search for the "memory area," when in fact these distinctions are far messier. One consequence of this over-simplification is that we are only now learning that "memory" regions (such as the hippocampus) are also important for imagination (http://cubic-parsec.blogspot.com/2007/01/hippocampus-and-imagination.html), the representation of novel goals (http://forebrain.blogspot.com/2007/01/goal-related-activity-in-hippocampal.html), spatial navigation (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=5124915&dopt=Abstract), and other diverse functions.
Similarly, one could imagine there being a "language module" in the brain, as there might be in computers with natural language processing programs. Cognitive psychologists even claimed to have found this module, based on patients with damage to a region of the brain known as Broca's area. More recent evidence has shown that language too is computed by widely distributed and domain-general neural circuits, and Broca's area may also be involved in other computations (see here for more on this (http://www.psych.upenn.edu/stslab/Language_organ.pdf)).

Difference # 4: Processing speed is not fixed in the brain; there is no system clock
The speed of neural information processing is subject to a variety of constraints, including the time for electrochemical signals to traverse axons and dendrites, axonal myelination, the diffusion time of neurotransmitters across the synaptic cleft, differences in synaptic efficacy, the coherence of neural firing, the current availability of neurotransmitters, and the prior history of neuronal firing. Although there are individual differences in something psychometricians call "processing speed," this does not reflect a monolithic or unitary construct, and certainly nothing as concrete as the speed of a microprocessor. Instead, psychometric "processing speed" probably indexes a heterogenous combination of all the speed constraints mentioned above.
Similarly, there does not appear to be any central clock in the brain, and there is debate as to how clock-like the brain's time-keeping devices actually are. To use just one example, the cerebellum is often thought to calculate information involving precise timing, as required for delicate motor movements; however, recent evidence suggests that time-keeping in the brain bears more similarity to ripples on a pond (http://www.scientificblogging.com/news/how_does_your_brain_tell_time_study_challenges_theory_of_inner_clock) than to a standard digital clock.

Difference # 5 - Short-term memory is not like RAM
Although the apparent similarities between RAM and short-term or "working" memory emboldened many early cognitive psychologists, a closer examination reveals strikingly important differences. Although RAM and short-term memory both seem to require power (sustained neuronal firing in the case of short-term memory, and electricity in the case of RAM), short-term memory seems to hold only "pointers" to long-term memory whereas RAM holds data that is isomorphic to that being held on the hard disk. (See here (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=15377128&dopt=Abstract) for more about "attentional pointers" in short-term memory).
Unlike RAM, the capacity limit of short-term memory is not fixed; the capacity of short-term memory seems to fluctuate with differences in "processing speed" (see Difference #4) as well as with expertise and familiarity.

Difference # 6: No hardware/software distinction can be made with respect to the brain or mind
For years it was tempting to imagine that the brain was the hardware on which a "mind program" or "mind software" is executing. This gave rise to a variety of abstract program-like models of cognition, in which the details of how the brain actually executed those programs was considered irrelevant, in the same way that a Java program can accomplish the same function as a C++ program.
Unfortunately, this appealing hardware/software distinction obscures an important fact: the mind emerges directly from the brain, and changes in the mind are always accompanied by changes in the brain. Any abstract information processing account of cognition will always need to specify how neuronal architecture can implement those processes - otherwise, cognitive modeling is grossly underconstrained. Some blame this misunderstanding for the infamous failure of "symbolic AI (http://www.psych.utoronto.ca/%7Ereingold/courses/ai/symbolic.html)."

Difference # 7: Synapses are far more complex than electrical logic gates
Another pernicious feature of the brain-computer metaphor is that it seems to suggest that brains might also operate on the basis of electrical signals (action potentials) traveling along individual logic gates. Unfortunately, this is only half true. The signals which are propagated along axons are actually electrochemical in nature, meaning that they travel much more slowly than electrical signals in a computer, and that they can be modulated in myriad ways. For example, signal transmission is dependent not only on the putative "logic gates" of synaptic architecture but also on the presence of a variety of chemicals in the synaptic cleft, the relative distance between synapse and dendrites, and many other factors. This adds to the complexity of the processing taking place at each synapse - and it is therefore profoundly wrong to think that neurons function merely as transistors.

Difference #8: Unlike computers, processing and memory are performed by the same components in the brain
Computers process information from memory using CPUs, and then write the results of that processing back to memory. No such distinction exists in the brain. As neurons process information they are also modifying their synapses - which are themselves the substrate of memory. As a result, retrieval from memory always slightly alters those memories (usually making them stronger, but sometimes making them less accurate - see here (http://develintel.blogspot.com/2006/05/origins-of-memory-distortion.html) for more on this).

Difference # 9: The brain is a self-organizing system
This point follows naturally from the previous point - experience profoundly and directly shapes the nature of neural information processing in a way that simply does not happen in traditional microprocessors. For example, the brain is a self-repairing circuit - something known as "trauma-induced plasticity" kicks in after injury. This can lead to a variety of interesting changes, including some that seem to unlock unused potential in the brain (known as acquired savantism (http://www.sciam.com/article.cfm?articleID=0006216C-45CB-116C-85CB83414B7F0000&sc=I100322)), and others that can result in profound cognitive dysfunction (as is unfortunately far more typical in traumatic brain injury and developmental disorders).
One consequence of failing to recognize this difference has been in the field of neuropsychology, where the cognitive performance of brain-damaged patients is examined to determine the computational function of the damaged region. Unfortunately, because of the poorly-understood nature of trauma-induced plasticity, the logic cannot be so straightforward. Similar problems underlie work on developmental disorders and the emerging field of "cognitive genetics", in which the consequences of neural self-organization are frequently neglected (http://www.psychology.nottingham.ac.uk/staff/gs/Publications_files/TiCS_Final.pdf).

Difference # 10: Brains have bodies
This is not as trivial as it might seem: it turns out that the brain takes surprising advantage of the fact that it has a body at its disposal. For example, despite your intuitive feeling that you could close your eyes and know the locations of objects around you, a series of experiments in the field of change blindness (http://viscog.beckman.uiuc.edu/djs_lab/demos.html) has shown that our visual memories are actually quite sparse. In this case, the brain is "offloading" its memory requirements to the environment in which it exists: why bother remembering the location of objects when a quick glance will suffice? A surprising set of experiments by Jeremy Wolfe (http://search.bwh.harvard.edu/new/pubs/targetsearch.pdf) has shown that even after being asked hundreds of times which simple geometrical shapes are displayed on a computer screen, human subjects continue to answer those questions by gaze rather than rote memory. A wide variety of evidence from other domains suggests that we are only beginning to understand the importance of embodiment in information processing.

Bonus Difference: The brain is much, much bigger than any [current] computer
Accurate biological models of the brain would have to include some 225,000,000,000,000,000 (225 million billion) interactions between cell types, neurotransmitters, neuromodulators, axonal branches and dendritic spines, and that doesn't include the influences of dendritic geometry, or the approximately 1 trillion glial cells which may or may not be important for neural information processing. Because the brain is nonlinear, and because it is so much larger than all current computers, it seems likely that it functions in a completely different fashion. (See here (http://develintel.blogspot.com/2006/01/complexity-and-biologically-accurate_04.html) for more on this.) The brain-computer metaphor obscures this important, though perhaps obvious, difference in raw computational power.

03-31-2008, 04:30 PM
I disagree with most of what is said here. I'll try to be brief, though. The main point is that he doesn't appreciate the nature of programming.

#1 - Analog vs Digital. Digital simulations can exceed or match the performance of analog systems to an arbitrary degree of precision, allowing more efficient resource usage, and more flexible information processing.
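A quick sketch of the "arbitrary precision" claim, using made-up bit depths and a sine wave as a stand-in for an analog signal: each extra bit of quantization roughly halves the worst-case error between the analog value and its digital representation.

```python
import math

# Sketch: quantizing an "analog" signal (a sine wave) with more bits
# shrinks the worst-case error. Bit depths and the signal are illustrative.

def quantize(x, bits):
    """Round x in [-1, 1] to the nearest of 2**bits evenly spaced levels."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)
    return round((x + 1.0) / step) * step - 1.0

signal = [math.sin(2 * math.pi * t / 100) for t in range(100)]

def max_error(bits):
    return max(abs(s - quantize(s, bits)) for s in signal)

# More bits, smaller worst-case quantization error.
assert max_error(8) < max_error(4) < max_error(2)
```

In that sense a digital system can be made to approximate an analog one as closely as the task requires, which is the commenter's point.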

#2 - Content addressable memory. Write a program that mimics the functionality of content addressable memory. Voila, et toi, without much ado over nothing. Optimize and debug for better performance.
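One concrete way to "write a program that mimics content addressable memory" is a Hopfield network, sketched below with a toy +1/-1 pattern (this is a minimal illustration, not an optimized implementation): a corrupted cue settles back onto the stored memory purely by content.

```python
# Minimal Hopfield network - a classic content-addressable memory.
# The stored pattern is a toy vector; weights use the simple Hebbian rule.

def train(patterns):
    """Hebbian weights: w[i][j] = sum over patterns of p[i]*p[j], no self-links."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, steps=5):
    """Repeatedly pull each unit toward the sign of its weighted input."""
    s = list(cue)
    for _ in range(steps):
        for i in range(len(s)):
            total = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if total >= 0 else -1
    return s

stored = [1, 1, -1, -1, 1, -1]
w = train([stored])
noisy = [1, -1, -1, -1, 1, -1]        # one element flipped
assert recall(w, noisy) == stored      # content alone retrieves the memory
```

There is no address anywhere in the lookup: the corrupted pattern itself is the query, which is exactly the behavior the article attributes to the brain.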

#3 - Massively Parallel vs single core, etc. Again, it is a matter of programming, and starting in 2009, we will (supposedly) have access to 2-3 teraflop massively parallel processors designed for home use. It's a matter of scale and engineering, not an inherent difference.

#4 - No system clock. Ok, check out asynchronous processors, and the various x86 revisions being designed specifically for asynchronous processors. Again, engineering and software, not a fundamental difference.

#5 - RAM vs Short term Memory. Hardware, Engineering, and clever programming can mimic any behavior we are able to describe.

#6 - No hardware / Software distinction is possible. True, but come on... a piece of software designed to mimic the brain in its entirety would of necessity include simulations of physical processes. Imagine the brain (hardware) being the chipset xBrain, and the software being a program written in C++ for xBrain. The physical architecture determines the instruction set. More fundamental processes determine the xBrain layout, which is unique to each brain, but there are behaviors and activity that we can monitor, recreate, and optimize. The brain is reverse engineerable.

#7 - Synapses vs Logic gates. Bleh, almost not worth answering. I'll refer back to my answer to #1... a digital computer can recreate an analog construct to an arbitrary degree of precision.

#8 - #6 rehashed... hardware and software are concurrent. This can be virtualized, programmed, etc. Also, it's more efficient to be able to easily replace hardware.

#9 - Self Organizing system... again, who is this guy? I can program a self organizing system. Ta-Da! It's called cellular automata, and if you've ever played the game Life, then you've done it too. It's a challenge for programmers, not a fundamental obstacle or difference between computers and brains.
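Conway's Game of Life, which the commenter invokes, really does fit in a few lines. A minimal sketch, using a set of live cell coordinates on an unbounded grid:

```python
# Conway's Game of Life: a self-organizing cellular automaton.
# Live cells are stored as a set of (x, y) coordinates.

from itertools import product

def step(live):
    """One generation: standard birth-on-3, survive-on-2-or-3 rules."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates with period 2: it returns to itself after two steps.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker
```

Simple local rules, no central controller, yet persistent self-organizing structures emerge: the commenter's point in executable form.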

#10 - Brains have bodies. The only one I agree with. In order to have real intelligence, Brains must have Bodies. A real intelligence must have a sensorium that interacts with the real world. To understand this at a very basic level, research Helen Keller. Her humanity and intelligence are undeniable, yet her sensorium was severely limited. Perhaps there are clues to creating an appropriate sensorium for computer based intelligence in her experiences?

At any rate, I don't want to bash the guy. It's fairly well thought out, but it's closed-minded, and misses the crux of the entire issue. The difference between a computer and a brain? There is no fundamental difference. The brain is what we'd like computers to be. It's just a matter of time before we can successfully replicate the brain and its functions. Human engineering is going to take us to places we can't even conceive of.


04-01-2008, 06:38 PM
I am confused about what this person's point is. The brain is not like a computer... and? So what!

Look at history. Mankind has rarely made things that closely resemble nature, even when they perform the same functions.

For instance, a cheetah travels 60mph. We certainly can't pump our legs that fast! Oh, but we made this thing called the wheel. And with our wheeled contraptions, we can achieve the same 60mph that a cheetah can!

When mankind flies, it is not by flapping wings! Instead, we fly by forcing a velocity of air over an airfoil, creating lift due to pressure changes above and below the wing.

When playing chess, man 'knows' how to play based on experience, or sees lines of play, and so on. Computers use alpha-beta pruning to search the tree of possible moves, looking for an endgame advantage.
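The alpha-beta pruning mentioned above can be sketched on a toy game tree (the tree here is a made-up example, not a real chess position): it returns the same answer as exhaustive minimax while skipping branches that cannot change the outcome.

```python
# Alpha-beta pruning on a toy game tree. Nested lists are internal nodes,
# numbers are leaf evaluations; the tree values are arbitrary examples.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):          # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                   # opponent won't allow this line
                break                           # prune remaining siblings
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Two-ply tree: the maximizer picks a branch, the minimizer replies.
tree = [[3, 5], [2, 9], [0, 7]]
assert alphabeta(tree, True) == 3
```

Nothing about this resembles how a human grandmaster thinks, which is precisely the poster's point: a decidedly non-natural method that still solves the problem.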

What I am saying is: so what if the computer is not like a brain? We have been using 'non-natural' solutions to solve problems for many, many years, and I don't see that stopping.

04-07-2008, 11:37 AM
I agree with jrowe47's critique and conclusion. There is a theory in mathematics and computer science of "computability": the study of what types of problems can be solved by computation and what types of machines are needed to solve them. The short answer is that there is a class of problems that can be solved by mechanical computers, and that the architecture and structure of the machine doesn't matter. A modern computer (with an arbitrary amount of memory), a Turing machine, or a person with pencil and paper -- all are exactly equally powerful at solving computable problems (although not equally fast, of course).
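The architecture-independence point can be made concrete: an ordinary program can simulate a Turing machine directly. A toy sketch, with a made-up machine that flips every bit on its tape:

```python
# Toy Turing machine simulator: one architecture emulating another.
# The "flip" machine below is an invented example.

def run_tm(tape, rules, state="start", blank="_", max_steps=1000):
    """rules maps (state, symbol) -> (new_state, write_symbol, move)."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

flip = {
    ("start", "0"): ("start", "1", "R"),   # flip 0 -> 1, move right
    ("start", "1"): ("start", "0", "R"),   # flip 1 -> 0, move right
    ("start", "_"): ("halt", "_", "R"),    # blank tape cell: halt
}
assert run_tm("1011", flip) == "0100"
```

Anything computable by the simulated machine is computable by the simulating one, which is why the physical substrate does not change what can, in principle, be computed.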

To be different in kind the human brain would have to be able to solve non-computable problems. While some hold out hope for this, there is absolutely no evidence for it. We can solve some specific problems from the non-computable class, but faced with the general case we're as blinkered as any calculator.

That said, there are obviously operational principles of biological brains that we don't understand. Surely a modern laptop could simulate an ant brain in real time, and yet we struggle to build robots with ant-level behaviors. There's a lot we can learn from biology about how our minds work at a fundamental level, and that's partly what's so exciting about robotics right now.

04-07-2008, 10:11 PM
I think the problem is that we can't accurately simulate reality. I did a personal study a while ago on the links between non-computable problems and intuition, and I came to the conclusion that the human sensorium is capable of guiding computation along a fuzzy pathway. Which is a fancy way of saying it provides a whole bunch of random avenues of thought, while making intelligent guesses as to which avenues might provide a more efficient route to a solution.

Anything you can describe, you can program. The human language is the vehicle of the consensus reality. By describing a problem, you have provided the basis for a solution. It is those problems which are indescribable, or nonsensical in common language, that we have problems with. Such as... can an immovable object be moved by an unstoppable force? The language describes the problem accurately, the words and grammar are correct, but the nature of the terms used creates an insoluble problem.

The fundamental issue challenging AI right now is not one of hardware, or of any discrete differences between brains and computers, but of engineering to the degree of providing accurate sensoriums and behavior fundamentals to particular applications. As soon as we can describe things, such as sentience and intelligence, then we can program them. The challenge is in describing them relative to the vehicle of the sensorium... an intelligent human might be difficult to describe, but how would you describe the equivalent level of intelligence or sentience in a hexapod robot?

04-08-2008, 01:44 AM
We can simulate reality if we can create an abstraction for it. That's what you mean by describe, I think. Even chaotic systems can be simulated if we only care about plausible solutions, not exact solutions. The question is, is intelligence computable? I think it is, but that's speculation at this point because we have no abstract notion of intelligence.

Consider a typical robotics problem: "Take a 3DOF hexapod, starting location, and touch and accelerometer inputs. Given a desired final location, compute a series of joint angles relative to sensor inputs that will put the hexapod at the desired location." This is clearly computable. There is a way to move the legs, reacting to what the sensors find along the way, that will get the robot to the final location. (Or at least there's a way to prove that there's an impassable obstacle.) And yet no one has yet written this program.

But consider the problem that we're all solving now: "Read this forum thread. Compose a short reply which summarizes your views on the subject." Is this problem computable? We don't know. I suspect it is; all the solutions will be unique, but then there is also no single set of joint angles that will get a hexapod from A to B. And yet until someone can actually solve it, we're shooting in the dark.