Human Brain Functionality

There are certainly many thousands of researchers investigating every possible aspect of our brains and how they work. However, it seems that the insights of different researchers aren't always getting to others who might benefit from them! This essay briefly presents a number of areas that seem worthy of investigation.

Artificial Intelligence

Computer people are roaring along, making processors and computers faster and faster. One of the future hopes is that an eventual processor will be fast enough to accomplish the equivalent of human thought and logic.

But other researchers have firmly established that our brains don't work like that! Specifically, computers use metallic circuits in which the electric signals travel at nearly the speed of light, 186,000 miles per second. In biological nervous systems, the electrical signals must cross "synapses" between nerve cells, where the signal is carried by a chemical action. This, together with the myelin-sheathed neurons that carry such signals, means that biological systems regularly transfer electric signals at around 200 miles per hour, which is about 0.06 miles per second. Two hundred miles per hour (around 300 feet per second) is plenty fast for biological systems to work adequately, so we have good reaction times to danger and all the rest. But signal transfer through the conductors in computers is around 3,000,000 times as fast as in biological systems such as our nervous systems and brains.

I recently learned of someone who claimed that a biological system in the human brain commonly handles 100,000,000 bits of information per second. But no proof was presented for such an outrageous claim! It is unimaginable that any biological signal path could process anywhere near that many bits of data per second.

This limitation of signal speed in biological systems completely eliminates any possibility of computer-like speeds of processing.   There is no way around that fact, because of the comparative slowness of signal movements in living beings.   Therefore, a "single processor" paradigm for the human brain is apparently impossible.

Instead, it seems that we each must have a multitude of relatively independent "slow" processors that each accomplishes fairly narrow goals. The researchers pursuing this direction call them "cognitive modules". For example, it appears that quite a few separate organic "processors" receive and process the various sense stimuli that the body is exposed to. To a great extent, they seem to be unaware of each other's activity. In computer jargon, each would be called a dedicated-purpose processor, "programmed" to accomplish a single task in the brain. But a separate "central" biological computer (now called an interpreter, 2006) continuously monitors the processed results of these many separate computers, and then makes system-wide responses as a result. The dedicated processors do not "bother" the central processor under normal conditions: when nothing out of the ordinary has occurred above a certain threshold of sensation, they send it no output. The dedicated processors (cognitive modules) only send output signals to the central computer when some sensory input or condition changes beyond a threshold level. This keeps the central processor from being overloaded by signal traffic, leaving it generally free to concentrate its efforts on the very limited input it actually needs to react to.
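
The threshold behavior described above can be sketched in a few lines of code. This is only an illustrative model (the class and function names are invented, and the numbers are arbitrary): each "cognitive module" polls its own sensor and emits a message only when the reading changes by more than its threshold, so the "interpreter" sees almost nothing under normal conditions.

```python
# Illustrative sketch of a "cognitive module" (all names hypothetical):
# a dedicated processor reports to the central interpreter only when
# its sensor reading changes by more than a set threshold.

class CognitiveModule:
    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold
        self.last_reading = None

    def poll(self, reading):
        """Return a message for the interpreter only on a significant change."""
        changed = (self.last_reading is not None
                   and abs(reading - self.last_reading) > self.threshold)
        msg = (self.name, reading) if changed else None
        self.last_reading = reading
        return msg

def interpreter(messages):
    """The central processor sees only the few non-empty messages."""
    return [m for m in messages if m is not None]

touch = CognitiveModule("left-foot touch", threshold=5.0)
quiet = [touch.poll(r) for r in (20.0, 20.1, 19.9)]   # steady input: no reports
alert = touch.poll(80.0)                              # sudden change: one report
print(interpreter(quiet + [alert]))                   # only the spike gets through
```

The interpreter never has to inspect the steady readings at all; its entire workload is the single changed value.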

As a hypothetical example, imagine a dedicated processor that has the exclusive duty of monitoring the signals from nerve endings in your left foot. As long as the condition of all those nerve endings remains relatively constant, they do not send signals to the brain. Even though that dedicated processor constantly "polls" all of the incoming nerves for signals, nothing "interesting" is happening right now. But now a rose thorn sticks into that foot. A signal is sent to the brain, which indicates the intensity of the pressure/pain by the frequency of its pulses. The dedicated processor recognizes which nerve ending(s) are sending the signals, and it sends an output signal to the central processor. That signal includes information regarding the intensity level of the pain. The central processor immediately gives attention to that input and, after some processing, sends an output signal to the leg muscles to pull the leg away from that rose bush. It all can happen very rapidly (not computer-rapidly, but in milliseconds) even though each of the biological signal processors involved is only capable of functioning at rather slow rates.
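
The thorn-in-the-foot sequence can be sketched as two small functions. Everything here is a hypothetical model, not physiology: pain intensity is represented as a firing frequency in Hz, the dedicated processor tags which nerve ending fired and forwards only signals worth reporting, and the central processor turns an urgent report into a motor command. The cutoff values are invented.

```python
# Hypothetical sketch of the thorn-in-the-foot sequence: intensity is
# carried as a firing frequency, the dedicated processor identifies the
# source, and the central processor decides on a response.

REFLEX_THRESHOLD_HZ = 50.0   # assumed cutoff for an urgent motor response

def foot_processor(nerve_id, firing_hz):
    """Dedicated processor: forward only signals worth reporting."""
    if firing_hz < 5.0:      # background activity, stays local
        return None
    return {"source": nerve_id, "intensity_hz": firing_hz}

def central_processor(message):
    """Central processor: turn an urgent report into a motor command."""
    if message is None:
        return "no action"
    if message["intensity_hz"] >= REFLEX_THRESHOLD_HZ:
        return f"withdraw leg (signal from {message['source']})"
    return "note sensation"

print(central_processor(foot_processor("left-foot-n42", 120.0)))  # urgent: withdraw
print(central_processor(foot_processor("left-foot-n42", 1.0)))    # background: nothing
```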

In a sense, for Artificial Intelligence (AI), we have had such technology within our grasp for at least ten years already! Imagine a room full of (obsolete, slow) 386 or even 286 computers, maybe 200 of them. And then one more old 386 that has NO external sensors but is ONLY attentive to the outputs of the others. Even the wiring to accomplish this is pretty mundane. One of the computers might be connected to tens of thousands of pressure sensors on the west wall of the room, and nothing else. It continuously "polls" all of its sensors, but as long as nothing touches its wall, it doesn't really do much. More importantly, it doesn't annoy the CENTRAL computer with anything to have to react to.

An entirely separate computer might be monitoring that same wall for temperature variations. Other computers would be monitoring the other walls and floor and ceiling in similar ways. Another computer could be programmed to sense aromatic (smell) sensations, and another taste. Several might be involved with sight and light and color stimuli.

Let's say a fly lands somewhere on that west wall of the room. Suddenly, the very sensitive pressure sensors tell that specific computer that something has applied a minor localized pressure there. Maybe the heat sensor even tells ITS computer about a tiny amount of heat being radiated from the fly's body. The two very specialized computers each do their own analysis. Each compares such inputs to some pre-determined threshold values, to decide whether the sensation is actually large enough to care about. The one soon sends a small amount of information to the central computer that a very localized pressure of a certain fraction of an ounce exists at a specific location on the wall. The central computer might also receive information from the temperature computer about the slight temperature rise in the same location. The central computer then combines the two, checks with yet more (memory) computers that have stored records of previous archived experiences, and quickly concludes that it is a fly at that location.
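
The fusion step can be sketched as a small lookup: two specialized modules report independent events at the same wall location, and the central computer matches the combined pattern against stored experience. The "memory" here is just a dictionary, and all of the event names and categories are invented for illustration.

```python
# Sketch of the fly-on-the-wall fusion step: the central computer combines
# two module reports and consults "memory" computers (a plain dictionary
# here) to identify the stimulus. All names and categories are invented.

MEMORY = {
    ("tiny pressure", "slight warmth"): "fly",
    ("broad pressure", "no warmth"): "ladder leaning on wall",
}

def fuse(pressure_event, heat_event):
    """Combine two module reports and look up past experience."""
    if pressure_event["location"] != heat_event["location"]:
        return "unrelated events"
    key = (pressure_event["kind"], heat_event["kind"])
    return MEMORY.get(key, "unknown stimulus")

pressure = {"location": (3.2, 1.8), "kind": "tiny pressure"}
heat = {"location": (3.2, 1.8), "kind": "slight warmth"}
print(fuse(pressure, heat))   # the central computer concludes: fly
```

Neither specialized module needs to know what a fly is; only the central computer, with access to memory, draws the conclusion.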

No ultra-fast computer is necessary for this sequence, which closely parallels the rather slow processing speed of our organic brain computers! But it IS important that a BUNCH of virtually independent computers are each doing their work of sensing outside stimuli.

There could be hundreds of thousands of sensors covering the walls of the room, but as long as no sensor reading had a DIFFERENTIAL greater than the threshold values, the central processor would not be bothered. This is akin to the very large numbers of nerve endings in all of the skin of our bodies, where we are generally unaware of any sensations except when something changes.

There is a notable limitation in this approach. Say that a million flies were let into the room, and they kept landing on various sensors on the walls and then moving on. Each dedicated processor would be very busy, and each would be sending large numbers of output signals to the central processor. In this situation, the central processor would become completely overloaded. One possibility is that it could go into something like biological "shock" where it just stopped even trying to process ANY input signals. If the overwhelming quantities of input signals continued beyond a minute or two, the central processor (or all the dedicated processors) could choose a different reaction. The threshold value of signal strength could be increased. This would generally reduce the number of signals sent to the central processor or the number accepted by it. Essentially, this would result in the central processor again being able to handle its duty load, but it would also result in a sort of "numbing" of the sensory inputs due to the higher thresholds.
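
The overload response described above can be sketched as a module whose threshold adapts to its traffic. This is only a toy model with invented numbers: when reports arrive faster than the central processor's capacity, the threshold is raised (the "numbing"), and when traffic falls, it slowly relaxes back toward normal sensitivity.

```python
# Toy model of the overload response: raise the reporting threshold when
# the central processor's capacity is exceeded ("numbing"), then slowly
# recover sensitivity when traffic drops. All numbers are invented.

class AdaptiveModule:
    def __init__(self, base_threshold, capacity_per_tick):
        self.base = base_threshold
        self.threshold = base_threshold
        self.capacity = capacity_per_tick

    def tick(self, stimuli):
        """Process one batch of stimuli; adapt the threshold to the load."""
        reports = [s for s in stimuli if s > self.threshold]
        if len(reports) > self.capacity:
            self.threshold *= 2.0        # numb the sense to cut the traffic
        elif self.threshold > self.base:
            self.threshold *= 0.9        # slowly recover sensitivity
        return reports[: self.capacity]  # the central processor's hard limit

module = AdaptiveModule(base_threshold=10.0, capacity_per_tick=3)
swarm = [12.0] * 1000                    # "a million flies", scaled down
module.tick(swarm)                       # overload: threshold doubles
print(module.threshold)                  # now 20.0
print(module.tick(swarm))                # [] -- the swarm is "numbed out"
```

After the swarm leaves, repeated quiet ticks would decay the threshold back toward its base value, restoring normal sensitivity.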

These arguments would possibly explain several biological activities, such as shock and the natural numbing that nearly always occurs a minute or so after an injury. Traditional straight-computer-like thinking cannot easily explain such things.


Since all of these computers are essentially identical, any of them could effectively perform any of the functions. The room could have a number of EXTRA computers available that are connected but not being used. If a specific computer should burn out or otherwise become defective, the CENTRAL control computer could choose to ignore all further information from that unit, then "train" and watch the outputs of one of the standby computers, which would take over the necessary functionality of the defective computer. Effectively, the faulty unit would be functionally replaced, even though it might still be active; it would simply be ignored.
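
The standby-replacement idea is essentially a hot-spare failover, and can be sketched as bookkeeping in the central computer. All of the names here are invented: the central computer keeps a pool of spares and a record of which module owns which sensor group, and promoting a spare is just reassigning the role while adding the faulty unit to an ignore list.

```python
# Sketch of the standby-processor idea (all names hypothetical): the
# central computer ignores a faulty module and promotes a spare into
# the same sensor-monitoring role.

class Central:
    def __init__(self, spares):
        self.spares = list(spares)
        self.assignments = {}      # sensor group -> module name
        self.ignored = set()

    def assign(self, group, module):
        self.assignments[group] = module

    def replace_faulty(self, module):
        """Ignore a faulty module and train a spare to take over its role."""
        self.ignored.add(module)           # its output is no longer trusted
        spare = self.spares.pop(0)         # promote the next standby unit
        for group, owner in self.assignments.items():
            if owner == module:
                self.assignments[group] = spare
        return spare

brain = Central(spares=["spare-1", "spare-2"])
brain.assign("west-wall pressure", "module-7")
brain.replace_faulty("module-7")
print(brain.assignments["west-wall pressure"])   # the spare now holds the role
print("module-7" in brain.ignored)               # the old unit is simply ignored
```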

Early Development

Consider a situation comparable to that of an early organic embryo of a creature or plant. A primitive arrangement like this might have the 200 functioning computers connected "in parallel", where none yet has a specific function and none yet has a leadership role. At some point, one of them acts in a creative role, directing another to focus on a specific function. The overall success of this, and its consequent value to the "organism", would likely be great enough to encourage more creative direction, and the group of identical computers would soon self-arrange into a diversity of single-purpose computers providing the "creative" CENTRAL computer with stimulus information. It would likely evolve in this way because the CENTRAL computer had early demonstrated both leadership and creativity in directing the compartmentalization of functions.

This reasoning implies that an assortment of identical processors would logically evolve into an organized structure. No necessary preference would need to exist regarding any specific processor as being the ultimate CENTRAL processor.

Another valuable insight seems available here. If the environment is sterile, with few external stimuli present, few of the computers/processors would be needed for adequate functioning, so most would be left as standby units for possible future need. A sterile environment would tend to inspire minimal brain activity. However, a complex environment, with large numbers and varieties of stimuli present, would likely inspire the use of a large number of such individual processors, both to handle the diversity of the environment and to keep the CENTRAL processor from being overwhelmed. A diverse and active environment would tend to inspire extensive brain development and activity. This effect has long been seen in children.

Using this analogy, it seems that unborn babies are developing the first few processors, including establishing which one will be the CENTRAL processor. Stimuli such as singing, reading aloud, and music might easily affect the rapidity of development of additional processors at that time. A newborn child is generally in an environment where overwhelming amounts of stimulus information are available to be processed, which would suggest that many additional dedicated processors would be assigned during that period. Again, the maximum diversity of stimuli in the environment would inspire the maximum assignment of additional stimulus processors.

It would seem that, for most people, by the age of six or so, the NUMBER of such dedicated independent processors might be pretty well established. From then on, learning would generally involve refinements in the precision and the processing methods of each of them. This might suggest that, for most people, basic intellectual capability might be set by about that age. However, there is no actual reason for this to be true. If a child (or adult) later deeply studied ANY specific field (whether playing the piano, studying a foreign language, playing football, playing arcade games, or virtually anything else), it seems likely that a brain could "activate" one or more of those "standby processors" for that purpose. They are there for that very purpose! In the event of a failure in some active processor, severe brain injury, or a change of environment where survival requires learning new techniques and abilities, any of those standby processors could be activated to improve the survival chances of that person (or animal).

This reasoning also explains the rather large brains that humans have, and why much of those brains seems to be relatively inactive. Those sections are there and available for possible future use as improvements to personal survival, given the uncertainty of what a future might hold. In each of the eventualities mentioned above, a person's chances of survival would greatly diminish without this "flexibility."

There appear to be many other aspects of human physiology that are explainable with this logical approach. In addition, an assortment of psychological curiosities might also find explanation with this approach.

Regarding Artificial Intelligence (AI) research, these thoughts seem to suggest that much of the current work may be aimed in a less promising direction. Nearly all researchers approach the problem as involving ever-faster single central processors, which then directly deal with all stimuli. That approach will certainly work, but it seems that it will have several intrinsic limitations, specifically sensory overload of that central processor and the complexity of logic necessary in the programming.

A "distributed brain" as implied here is more like a community of dedicated, slow, primitive processors working together. None of them, not even the CENTRAL processor, requires elaborate programming, and none requires excessive processing speed. Each can plod along at minimal speeds, essentially totally unaware of the entire Universe except for its specific sensor inputs.

In a biological sense, it could be no other way! A single, ultra-complex brain could become non-functional due to even minor injuries or illnesses, and the person or animal would then have grave survival problems. A distributed processing approach would far better ensure the survival of that person or animal, even with occasional degradation of one or more of the separate processor units. Considering the special importance of the CENTRAL processor in the survival equation, it seems logical that a second and even a third redundant processor would be active for that functionality. In the same vein, those two or three separate processors are likely to be physically separated, such that severe damage to one part of the skull might still permit the necessary survival processing to continue in a redundant CENTRAL processor in a less damaged area.

Again, regarding AI, I can imagine dedicating 100 of the 200 active processors to various areas of visual stimuli. A few of those could ONLY be alert to far peripheral vision. As long as nothing unexpected occurred in that visual area, that processor would not send much information to the central processor. This would permit a "general awareness" of that peripheral region without requiring the CENTRAL processor to use up significant time on it. In the event that something unexpected is sensed there, an appropriate message is created by that processor and sent to the CENTRAL processor, so it can decide what response is appropriate. Many of the processors associated with visual input would be specifically occupied with areas inside the narrow zone of the eyes' central focus of attention.
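
The division of visual duty can be sketched by giving each region's module a change threshold that grows with distance from the center of attention: foveal regions report small changes, far-peripheral regions only marked ones. The region count and threshold formula below are arbitrary choices for illustration.

```python
# Sketch of dividing visual duty among many modules: central regions get
# low change thresholds (fine detail), far-peripheral regions get high
# ones. The threshold formula is an arbitrary illustration.

def make_visual_modules(n_regions):
    """Assign a change threshold to each region by eccentricity (0 = center)."""
    modules = {}
    for region in range(n_regions):
        eccentricity = region / (n_regions - 1)       # 0.0 center .. 1.0 edge
        modules[region] = 1.0 + 9.0 * eccentricity    # thresholds 1 .. 10
    return modules

def report(modules, changes):
    """Only changes exceeding a region's threshold reach the central processor."""
    return {r: c for r, c in changes.items() if c > modules[r]}

modules = make_visual_modules(100)
changes = {0: 2.0, 99: 2.0}         # the same small change, center vs far edge
print(report(modules, changes))     # only the central region bothers to report
```

The far-peripheral module still provides "general awareness": a large enough change there (say, 15.0) would exceed even its high threshold and be reported.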

This presentation was first placed on the Internet in March 2000.

C Johnson, Theoretical Physicist, Physics Degree from Univ of Chicago