Intelligence—do we know what it is? Of course, “it’s what intelligence
tests measure.” They do measure something. That’s a fact demonstrated
by the high correlations they exhibit between different questions
within the same test, and between tests given to the same individual.
People know intelligence when they see it—most people choose a spouse
with an IQ similar to their own. This fact may be how intelligence
evolved in the first place. A positive feedback cycle for intelligence
might involve individuals striving to get the most intelligent mates
they could. The least intelligent, being less sought after as mates,
would produce fewer offspring. The survival benefits of intelligence
might even be secondary to the effects of such a positive feedback cycle.
OK, it evolved. We can test it and we can recognize it, so why can’t
we define it better? Well, that’s what “Numinations” are all about.
Here we sift thoughts and formulate fresh patterns. For example,
intelligence is quickness. It has been demonstrated that a word or
picture flashed on a screen is identified more often by those subjects
with greater intelligence. In other words, the ability to identify
something seen briefly correlates with intelligence. This is quickness
in the simplest sense.
Intelligence is a talent for abstract reasoning. It is a talent for
problem solving. It has much to do with a facility for language, and a
good memory. It is not necessarily a talent for decision making or
goal reaching. It has been suggested that there are many types of
intelligence. I hold to the “Spearman’s g” theory that intelligence is
a single, albeit perhaps large and nebulous, thing made up of a number
of components: a whole that is more than the sum of its parts. We may
test the components separately, but I would suggest that the total
score is more important than a profile of the parts.
One exceptional component, in an otherwise dysfunctional team of
components, results in an idiot savant. A well-functioning team may
not even need a superstar member to compete at the championship level.
As for what the components are and how they can best function as a
team, human patterns of intelligence may offer only preliminary clues.
Our brains aren’t the be-all and end-all of intelligence—the evolution
of intelligence could well have a longer road ahead of it than the one
it has traveled to reach us today.
Think of intelligence as an engineering problem. There are two ways to
approach its design. One is to duplicate some aspect of human behavior
that is generally agreed to demonstrate intelligence. The other is to
reverse engineer the brain and duplicate the functions one discovers in
it. There are scientists and engineers at work right now on both of
these approaches. Let’s take a closer look at each.
The first approach is essentially the “Turing Test” approach. Today’s
Turing Test is not as he originally proposed it—he suggested
discriminating between a man and a woman at remote teletypes. In
today’s version, there are two computers on a table in a room. One
computer has a sophisticated program in it, the other has a simple link
to a human in another room. Both respond similarly. Both can carry on
a conversation with you. Can you tell which computer is “talking” for
itself, and which is connected to a human? How much would you be
willing to bet? When an experienced judge can get it right only about
half the time, we would say the computer program has passed the Turing
Test.
Clearly, the program should evince understanding, be able to answer
questions, work out problems, and exhibit a great deal of real world
knowledge. It might also have to feign some human failings such as
poor typing, less than perfect spelling and grammar, and so forth.
The problem here is that the program may have to exhibit more
personality than intelligence to pass the test. Perhaps a better
scenario would simply be a panel of judges who each conduct a one-hour
conversation with six computers. Five of the computers would be
connected to humans with measured IQs spanning a range from 100
to 140. The sixth computer would contain the “intelligent” program.
The judges would be asked to rank the six “contestants” in order of
their intelligence as well as to try to “spot the computer.” If the
computer consistently ranked above the lowest third and escaped
detection better than a third of the time, it would have to be credited
with a fair degree of intelligence.
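The pass criteria just described can be sketched in code. The judge data below is hypothetical, used only to show how the two thresholds from the scenario combine:

```python
# Sketch of the pass criteria for the modified Turing Test described
# above. The program must rank above the lowest third (i.e. never rank
# 5th or 6th of six) and escape detection more than a third of the time.

def passes_modified_test(rankings, detections):
    """rankings: the program's rank from each judge (1 = most intelligent,
    6 = least); detections: True where a judge correctly spotted it."""
    above_lowest_third = all(rank <= 4 for rank in rankings)
    escape_rate = 1 - sum(detections) / len(detections)
    return above_lowest_third and escape_rate > 1 / 3

# Hypothetical panel of six judges:
ranks = [3, 4, 2, 4, 3, 4]                           # never in the bottom third
spotted = [True, False, False, True, False, False]   # detected 2 times of 6
print(passes_modified_test(ranks, spotted))          # → True
```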
The other approach, reverse engineering, reminds me of the old films of
people trying to fly by strapping on wings and flapping really hard.
It was only recently that human-powered flight was achieved. The
Gossamer Condor looked nothing like any bird. It was a high-tech
marvel of materials and design, and it owed nothing to the way birds
fly. It’s too early to say if this analogy parallels the efforts of
those who are attempting to reverse engineer the human brain, but it’s
tempting to think so.
Certainly something can be learned by a study of the brain. For one
thing, uploading and downloading programs and information does not look
feasible with nature’s design. A different design could repair this
limitation. The brain is not intelligent at birth. It begins with no
memory and few abilities, but it has a fantastic ability to learn, and
it has a set of instincts to guide its learning and behavior. These
are surely lessons to be considered. Another lesson is the magnitude
of the brain’s complexity. The brain consists of some hundred billion
individual neurons. Each neuron is a single cell. The complexity of a
single cell is on a par with that of some of our most complex machines,
but many specifics of the cell are still shrouded in mystery. A cell
is certainly many, many times the complexity of a single memory
location in a computer. We could build computers with a hundred
billion memory locations, but not at present with a hundred billion
processors.
Each of the brain’s processing units (neurons) is attached to as many
as several thousand other processing units. The number of connections
in the human brain is very large, on the order of a hundred trillion.
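The arithmetic behind that connection count is quick to check, using the figures given above (the per-neuron connection count is taken at the low end of "several thousand"):

```python
# Back-of-envelope connection count from the figures in the text.
neurons = 100e9            # "some hundred billion individual neurons"
connections_each = 1000    # "as many as several thousand"; use 1,000 as a floor
total = neurons * connections_each
print(f"{total:.0e}")      # → 1e+14, i.e. about a hundred trillion
```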
brain also appears to have three or more types of memory mechanism. At
least one of these involves the brain’s ability to “wire” itself.
Another probably involves the building of complex molecules to encode
information. The first of these involves areas of the brain devoted to
certain special functions. The second allows memories to be spread
out, so that a single memory is not located in a single cell or a few
connections. There appear to be one or more other memory mechanisms as
well.
The brain is a vast parallel processor. Modern computers are serial
processors for the most part. The brain’s processing speeds are on the
order of several milliseconds per operation; a modern computer operates
a million times faster. The brain is “hardwired” even
though it has considerable ability to “rewire” itself. A computer is
driven by software. Only a small part of its logic is hardwired.
Computer software can be updated by the computer itself.
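The "million times" figure follows directly from the two speeds. The exact operation times below are illustrative assumptions consistent with the text:

```python
# Assumed figures: a neuron takes "several milliseconds" per operation,
# while a modern processor takes on the order of nanoseconds.
neuron_op = 5e-3           # seconds per neural operation (assumption)
cpu_op = 5e-9              # seconds per computer operation (assumption)
print(neuron_op / cpu_op)  # roughly a million
```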
The information it takes to build a cell is on the same order as the
information contained in a modern computer operating system. One copy
of this logic is all that would be needed for a computer simulation of
any number of cells. The functional contribution of a cell in the
brain to the brain’s overall intelligence could probably be represented
by only a few memory locations in a computer, perhaps with a larger set
of locations to be shared by many such cell simulators. A living brain
needs a great deal of redundancy to survive the rigors of life. It
also has many other duties that don’t contribute to its “intelligence.”
When all of these factors are considered, it is plausible that a modern
computer has the potential complexity of the human brain, and therefore
the potential, given the right software, to exhibit intelligence. If
this is not quite true, it will certainly be true one day very soon.
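A back-of-envelope version of this argument can be run with assumed numbers for "a few memory locations" per simulated cell; the specific figures are illustrative, not from the text:

```python
# Rough memory requirement for simulating the brain's "functional
# contribution" cell by cell, per the argument above. All figures here
# are assumptions for illustration.
neurons = 100e9            # a hundred billion cells
words_per_neuron = 4       # "only a few memory locations" each
bytes_per_word = 8
total_bytes = neurons * words_per_neuron * bytes_per_word
print(f"{total_bytes / 1e12:.1f} TB")  # → 3.2 TB
```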
Now the question is, what does it take to develop the right software?
The concept of software is closely related to the concept of
information itself. We use these terms all the time. Familiarity has
bred a certain degree of contempt. It will turn out, however, that
these are exquisitely deep and complex subjects worthy of much more
exploration (and “Numination”). To see how the complexity of software
can increase with its size, consider the busy beaver function
discovered by Tibor Rado circa 1962 (citations are hard to find; see
The Age of Intelligent Machines, by Raymond Kurzweil).
Essentially, a “busy beaver” is a computer program that prints out the
longest possible sequence of “1” digits, and then stops. It doesn’t
take a very complex program to print forever; that can be done by a
loop of instructions that never terminates. It is much harder to get a
program to produce the longest possible printout, and then stop.
The busy beaver function is the mapping between the number of
instructions in a program and the number of ones it can print before
stopping. This is a very simple measure of how complex a program can be.
The function is interesting because of how fast it increases. It can
also be shown that the values for this function cannot be computed. In
fact, it can be further shown that whether a given busy beaver program
(or any program, for that matter) will ever stop cannot be computed.
The only thing we can do with this function is to lay down a set of
rules for a machine of some type, and the instructions used to program
it, then begin solving the function one number at a time. This has
been done for a theoretical computer called a Turing Machine with up to
eight states (a state in a Turing Machine is like a line of code in an
ordinary computer). The busy beaver values for the first five numbers
aren’t even mentioned by Kurzweil, who reports that the busy beaver of six
is only 35. The busy beaver of seven jumps to 22,961 and the busy
beaver of eight is 1 followed by 43 zeros. The busy beaver of nine has
not even been estimated as far as I know, and well before the busy
beaver of 100, the intelligence of the human brain is probably
incapable of even making an estimate.
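To make the busy beaver concrete, here is a short simulator running the known 2-state, 2-symbol champion, the machine that prints the most 1s (four, in six steps) of any 2-state machine before halting:

```python
# The 2-state, 2-symbol busy beaver champion. Each rule maps
# (state, symbol read) to (symbol to write, head move, next state);
# "H" is the halt state.
RULES = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "H"),
}

def run(rules, start="A"):
    """Simulate a Turing machine on a blank tape; return (ones, steps)."""
    tape, pos, state, steps = {}, 0, start, 0
    while state != "H":
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return sum(tape.values()), steps

print(run(RULES))  # → (4, 6): four 1s printed in six steps
```

Note that `run` only terminates because this particular machine halts; as the text says, no general procedure can decide in advance whether an arbitrary machine will.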
The point of this is to convey the complexity possible even in short
computer programs. Couple this with a result proved by Turing and
mentioned by Kurzweil: A Turing Machine can model any machine, where a
machine is regarded as a defined process that follows natural laws.
Then, if we regard the brain as something that could, in principle, be
described, and that follows natural laws, we must conclude that the
brain could be modeled by a Turing Machine, or by a common computer
with sufficient memory. Such models never consider processing time,
only computational power. But, given the time to evolve software, and
hardware not too much faster than today’s, a computer should be able to
pass the Turing Test. If it did so, who’s to say that it would not be
intelligent in its own right? The alternative is that the brain
cannot, in principle, be described or defined, or that it does not
follow natural laws. A true “Numinator” would find this unacceptable.
Although protected by Copyright, the author grants
permission to reprint this article in a non-profit publication, or copy
it over the Internet, with its Title, Copyright, and this notice.
Notification to the author and courtesy copies of the publication would
be appreciated. For other publication, please contact the