Psychologists and others have been trying to define intelligence for decades. Definitions can be the bane of true understanding.
As in many fields, they get tangled up in semantics. Words are not something absolute. They are imprecise, inexact, and can have subtle or not-so-subtle variations in meaning, interpretation, intent, and use. We don’t have a universally agreed-upon, clear vocabulary.
They’ve come up with tests that supposedly measure what they call intelligence. The results may indicate how well a person is likely to perform in the modern world, but are they really a good measure of whatever intelligence is? Without a clear understanding of what intelligence is, how can anyone be sure? The old saying that “intelligence is what an intelligence test measures” still seems valid today. The tests are inaccurate and inappropriate for the larger picture.
Although most of these people don’t specify it, they are limiting themselves to human intelligence. Intelligence is a much broader topic than just humans. Intelligence can be species-specific. There could be so-called intelligence scales for rats, for horses, and for other animals, which would be different from humans. Is there a form of plant intelligence? And don’t forget the Universal Intelligence.
How can you define something in scientific, precise, or valid terms when you don’t really know or understand what it is?
Some people try to tell us what God is. Their proclaimed definitions are only an inadequate hypothesis at best, something incorrect or totally delusional at worst.
We have the same problem here trying to define intelligence, but ….
As supposedly living, thinking persons at this point, we feel the need to ask:
What is intelligence?
Are we humans really intelligent?
Is artificial intelligence really a form of intelligence? Does it, or can it, exhibit intelligence?
Others have debated this in the past. We thought we’d throw our two dollars into the mix (adjusting the old two-cents saying for inflation, plus our added value).
Intelligence is more complex and difficult to analyze than probably most people realize.
What should be a minimum set of faculties to qualify as intelligence includes the abilities to:
1 – receive and output information
2 – store information in memory
3 – learn (= actively pursue and organize information)
4 – communicate
5 – make decisions
6 – adapt, i.e., react/respond in some systematic way to stimuli
7 – have some comprehension and understanding of the environment, and
8 – be intentionally and meaningfully creative.
In generating this list of faculties, out of “fairness” we tried to avoid those requiring an external action. Some computers could have the internal capabilities, but no outward expression.
Computers can perform at least a subset of most of these, but it all depends on the device’s programming and built-in capabilities.
A computer, as we see it for the foreseeable future, is dependent on its programming by humans. We’ve seen or heard of instances where some computers are now capable of self-programming, but a human is still necessary behind the scenes at some point. How many of these abilities can be self-programmed in at some future point is uncertain.
Now, again to be fair, we humans are also programmed. This occurs over time as we grow and mature. We are programmed how to think, how to react in certain situations, what to believe, etc. Still, in this regard there is a distinction between humans and other organisms on the one hand and machines on the other.
What about errors? Probably nothing as complex as “intelligent” computers created by humans will be error-free. But humans aren’t error-free either.
We’re trying to avoid a potential situation where we define intelligence as part of a biological living system, by default excluding the inanimate computer as having any intelligence. How do we do that?
Computers operate differently from humans or other living creatures. The mechanisms are clearly different. Does that mean that computers could not be intelligent? Is communication the same for a computer as it is for a living organism? Is learning the same or different?
We get into questions about the difference between data and information. Generally, data is defined as a raw number or basic fact. Information is considered something more meaningful or relevant. Computers can store data very efficiently; information is something more. While computers can be programmed to make inferences or compute probabilities, will they ever be able to do so independent of their human programmers or operators?
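The data-versus-information distinction above can be made concrete with a small sketch. This is our own illustrative example, not from the original post: the raw readings are hypothetical, and the “information” is simply a derived, meaningful summary.

```python
# A minimal sketch of the data-vs-information distinction (hypothetical values):
# raw readings are data; a derived, meaningful statement is closer to information.
from statistics import mean

# Data: raw temperature readings in degrees Celsius.
readings = [18.2, 18.9, 19.5, 20.1, 20.8, 21.4]

# Information: statements derived from the data that carry meaning.
avg = mean(readings)
trend = "warming" if readings[-1] > readings[0] else "cooling"
print(f"Average {avg:.1f} C; trend: {trend}")
```

The numbers alone say little; the computed average and trend are what a human (or, arguably, an intelligent machine) would find relevant.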
Probably the major distinction of the above list between human intelligence and computer-programmed capabilities is creativity.
There is something known by various names, but we’ll call it the “monkey hypothesis”. It claims that if you were to put a large number of monkeys to work typing indefinitely on keyboards, you would eventually get some classic work like Shakespeare’s Hamlet. Any major meaningful, credible work of fiction, or even non-fiction, would actually do.
We don’t know anyone who actually believes this monkey hypothesis is true, but does a failure to produce something meaningful mean that monkeys are not intelligent? No. It more closely indicates that they don’t understand or have the basics of our languages.
Would the situation be the same for a large number of computers spitting out characters forever? Provided with the characters of our languages, could one or more of them actually write something like Hamlet on their own? Call us Doubting Thomases.
Computers have been coded to write short advertising and other copy, but the output quality is rather poor. Despite how fast they can output characters, the probability of their producing something creative, like a successful play or novel meaningful to humans, is too low to be realistic. Another way of saying this is that the odds are vanishingly small.
Another problem would be verification. It could take a large number of additional computers to check on all the garbage put out by the original computers. Of course, many human-written works are not of a quality to be published for general consumption.
In making a declaration of something, it’s not wise to say never. The declarer is taking a huge risk. Human ingenuity is very impressive and has done things which were never imagined or thought possible, even up to the time they were accomplished and after. Some examples: “humans will never fly” and “the car will never replace the horse.”
In the future, humans may be able to create biological organisms which have superior specialized computing power and certain other characteristics. But this will not be exclusively through silicon or other chips.
Humans and other living organisms, as a group, are more adaptable and flexible than anything which is strictly a machine.
Humans and other living organisms are more than the material body. A computer won’t be.
A computer’s aura consists only of electric and magnetic fields; the human’s is so much more. Instead of crediting computers with “full” intelligence, maybe we should speak of “partial” intelligence or “poor copycat” intelligence.
What may be the dominant distinguishing factor between whatever “intelligence” computers might develop and humans could be called consciousness. But that’s a topic for another blog.