Asociación Argentina de Teoría General de Sistemas y Cibernética, Libertad 742, 1640 Martínez, Argentina
(Reflections on a lecture by Brother J. M. Ramlot, O.P.)
Computers are to an increasing extent taking on problems previously reserved for humans. Nowadays many of them can deal not only with data but also with knowledge. The gap between artificial and natural intelligence is gradually closing, and we must therefore constantly reevaluate the differences between the two.
Artificial intelligence can compete successfully with natural intelligence 1) in solving equations and 2) in proving, and even in finding previously unknown proofs of, mathematical theorems. The algorithmic and sequential capabilities needed to perform these tasks can be found in both human brains and computers – but the computer does this kind of work much faster.
Nevertheless, the machine still needs an operator to feed it data, or at least to equip it with a collecting device. Moreover, it is generally unable to discern whether the data are correct. The operational algorithm, simple or complex, must likewise be introduced by a mind, and the machine cannot produce anything other than what is contained in its program. Artificial intelligence therefore remains largely dependent upon natural intelligence.
For some years now, however, we have seen the emergence of expert systems, which handle not only data but also knowledge. This knowledge is kept apart from the rules governing its handling, which constitute a second system level, a sort of “metaknowledge” in the form of algorithmic combinations.
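The separation described here can be made concrete with a minimal sketch: the knowledge (facts) is stored apart from the “metaknowledge” (rules for combining facts). The medical facts and rules below are invented purely for illustration.

```python
# Facts and rules are invented examples; the point is their separation.

facts = {"has_fever", "has_cough"}  # first level: raw knowledge

# second level ("metaknowledge"): rules of the form (premises, conclusion)
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    """Apply the rules to the facts until no new conclusions appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived
```

Because the rules live outside the data, new expertise can be added to either level without touching the inference procedure, which is precisely the separation the text describes.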
Expert systems have already produced spectacular results. A system of this type can incorporate the combined knowledge of many human experts, and its algorithmic operation sometimes leads to the discovery of things which all of the specialists – even the designers of the system – had failed to notice. Of course, these operations are carried out much more rapidly and exhaustively in artificial than in natural intelligence systems.
No expert system is, however, capable of being independently creative and of inventing something which is not implicit in its algorithm. The computer’s limits are obvious:
- it doesn’t “know” (isn’t conscious of) what it knows (what’s in its memory and its algorithm). (At least, we think it doesn’t know.)
- it can’t break out of its sequentiality. The parallel multisequentiality of some systems is only a first approximation of the truly simultaneous parallelism and self-interconnection of natural intelligence.
- it’s a prisoner of binary logic, which implies a particular type of reductionism.
However, artificial intelligence may cross a new threshold in the coming years, since we are now gaining a better understanding of certain properties of the brain. The following seem particularly interesting:
- It has a fantastic combinatory capacity.
- It begins practically virgin, which is why we remember almost nothing about the first years of life.
- Making use of varied means of learning, it must organize itself.
- It must acquire perceptual and conceptual selectivity, so that it need not register everything in its surroundings.
- It is capable of forgetting.
These characteristics lead us to some surprising conclusions when we take Ashby’s ideas about variety and constraints into consideration:
- The brain receives and processes many observations simultaneously.
- The brain needs to form algorithms in order to be able to function usefully in its super-complex and constantly changing environment.
- The “algorithmization” can be transferred from brain to brain, and is called instruction/education.
- Learning leads to the formation of mental algorithms by trial and error. An algorithm can arise from the repeated strengthening of correct answers; it therefore represents a set of organizing constraints.
- Once established, the algorithm replaces chance behavior with determined behavior. This determination is, however, never absolute, probably because of the great potential of the algorithms acquired in childhood and youth.
- The algorithmic functions acquired by the brain tend to partly block creative capacity. This blocking intensifies rapidly during adolescence and youth; with few exceptions, older people are no longer creative.
- The ability to forget seems to be indispensable for relieving the brain of useless data; the mechanism of forgetting is, however, by no means completely understood.
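The trial-and-error “algorithmization” described in these conclusions can be sketched as a toy model: correct responses are repeatedly strengthened until initially chance behavior becomes largely, though never absolutely, determined, while a mild decay stands in for forgetting. The stimuli, responses, and numeric parameters are all invented for illustration.

```python
import random

# Toy reinforcement model; all stimuli, responses, and rates are invented.
random.seed(0)
stimuli = ["A", "B"]
responses = ["left", "right"]
correct = {"A": "left", "B": "right"}  # the environment's hidden constraint

# strength of every stimulus-response association, initially uniform (pure chance)
strength = {(s, r): 1.0 for s in stimuli for r in responses}

def choose(stimulus):
    """Pick a response with probability proportional to association strength."""
    weights = [strength[(stimulus, r)] for r in responses]
    return random.choices(responses, weights=weights)[0]

for _ in range(500):
    s = random.choice(stimuli)
    if choose(s) == correct[s]:
        strength[(s, correct[s])] += 1.0  # reinforcement of the correct answer
    for key in strength:                  # mild forgetting: everything decays
        strength[key] *= 0.999
```

After training, the correct associations dominate, yet because choices remain probabilistic the determination is never absolute, matching the conclusion above.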
On the basis of these characteristics we can conceive of a type of artificial intelligence which could go much further than the most advanced machines currently available and come to much more closely resemble natural intelligence. The prerequisites for this type of system are:
- the ability to “learn” not only the contents of data but also the ways of handling them.
- the simultaneous functioning of many units.
- the formation of interconnections, at first by chance, between these units.
- the progressive dynamic stabilization of certain of these interconnections (ultrastability).
- the maintenance of a great deal of variety, partly by random utilization of the algorithms which are formed.
- an ability to selectively destroy certain portions of a memory.
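The prerequisites listed above can be sketched schematically: many units operating at once, connections formed at first by chance, progressive stabilization of recurring ones, and selective forgetting of the rest. Every name and parameter in this sketch is an assumption chosen for illustration, not a claim about how such a system would actually be built.

```python
import random

# Schematic sketch of chance interconnection plus stabilization; all
# unit counts, decay rates, and thresholds are invented parameters.
random.seed(1)
N_UNITS = 20
weights = {}  # connection strengths between unit pairs, created on demand

def co_activate(active):
    """Co-active units form or strengthen a connection (chance at first)."""
    for i in active:
        for j in active:
            if i < j:
                weights[(i, j)] = weights.get((i, j), 0.0) + 1.0

def forget(decay=0.9, threshold=0.5):
    """All connections decay; those below threshold are selectively erased."""
    for pair in list(weights):
        weights[pair] *= decay
        if weights[pair] < threshold:
            del weights[pair]

pattern = {0, 1, 2, 3}  # a recurring, "meaningful" coincidence of units
for _ in range(200):
    co_activate(pattern)                                 # recurs every step
    co_activate(set(random.sample(range(N_UNITS), 4)))   # background chance activity
    forget()
```

After many steps the recurring pattern’s connections have stabilized at high strength, while most chance connections have been erased: a crude analogue of dynamic stabilization together with selective forgetting.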
The essential differences between current artificial and natural intelligence systems lie in the following characteristics of the latter:
- Their multi-simultaneous mode of operation prevents them from becoming absolutely determined.
- The entire algorithm acquired by the brain is so large and complex that no human can possibly use all its potential contents.
- The “algorithmization” is never complete. A virgin reserve (a potential for variety) is always left over. Thus there is always a margin of unpredictability in the future behavior of a natural intelligence system.