(A Sunday morning provocation)
Artificial Intelligence is a name coined in 1956. It has gained a lot of attention thanks to its resonance with humans, and it has fueled the development of an imaginative stream of Hollywood productions.
The name “artificial intelligence” carries an implicit bias that prevents a perception of these technologies that adheres to reality.
On the contrary, the name suggests that machines might develop some form of consciousness and emotions, acquire a “personality” similar to humans’, and ultimately overcome human limitations, developing a self superior to humans.
You’ve seen the movies, you know the narrative… But these systems are only devices that extract correlations from data and use those correlations to make predictions and do a load of very useful things. And, like calculators doing square roots, they can do it at a scale and speed far exceeding human performance. By throwing in some logic and randomness they can also exhibit some interesting and original behaviors. Yet machines have no clue what reality is. At best, they mimic a model of reality, so they are two steps away from reality.
After a conference on AI at the Pontifical Academy of Sciences in Rome, discussing with some friends (among them Aimee van Wynsberghe), we argued that the first and foremost AI bias is its name. It induces analogies that have limited adherence to reality, and it generates endless speculation (some of it causing excessive expectations and fears).
Because of this misconception, we proposed dropping the term “Artificial Intelligence” and adopting a more appropriate, scope-limited terminology that better describes what these technologies are: Systematic Approaches to Learning Algorithms and Machine Inferences.
Now that we have redefined the name, will we still support the idea that SALAMI will develop some form of consciousness?
Will SALAMI have emotions?
Can SALAMI acquire a “personality” similar to humans’?
Will SALAMI ultimately overcome human limitations and develop a self superior to humans?
Can you possibly fall in love with a SALAMI?
Can we suddenly perceive how ridiculous all these far-flung (unrealistic) predictions look?
18 thoughts on “Let’s forget the term AI. Let’s call them Systematic Approaches to Learning Algorithms and Machine Inferences (SALAMI).”
I had long been waiting for this article, and the wait has paid off.
I would add another question: can a LAWS system equipped with SALAMI be called a “Beretta”?
Beretta is a world-renowned producer of firearms, but there is also another Beretta, well known in Italy, that produces… salami.
So the joke here is: “if it is a salami-equipped weapon, can it be called a Beretta?”
A very controversial name for vegans and vegetarians, guys; expect fierce opposition…
It was about time someone brought clarity to this long-standing question, and you have done it with effectiveness, professionalism and, let me say, elegance (a bit of beauty never hurts, and it reminds us that we humans have sensibility and an aesthetic sense). Excellent, Stefano!
The only answer to the six final questions is “NO”, because a machine, even one defined as “intelligent”, is not able to say “yes” or “no”, “bad” or “good”, when facing the same event. A machine never has free thinking in front of the same fact, because it follows an algorithm. A human may change their mind following the flight of a mosquito.
Can SALAMI develop the capability to taste?
Let us stop being treated like SALAMI!
Salami have no dreams.
They just taste good or bad, to be eaten or thrown away.
Machine learning started when PCs were bundled with the DOS/WINDOWS operating system; then the “Computer No Problem” free booklets came about, telling us to buy “Plug & Play” components “Off The Shelf” in “One Stop Shops”.
Let us start learning how People Learning can be kicked off!
Jazz up/Animate threads such as https://mastodon.uno/web/statuses/103204201688395627
>Will SALAMI have emotions?
NEVER. Computers are universal, objective machines. Emotions, and also sensations, are absolutely individual and subjective. If I eat a mango, nobody else can have the same sensation of taste I am feeling; neither can anyone like that taste the way I like it.
>Can SALAMI acquire a “personality” similar to humans’?
NEVER, because humans have individuality and absolutely personal, subjective inner activities, such as thinking, feeling and willing. The first one can be objective, e.g. thinking about mathematics, but it can also be subjective.
>Will SALAMI ultimately overcome human limitations and develop a self superior to humans?
This depends on what limitations you are talking about. Computers have surpassed humans in arithmetic calculations since the first one. But they will never surpass humans in feelings and intuition. (BTW, intuitive thinking is anti-scientific).
>Can you possibly fall in love with a SALAMI?
Yes, if you are a degenerate person.
>Can we suddenly perceive how ridiculous all these far-flung (unrealistic) predictions look?
This depends on the way of thinking. If it is materialistic, physicalistic, then it is perfectly possible to be serious about these predictions. Technology is being badly used and destroying nature and humankind. Only a change in the way of thinking will revert this process. But for this, we have to add to the materialistic-scientific way of thinking other world conceptions — but I am not referring to dogmatic religions! Read my papers on my website and see what a different, coherent and rational way of thinking may be:
Intuitive thinking *IS* scientific… but humans usually don’t have the time to consider every variant as, e.g., a quantum computer will.
Predictions could be wrong, but many laws, objects and pieces of knowledge that we now take for granted were considered far-fetched at the time of their formulation…
> Intuitive thinking *IS* scientific…
Absolutely wrong. NOBODY can describe how s/he reached an intuitive thought. My preferred example is how a chess grandmaster discovers the right move just by looking at the chessboard. No such player has been able to describe how s/he reached the idea of the right move. That’s why it was NEVER possible to insert into a traditional chess program the strategy of a grandmaster. Deep Blue won the match against Kasparov in 1997 because its designers took out every strategy they had inserted into the previous machine, Deep Thought (quite a modest name, isn’t it?), which lost its match. BTW, read my old paper
where I call attention to the fact that Kasparov won one game and drew two. How was this possible, playing a mathematical game against a machine that tested 200 million moves per second, when Kasparov could at most test some 10 or 20 per minute? The answer is very simple: the machine was calculating, and Kasparov, using his intuition, was not!
What is intuitive thinking? It happens when one is not thinking about the possible solutions to a problem — that is rational, causative thinking. One way of training intuitive thinking is to think deeply about a problem, from all its sides, without rationally thinking about solutions, and just wait for a solution to “come” as an intuition — probably when one is not thinking about the problem!
Please be aware that we don’t know how the brain works. If our thoughts were generated by the brain, we would not be able to control them, that is, to concentrate on a certain chain of thoughts. It would be impossible even to do a sum with many digits by hand. I think it is of utmost importance that everyone recognizes that we are able to determine our next thoughts — which is a personal experience of free will; it cannot be formally proved. But free will cannot arise from matter, because the latter is inexorably subject to physical “laws”.
I insist: intuition is anti-scientific.
Have a good week! Sunday, 16.12.19
I don’t pretend to describe how the human brain works; I simply say that the result (e.g., a chess move) is one of the possible moves (there is no supernatural event) that a computer can foresee.
The faster the computer (and the more the data, and the more precise the rules of the game), the worse it will be for the human player.
This shows how awesome intuition is in the human brain, but Deep Blue is old crap compared to an iPhone…
So I agree that right now it is difficult to imagine a robot able to have opinions, personal tastes and so on, but computational power allows it to beat a modern Kasparov.
Do you think intuitive thinking could take an operational role in maintaining a relationship between computational and systems thinking?
The Web was born in a social environment, made available by institutional constituents and enabling scientific researchers to take advantage of computer technology, while adapting its evolution to their requirements, long before anybody could tell how.
In that environment computational and systems thinking could talk to each other and cooperate as peers. Has anybody ever told how that worked?
Could anybody tell, today, how a stakeholder citizenship might take a constituency role, towards the availability of a social environment, enabling individuals to be involved in adapting a computer network to the achievement of their own goals, beyond the fragmentation imposed by political and market use of stand-alone computational thinking?
I look forward to hearing from you, if what I have written above makes sense, or if it can be more properly reworded.
Greetings from Bologna and from grandpa Luigi
Greetings from the mountains of Campos do Jordão, Brazil (look it up on Google Maps).
Only today did I receive a notice that you had posted some comments. Excuse me for the delay.
> So I agree that right now it is difficult to imagine a robot able to have opinions, personal tastes and so on, but computational power allows it to beat a modern Kasparov.
Computers have been able to beat humans since their beginnings, e.g. in arithmetic calculations. And they will continue to beat us in more and more mental activities. But this does not mean that they will beat us in EVERY mental activity. They will never have intuition, as I previously explained. They will never have sensations and feelings (because those are strictly individual and subjective, and computers are universal and objective machines), etc.
In my opinion, your “right now” is wrong. Computers will NEVER have human opinions, personal tastes and so on.
> Do you think intuitive thinking could take an operational role in maintaining a relationship between computational and systems thinking?
There is NO “computational and systems **thinking**”, in the human sense. To start with, as you recognized, we don’t know how we physically think, so it is impossible to compare it with what a computer does, because we have full knowledge of how the computer works. Well, this is nowadays not the case when using “machine learning” systems (as usual, a completely wrong denomination, because we don’t know how humans learn) without examining what the calculated parameters mean; that is, nobody knows how the program works — a tremendous shift in the programming paradigm. The main problem here is delegating human decisions to machines — machines don’t make decisions, they make logical choices! There is a danger that humans will make fewer and fewer decisions.
BTW, we don’t even know how a simple representation of the concept ‘two’, such as ‘2’, is stored in the brain and how we are able to use it. My conjecture is that with the present scientific paradigm we will never know. E.g., if you consider all possible representations of ‘two’, such as 2, II, ii, :, due, dois, dos, deux, dva, two, zwei, shtaim (ending with my native and foreign languages…), and what all of them have in common, you get to the **pure concept** of 2, which obviously has NO physical representation. We work with this concept in mathematics, independently of its representation. So with our thinking we are able to reach non-physical entities, in the Platonic world of ideas. Computers will NEVER be able to reach pure concepts. Is that clear?
BTW-3: ideal geometric entities, such as a point, a line, or a circle, have no physical representations either. But we work with them in geometry. We also work with the concept of geometrical or mathematical infinity, and with transfinite numbers, but they have no physical representation, so it is impossible to represent them in a computer. It is also impossible to represent in a computer the continuum of real numbers.
Eugenio, if you say that machines will be able to do whatever humans can do, you are inducing the idea that we are machines. This is EXTREMELY dangerous. The Nazis treated people as animals, but one may have compassion towards animals, because they feel pain as we do. (Hitler had great social sensitivity, but had no compassion.) It’s a psychological aberration to have compassion for machines. But I am sure that this will come if we keep considering that humans are machines and — I am truly sorry to say this — if you keep saying that computers will do whatever we do.
BTW-2: we will NEVER know what code the brain uses, because another one of my conjectures is that this code simply does not exist.
Please have a look at some papers on my website, mainly on spirituality and AI. Comments and criticisms are most welcome (unfortunately, I haven’t had the time to translate many into English):
All the best for the new year, with good mental and physical health, and for maaaaany following years as well!
Grandpa Valdemar, with 8 grandchildren. The oldest (did I say that right in Italian?) is 23 years old.
Sorry Luigi, I confused you with Eugenio. Greetings to everyone as well! Val.
Hello, grandpa Valdemar.
I am sorry, but these comments – about Systematic Approaches to Learning Algorithms and Machine Inferences (SALAMI) – do not help turn our attention to ….
“how the relationship between social and technical systems needs to be managed”
in order to prevent us, the users of technical systems, from ending up eaten like salami.
It’s a hopeless interpersonal communication problem, I am afraid.
Greetings from Bologna – my birthplace – and from the Dolomiti Bellunesi – my still-unreachable-mountain-peak base camp.
Hello, grandpa Luigi,
How are you, is everything well? And your children and grandchildren?
I’m sorry, only today I realized you had commented on my words.
“I am sorry, but these comments – about Systematic Approaches to Learning Algorithms and Machine Inferences (SALAMI) – do not help turn our attention to ….
‘how the relationship between social and technical systems needs to be managed’
in order to prevent us, the users of technical systems, from ending up eaten like salami.”
Social systems are composed of individuals, so they are a consequence of the latter (apropos, that’s why they are unpredictable). It is absolutely necessary to understand the difference between humans and technical systems in order to be able to manage the latter. If we consider that we are spiritual beings, that is, that we transcend mere matter and energy, while technical systems are purely physical, then we have to examine the influence of technical systems on our non-physical members. Moreover, it is necessary to consider the limitations of technical systems compared to our non-physical capacities (e.g., intellectual, emotional, creative, etc.). Have a look at a recent essay of mine showing that we are not purely physical systems (and maybe you can learn some Portuguese — it is bilingual, and Portuguese is **very** similar to Italian):
All the best, stay well, Val.