(A Sunday morning provocation)
Artificial Intelligence is a name born in 1956. It has gained a lot of attention thanks to its resonance with humans, and it has favoured the development of an imaginative stream of Hollywood productions.
The name “artificial intelligence” carries an implicit bias that prevents a perception of these technologies that adheres to reality.
On the contrary, the name suggests that machines could develop some form of consciousness and emotions, acquire a “personality” similar to humans’ and, ultimately, overcome human limitations and develop a self superior to humans.
You’ve seen the movies, you know the narrative… But these are only devices that extract correlations from data and use those correlations to make predictions and do a load of very useful things. And just as calculators compute square roots, they can do it at a scale and speed far exceeding human performance. By throwing in some logic and randomness, they can also exhibit some interesting and original behaviours. Yet machines have no clue what reality is. At best, they mimic a model of reality, so they are two steps away from reality.
After a conference on AI at the Pontifical Academy of Sciences in Rome, discussing with some friends (among them Aimee van Wynsberghe), we argued that the first and foremost AI bias is its name. It induces analogies that have limited adherence to reality, and it generates endless speculation (some of it causing excessive expectations and fears).
Because of this misconception, we proposed dropping the term “Artificial Intelligence” in favour of a more appropriate, scope-limited terminology that better describes what these technologies are: Systematic Approaches to Learning Algorithms and Machine Inferences.
Now that we have redefined the name, will we still support the idea that SALAMI will develop some form of consciousness?
Will SALAMI have emotions?
Can SALAMI acquire a “personality” similar to humans’?
Will SALAMI ultimately overcome human limitations and develop a self superior to humans?
Can you possibly fall in love with a SALAMI?
Do we suddenly get a sense of how ridiculous all these far-flung (unrealistic) predictions look?
UPDATE: What would you call the degraded outputs you get when slicing a SALAMI? Clearly, you would call them a RANCID SLICE. As a name for output artifacts, I think it makes the point far better than “hallucination”…