Why Computers Won’t Make Themselves Smarter | The New Yorker

Saying that the singularity will come is an act of faith.
Its most ardent proponents typically say, “You can’t rule out that it will happen, thanks to something, as yet unknown, that we will invent.”
To which I reply: “And you can’t rule out that we live in one universe among the infinite parallel universes of a multiverse, and that, at this very moment, in another universe, Turing and Gödel are having coffee and Einstein is serving it to them.”
You can’t rule that out either.
When I see Turing and Gödel having coffee, I will also believe that the singularity is attainable.
Both are acts of faith.

The article is a delightful read.

Source: The New Yorker

Why Computers Won’t Make Themselves Smarter
We fear and yearn for “the singularity.” But it will probably never come.

In the eleventh century, St. Anselm of Canterbury proposed an argument for the existence of God that went roughly like this: God is, by definition, the greatest being that we can imagine; a God that doesn’t exist is clearly not as great as a God that does exist; ergo, God must exist. This is known as the ontological argument, and there are enough people who find it convincing that it’s still being discussed, nearly a thousand years later. Some critics of the ontological argument contend that it essentially defines a being into existence, and that that is not how definitions work.

God isn’t the only being that people have tried to argue into existence. “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever,” the mathematician Irving John Good wrote, in 1965:

Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

The idea of an intelligence explosion was revived in 1993, by the author and computer scientist Vernor Vinge, who called it “the singularity,” and the idea has since achieved some popularity among technologists and philosophers. Books such as Nick Bostrom’s “Superintelligence: Paths, Dangers, Strategies,” Max Tegmark’s “Life 3.0: Being Human in the Age of Artificial Intelligence,” and Stuart Russell’s “Human Compatible: Artificial Intelligence and the Problem of Control” all describe scenarios of “recursive self-improvement,” in which an artificial-intelligence program designs an improved version of itself repeatedly.
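
Good’s argument turns on the phrase “could design even better machines”: it asserts that improvement compounds, but says nothing about the rate. As a toy illustration of why that matters (my own hypothetical sketch, not anything from the article), here is a minimal Python model of the self-improvement recurrence; the improvement functions, the growth factor, and the ceiling C are all assumptions chosen purely for demonstration.

```python
# Toy model of the "recursive self-improvement" recurrence (hypothetical,
# for illustration only; nothing here comes from the article).

def run(f, intelligence=1.0, generations=30):
    """Iterate intelligence_{n+1} = f(intelligence_n); return the final value."""
    for _ in range(generations):
        intelligence = f(intelligence)
    return intelligence

# Assumption 1: every generation improves on its predecessor by a constant
# factor. The sequence grows without bound: an "intelligence explosion."
explosive = run(lambda x: 1.1 * x)

# Assumption 2: improvements show diminishing returns, each generation closing
# only a fraction of the gap to some ceiling C. The sequence converges to C,
# and there is no explosion.
C = 10.0
diminishing = run(lambda x: x + 0.1 * (C - x))

print(f"constant-factor improvement, 30 generations: {explosive:.2f}")   # ~17.45
print(f"diminishing returns, 30 generations:         {diminishing:.2f}") # ~9.62
```

Both runs satisfy Good’s definition in the sense that each machine designs a “better” successor; only the hidden assumption about the shape of the improvement curve decides whether the result is an explosion or a plateau.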

I believe that Good’s and Anselm’s arguments have something in common, which is that, in both cases, a lot of the work is being done by the initial definitions. These definitions seem superficially reasonable, which is why they are generally accepted at face value, but they deserve closer examination. I think that the more we scrutinize the implicit assumptions of Good’s argument, the less plausible the idea of an intelligence explosion becomes.

Continue reading here: Why Computers Won’t Make Themselves Smarter | The New Yorker.
