Quantum computing has a hype problem | MIT Technology Review


It's a safe bet that many insiders will rail against this article. After all, no one likes being told that we are decades away from seeing economically tangible results from their work. And the article's closing is a clever way to sidestep the criticism.

Source : MIT Technology Review


As a buzzword, quantum computing probably ranks only below AI in terms of hype. Large tech companies such as Alphabet, Amazon, and Microsoft now have substantial research and development efforts in quantum computing. A host of startups have sprung up as well, some boasting staggering valuations. IonQ, for example, was valued at $2 billion when it went public in October through a special-purpose acquisition company. Much of this commercial activity has happened with baffling speed over the past three years.

I am as pro-quantum-computing as one can be: I’ve published more than 100 technical papers on the subject, and many of my PhD students and postdoctoral fellows are now well-known quantum computing practitioners all over the world. But I’m disturbed by some of the quantum computing hype I see these days, particularly when it comes to claims about how it will be commercialized.

Established applications for quantum computers do exist. The best known is Peter Shor’s 1994 theoretical demonstration that a quantum computer can solve the hard problem of finding the prime factors of large numbers exponentially faster than all classical schemes. Prime factorization is at the heart of breaking the universally used RSA-based cryptography, so Shor’s factorization scheme immediately attracted the attention of national governments everywhere, leading to considerable quantum-computing research funding.
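The structure of Shor's scheme is easy to sketch classically. The algorithm reduces factoring N to finding the multiplicative order r of a random base a mod N; from an even r, a factor falls out via a gcd. Only the order-finding step needs a quantum computer — done by brute force below, it takes exponential time, which is exactly the bottleneck the quantum machine would remove. A minimal sketch (function names are my own, not from the article):

```python
from math import gcd

def order(a, n):
    # Brute-force search for the multiplicative order r of a mod n:
    # the smallest r > 0 with a**r % n == 1. This is the step a
    # quantum computer would speed up exponentially via period finding.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    # Classical post-processing of Shor's algorithm: given a base a,
    # try to recover a nontrivial factor of n.
    g = gcd(a, n)
    if g != 1:
        return g               # lucky guess already shares a factor
    r = order(a, n)
    if r % 2 == 1:
        return None            # odd order: retry with another a
    y = pow(a, r // 2, n)      # a**(r/2) mod n
    if y == n - 1:
        return None            # trivial square root: retry with another a
    return gcd(y - 1, n)

print(shor_factor(15, 7))  # → 3 (7 has order 4 mod 15, and gcd(7**2 - 1, 15) = 3)
```

For RSA-sized moduli (thousands of bits), the `order` loop above would run for longer than the age of the universe; that gap is the entire quantum value proposition.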

The only problem? Actually making a quantum computer that could do it. That depends on implementing an idea pioneered by Shor and others called quantum-error correction, a process to compensate for the fact that quantum states disappear quickly because of environmental noise (a phenomenon called “decoherence”). In 1994, scientists thought that such error correction would be easy because physics allows it. But in practice, it is extremely difficult.

The most advanced quantum computers today have dozens of decohering (or “noisy”) physical qubits. Building a quantum computer that could crack RSA codes out of such components would require many millions if not billions of qubits. Only tens of thousands of these would be used for computation—so-called logical qubits; the rest would be needed for error correction, compensating for decoherence.
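The physical-versus-logical overhead has a familiar classical analogue — a toy sketch only, since real quantum error correction must also protect phase information and cannot simply copy qubits. A three-bit repetition code spends three noisy physical bits to protect one logical bit by majority vote:

```python
import random

def encode(bit):
    # repetition code: one logical bit stored in three physical bits
    return [bit, bit, bit]

def flip_noise(bits, p):
    # each physical bit flips independently with probability p
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    # majority vote corrects any single bit flip
    return int(sum(bits) >= 2)

# A logical error now requires >= 2 flips: probability 3p²(1-p) + p³,
# which beats the raw rate p whenever p < 1/2.
random.seed(0)
p, trials = 0.05, 100_000
errors = sum(decode(flip_noise(encode(0), p)) for _ in range(trials))
print(errors / trials)  # roughly 3*p**2 ≈ 0.007, versus p = 0.05 unprotected
```

The suppression only works when the physical error rate is already below a threshold, and the closer to threshold the hardware sits, the more physical (qu)bits each logical one consumes — hence the millions-to-thousands ratio above.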

The qubit systems we have today are a tremendous scientific achievement, but they take us no closer to having a quantum computer that can solve a problem that anybody cares about. It is akin to trying to make today’s best smartphones using vacuum tubes from the early 1900s. You can put 100 tubes together and establish the principle that if you could somehow get 10 billion of them to work together in a coherent, seamless manner, you could achieve all kinds of miracles. What, however, is missing is the breakthrough of integrated circuits and CPUs leading to smartphones—it took 60 years of very difficult engineering to go from the invention of transistors to the smartphone with no new physics involved in the process.

There are in fact ideas, and I played some role in developing the theories for these ideas, for bypassing quantum error correction by using far-more-stable qubits, in an approach called topological quantum computing. Microsoft is working on this approach. But it turns out that developing topological quantum-computing hardware is also a huge challenge. It is unclear whether extensive quantum error correction or topological quantum computing (or something else, like a hybrid between the two) will be the eventual winner.

Physicists are smart as we all know (disclosure: I am a physicist), and some physicists are also very good at coming up with substantive-sounding acronyms that stick. The great difficulty in getting rid of decoherence has led to the impressive acronym NISQ, for "noisy intermediate-scale quantum" computer—the idea that small collections of noisy physical qubits could do something useful, and better than a classical computer can. I am not sure what this object is: How noisy? How many qubits? Why is this a computer? What worthy problems can such a NISQ machine solve?

A recent laboratory experiment at Google has observed some predicted aspects of quantum dynamics (dubbed “time crystals”) using 20 noisy superconducting qubits. The experiment was an impressive showcase of electronic control techniques, but it showed no computing advantage over conventional computers, which can readily simulate time crystals with a similar number of virtual qubits. It also did not reveal anything about the fundamental physics of time crystals. Other NISQ triumphs are recent experiments simulating random quantum circuits, again a highly specialized task of no commercial value whatsoever.
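The claim that conventional computers handle this regime is easy to make concrete. A full statevector of 20 qubits is only 2²⁰ complex amplitudes — about 16 MB — which a laptop manipulates effortlessly. A minimal sketch of a generic dense simulator (my own illustration, not the simulation method used for the Google experiment):

```python
import numpy as np

n = 20                          # qubits; 2**20 complex amplitudes ≈ 16 MB
state = np.zeros(2**n, dtype=np.complex128)
state[0] = 1.0                  # start in |00...0>

def apply_h(state, q, n):
    # Apply a Hadamard gate to qubit q of an n-qubit statevector by
    # contracting the 2x2 gate matrix with the q-th tensor axis.
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    s = state.reshape([2] * n)
    s = np.tensordot(h, s, axes=([1], [q]))
    s = np.moveaxis(s, 0, q)    # restore the original axis order
    return s.reshape(-1)

for q in range(n):
    state = apply_h(state, q, n)
# state is now the uniform superposition: every amplitude equals 2**(-n/2)
```

Each gate costs O(2ⁿ) work, so this brute-force approach dies around 45–50 qubits — which is why "quantum supremacy" demonstrations target exactly that scale, and why 20 noisy qubits demonstrate no computational advantage.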

Using NISQ is surely an excellent new fundamental research idea—it could help physics research in fundamental areas such as quantum dynamics. But despite a constant drumbeat of…

Continue reading here: Quantum computing has a hype problem | MIT Technology Review

