[An article about quantum computers that I came across on WeChat; I later found the English original and am reposting it here.]
The Case Against Quantum Computing
The strategies proposed so far rely on manipulating, with high precision, an unimaginably huge number of variables
[English original: IEEE Spectrum -- https://spectrum.ieee.org/computing/hardware/the-case-against-quantum-computing?]
15 Nov 2018 | 16:00 GMT
Quantum computing is all the rage. It seems like hardly a day goes by without some news outlet describing the extraordinary things this technology promises. Most commentators forget, or just gloss over, the fact that people have been working on quantum computing for decades—and without any practical results to show for it.
We’ve been told that quantum computers could “provide breakthroughs in many disciplines, including materials and drug discovery, the optimization of complex manmade systems, and artificial intelligence.” We’ve been assured that quantum computers will “forever alter our economic, industrial, academic, and societal landscape.” We’ve even been told that “the encryption that protects the world’s most sensitive data may soon be broken” by quantum computers. It has gotten to the point where many researchers in various fields of physics feel obliged to justify whatever work they are doing by claiming that it has some relevance to quantum computing.
Meanwhile, government research agencies, academic departments (many of them funded by government agencies), and corporate laboratories are spending billions of dollars a year developing quantum computers. On Wall Street, Morgan Stanley and other financial giants expect quantum computing to mature soon and are keen to figure out how this technology can help them.
It’s become something of a self-perpetuating arms race, with many organizations seemingly staying in the race if only to avoid being left behind. Some of the world’s top technical talent, at places like Google, IBM, and Microsoft, are working hard, and with lavish resources in state-of-the-art laboratories, to realize their vision of a quantum-computing future.
In light of all this, it’s natural to wonder: When will useful quantum computers be constructed? The most optimistic experts estimate it will take 5 to 10 years. More cautious ones predict 20 to 30 years. (Similar predictions have been voiced, by the way, for the last 20 years.) I belong to a tiny minority that answers, “Not in the foreseeable future.” Having spent decades conducting research in quantum and condensed-matter physics, I’ve developed my very pessimistic view. It’s based on an understanding of the gargantuan technical challenges that would have to be overcome to ever make quantum computing work.
The idea of quantum computing first appeared nearly 40 years ago, in 1980, when the Russian-born mathematician Yuri Manin, who now works at the Max Planck Institute for Mathematics, in Bonn, first put forward the notion, albeit in a rather vague form. The concept really got on the map, though, the following year, when physicist Richard Feynman, at the California Institute of Technology, independently proposed it.
Realizing that computer simulations of quantum systems become impossible to carry out when the system under scrutiny gets too complicated, Feynman advanced the idea that the computer itself should operate in the quantum mode: “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy,” he opined. A few years later, Oxford physicist David Deutsch formally described a general-purpose quantum computer, a quantum analog of the universal Turing machine.
The subject did not attract much attention, though, until 1994, when mathematician Peter Shor (then at Bell Laboratories and now at MIT) proposed an algorithm for an ideal quantum computer that would allow very large numbers to be factored much faster than could be done on a conventional computer. This outstanding theoretical result triggered an explosion of interest in quantum computing. Many thousands of research papers, mostly theoretical, have since been published on the subject, and they continue to come out at an increasing rate.
The basic idea of quantum computing is to store and process information in a way that is very different from what is done in conventional computers, which are based on classical physics. Boiling down the many details, it’s fair to say that conventional computers operate by manipulating a large number of tiny transistors working essentially as on-off switches, which change state between cycles of the computer’s clock.
The state of the classical computer at the start of any given clock cycle can therefore be described by a long sequence of bits corresponding physically to the states of individual transistors. With N transistors, there are 2^N possible states for the computer to be in. Computation on such a machine fundamentally consists of switching some of its transistors between their “on” and “off” states, according to a prescribed program.
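To make the counting concrete, here is a minimal Python sketch (a toy enumeration, nothing specific to real hardware) listing every state of a machine with N = 3 two-state elements:

from itertools import product

N = 3  # a deliberately tiny example
states = list(product([0, 1], repeat=N))   # each tuple is one definite on/off configuration
print(len(states))   # 8, i.e. 2**3 possible states
print(states)        # (0, 0, 0), (0, 0, 1), ..., (1, 1, 1)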
In quantum computing, the classical two-state circuit element (the transistor) is replaced by a quantum element called a quantum bit, or qubit. Like the conventional bit, it also has two basic states. Although a variety of physical objects could reasonably serve as quantum bits, the simplest thing to use is the electron’s internal angular momentum, or spin, which has the peculiar quantum property of having only two possible projections on any coordinate axis: +1/2 or –1/2 (in units of the Planck constant). Whatever the chosen axis, you can denote the two basic quantum states of the electron’s spin as ↑ and ↓.
Here’s where things get weird. With the quantum bit, those two states aren’t the only ones possible. That’s because the spin state of an electron is described by a quantum-mechanical wave function. And that function involves two complex numbers, α and β (called quantum amplitudes), which, being complex numbers, have real parts and imaginary parts. Those complex numbers, α and β, each have a certain magnitude, and according to the rules of quantum mechanics, their squared magnitudes must add up to 1.
That’s because those two squared magnitudes correspond to the probabilities for the spin of the electron to be in the basic states ↑ and ↓ when you measure it. And because those are the only outcomes possible, the two associated probabilities must add up to 1. For example, if the probability of finding the electron in the ↑ state is 0.6 (60 percent), then the probability of finding it in the ↓ state must be 0.4 (40 percent)—nothing else would make sense.
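A minimal numerical check of that normalization rule, using the 60/40 example above (the particular amplitude values are just one valid choice, since amplitudes are only fixed up to a phase):

import math

# Probability 0.6 of measuring "up" and 0.4 of measuring "down".
alpha = math.sqrt(0.6)   # amplitude of the "up" basis state (one valid choice)
beta = math.sqrt(0.4)    # amplitude of the "down" basis state

print(abs(alpha) ** 2, abs(beta) ** 2)    # ~0.6 and ~0.4: the measurement probabilities
print(abs(alpha) ** 2 + abs(beta) ** 2)   # ~1.0, as the normalization rule requires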
In contrast to a classical bit, which can only be in one of its two basic states, a qubit can be in any of a continuum of possible states, as defined by the values of the quantum amplitudes α and β. This property is often described by the rather mystical and intimidating statement that a qubit can exist simultaneously in both of its ↑ and ↓ states.
Yes, quantum mechanics often defies intuition. But this concept shouldn’t be couched in such perplexing language. Instead, think of a vector positioned in the x-y plane and canted at 45 degrees to the x-axis. Somebody might say that this vector simultaneously points in both the x- and y-directions. That statement is true in some sense, but it’s not really a useful description. Describing a qubit as being simultaneously in both ↑ and ↓ states is, in my view, similarly unhelpful. And yet, it’s become almost de rigueur for journalists to describe it as such.
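To make the vector analogy concrete: the 45-degree unit vector corresponds to the equal superposition with amplitudes 1/√2 each, as a short sketch verifies numerically:

import math

alpha = beta = 1 / math.sqrt(2)   # equal superposition, the analog of a unit vector at 45 degrees

print(alpha ** 2 + beta ** 2)     # ~1.0: still a properly normalized state
print(alpha ** 2, beta ** 2)      # ~0.5 and ~0.5: "up" and "down" are equally likely when measured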
In a system with two qubits, there are 2^2 or 4 basic states, which can be written (↑↑), (↑↓), (↓↑), and (↓↓). Naturally enough, the two qubits can be described by a quantum-mechanical wave function that involves four complex numbers. In the general case of N qubits, the state of the system is described by 2^N complex numbers, which are restricted by the condition that their squared magnitudes must all add up to 1.
While a conventional computer with N bits at any given moment must be in one of its 2^N possible states, the state of a quantum computer with N qubits is described by the values of the 2^N quantum amplitudes, which are continuous parameters (ones that can take on any value, not just a 0 or a 1). This is the origin of the supposed power of the quantum computer, but it is also the reason for its great fragility and vulnerability.
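One way to appreciate what those 2^N continuous amplitudes mean in practice is to ask how much memory a classical machine would need merely to write them down. A rough sketch, assuming the conventional 16 bytes per complex amplitude (two double-precision floats):

BYTES_PER_AMPLITUDE = 16   # two 64-bit floats per complex amplitude (an assumption, but standard)

for n in (20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * BYTES_PER_AMPLITUDE / 2 ** 30
    print(f"N = {n} qubits: {amplitudes:,} amplitudes, about {gib:,.3f} GiB")
# 30 qubits already need about 16 GiB just to hold the state vector;
# 50 qubits need roughly 16 million GiB, which is why direct simulation breaks down.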
How is information processed in such a machine? That’s done by applying certain kinds of transformations—dubbed “quantum gates”—that change these parameters in a precise and controlled manner.
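In a classical simulation, “applying a quantum gate” is nothing more than multiplying the vector of amplitudes by a unitary matrix. A minimal single-qubit sketch with NumPy (the Hadamard gate is used here purely as a standard textbook example):

import numpy as np

state = np.array([1.0, 0.0], dtype=complex)   # one qubit, definitely "up": (alpha, beta) = (1, 0)

# The Hadamard gate, a standard single-qubit unitary, used here only as an example.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ state            # applying a gate = one matrix-vector multiplication
print(state)                 # [0.707..., 0.707...]: an equal superposition
print(np.abs(state) ** 2)    # [0.5, 0.5]: the resulting measurement probabilities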
Experts estimate that the number of qubits needed for a useful quantum computer, one that could compete with your laptop in solving certain kinds of interesting problems, is between 1,000 and 100,000. So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be at least 2^1,000, which is to say about 10^300. That’s a very big number indeed. How big? It is much, much greater than the number of subatomic particles in the observable universe.
To repeat: A useful quantum computer needs to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe.
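The arithmetic behind that figure is easy to check: written out in decimal, 2^1,000 has just over 300 digits, which is where the rough 10^300 comes from.

import math

n = 2 ** 1000                 # Python integers are arbitrary precision, so this is exact
print(len(str(n)))            # 302 decimal digits, i.e. on the order of 10**300
print(1000 * math.log10(2))   # about 301.03, the base-10 exponent of 2**1000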
At this point in a description of a possible future technology, a hardheaded engineer loses interest. But let’s continue. In any real-world computer, you have to consider the effects of errors. In a conventional computer, those arise when one or more transistors are switched off when they are supposed to be switched on, or vice versa. This unwanted occurrence can be dealt with using relatively simple error-correction methods, which make use of some level of redundancy built into the hardware.
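As a minimal illustration of the kind of redundancy meant here (a generic textbook scheme, not a description of any particular processor), consider the threefold repetition code with majority voting:

def encode(bit):
    # Store three redundant copies of the bit.
    return [bit, bit, bit]

def decode(copies):
    # Majority vote: any single flipped copy is corrected automatically.
    return 1 if sum(copies) >= 2 else 0

stored = encode(1)
stored[0] ^= 1           # simulate an error flipping one of the three copies
print(stored)            # [0, 1, 1]
print(decode(stored))    # 1: the original bit is recovered despite the error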
In contrast, it’s absolutely unimaginable how to keep errors under control for the 10^300 continuous parameters that must be processed by a useful quantum computer. Yet quantum-computing theorists have succeeded in convincing the general public that this is feasible. Indeed, they claim that something called the threshold theorem proves it can be done. They point out that once the error per qubit per quantum gate is below a certain value, indefinitely long quantum computation becomes possible, at a cost of substantially increasing the number of qubits needed. With those extra qubits, they argue, you can handle errors by forming logical qubits using multiple physical qubits.
How many physical qubits would be required for each logical qubit? No one really knows, but estimates typically range from about 1,000 to 100,000. So the upshot is that a useful quantum computer now needs a million or more qubits. And the number of continuous parameters defining the state of this hypothetical quantum-computing machine—which was already more than astronomical with 1,000 qubits—now becomes even more ludicrous.
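The “million or more” figure follows from simple multiplication, taking the low ends of the two ranges quoted above at face value:

logical_qubits = 1_000         # low end of the 1,000 to 100,000 estimate for a useful machine
physical_per_logical = 1_000   # low end of the 1,000 to 100,000 estimate per logical qubit

print(logical_qubits * physical_per_logical)   # 1000000: a million physical qubits, at the very low end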
Even without considering these impossibly large numbers, it’s sobering that no one has yet figured out how to combine many physical qubits into a smaller number of logical qubits that can compute something useful. And it’s not like this hasn’t long been a key goal.
In the early 2000s, at the request of the Advanced Research and Development Activity (a funding agency of the U.S. intelligence community that is now part of Intelligence Advanced Research Projects Activity), a team of distinguished experts in quantum information established a road map for quantum computing. It had a goal for 2012 that “requires on the order of 50 physical qubits” and “exercises multiple logical qubits through the full range of operations required for fault-tolerant [quantum computation] in order to perform a simple instance of a relevant quantum algorithm….” It’s now the end of 2018, and that ability has still not been demonstrated.
The huge amount of scholarly literature that’s been generated about quantum computing is notably light on experimental studies describing actual hardware. The relatively few experiments that have been reported were extremely difficult to conduct, though, and must command respect and admiration.
The goal of such proof-of-principle experiments is to show the possibility of carrying out basic quantum operations and to demonstrate some elements of the quantum algorithms that have been devised. The number of qubits used for them is below 10, usually from 3 to 5. Apparently, going from 5 qubits to 50 (the goal set by the ARDA Experts Panel for the year 2012) presents experimental difficulties that are hard to overcome. Most probably they are related to the simple fact that 2^5 = 32, while 2^50 = 1,125,899,906,842,624.
By contrast, the theory of quantum computing does not appear to meet any substantial difficulties in dealing with millions of qubits. In studies of error rates, for example, various noise models are being considered. It has been proved (under certain assumptions) that errors generated by “local” noise can be corrected by carefully designed and very ingenious methods, involving, among other tricks, massive parallelism, with many thousands of gates applied simultaneously to different pairs of qubits and many thousands of measurements done simultaneously, too.
A decade and a half ago, ARDA’s Experts Panel noted that “it has been established, under certain assumptions, that if a threshold precision per gate operation could be achieved, quantum error correction would allow a quantum computer to compute indefinitely.” Here, the key words are “under certain assumptions.” That panel of distinguished experts did not, however, address the question of whether these assumptions could ever be satisfied.
I argue that they can’t. In the physical world, continuous quantities (be they voltages or the parameters defining quantum-mechanical wave functions) can be neither measured nor manipulated exactly. That is, no continuously variable quantity can be made to have an exact value, including zero. To a mathematician, this might sound absurd, but this is the unquestionable reality of the world we live in, as any engineer knows.
Sure, discrete quantities, like the number of students in a classroom or the number of transistors in the “on” state, can be known exactly. Not so for quantities that vary continuously. And this fact accounts for the great difference between a conventional digital computer and the hypothetical quantum computer.
Indeed, all of the assumptions that theorists make about the preparation of qubits into a given state, the operation of the quantum gates, the reliability of the measurements, and so forth, cannot be fulfilled exactly. They can only be approached with some limited precision. So, the real question is: What precision is required? With what exactitude must, say, the square root of 2 (an irrational number that enters into many of the relevant quantum operations) be experimentally realized? Should it be approximated as 1.41 or as 1.41421356237? Or is even more precision needed? Amazingly, not only are there no clear answers to these crucial questions, but they were never even discussed!
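The point about finite precision can be made concrete even on an ordinary computer: every representation of √2 is an approximation, and the only question is how large an error one tolerates. A small sketch (standard double-precision and arbitrary-precision decimal arithmetic assumed):

import math
from decimal import Decimal, getcontext

sqrt2 = math.sqrt(2)   # the best a 64-bit float can do; its error is around 1e-16

for approx in (1.41, 1.41421356237):
    print(approx, abs(approx - sqrt2))   # errors of roughly 4e-3 and 3e-12, respectively

getcontext().prec = 50
print(Decimal(2).sqrt())   # 50 correct digits, yet still an approximation, never the exact value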
While various strategies for building quantum computers are now being explored, an approach that many people consider the most promising, initially undertaken by the Canadian company D-Wave Systems and now being pursued by IBM, Google, Microsoft, and others, is based on using quantum systems of interconnected Josephson junctions cooled to very low temperatures (down to about 10 millikelvins).
The ultimate goal is to create a universal quantum computer, one that can beat conventional computers in factoring large numbers using Shor’s algorithm, performing database searches by a similarly famous quantum-computing algorithm that Lov Grover developed at Bell Laboratories in 1996, and other specialized applications that are suitable for quantum computers.
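For orientation, the classical baseline that Grover’s algorithm is meant to beat is unstructured search: finding one marked item among M requires on the order of M queries classically, versus on the order of √M on an ideal quantum computer. The sketch below shows only the classical side; the square-root figure is the well-known quadratic speedup.

import math
import random

M = 1_000_000                 # size of the unstructured "database"
marked = random.randrange(M)  # the single item we are searching for

queries = 0
for candidate in range(M):    # classical search: examine items one by one
    queries += 1
    if candidate == marked:
        break

print(queries)                # about M/2 on average
print(round(math.sqrt(M)))    # 1000: the order of magnitude of queries Grover's algorithm would need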
On the hardware front, advanced research is under way, with a 49-qubit chip (Intel), a 50-qubit chip (IBM), and a 72-qubit chip (Google) having recently been fabricated and studied. The eventual outcome of this activity is not entirely clear, especially because these companies have not revealed the details of their work.
While I believe that such experimental research is beneficial and may lead to a better understanding of complicated quantum systems, I’m skeptical that these efforts will ever result in a practical quantum computer. Such a computer would have to be able to manipulate—on a microscopic level and with enormous precision—a physical system characterized by an unimaginably huge set of parameters, each of which can take on a continuous range of values. Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system?
My answer is simple. No, never.
I believe that, appearances to the contrary, the quantum computing fervor is nearing its end. That’s because a few decades is the maximum lifetime of any big bubble in technology or science. After a certain period, too many unfulfilled promises have been made, and anyone who has been following the topic starts to get annoyed by further announcements of impending breakthroughs. What’s more, by that time all the tenured faculty positions in the field are already occupied. The proponents have grown older and less zealous, while the younger generation seeks something completely new and more likely to succeed.
All these problems, as well as a few others I’ve not mentioned here, raise serious doubts about the future of quantum computing. There is a tremendous gap between the rudimentary but very hard experiments that have been carried out with a few qubits and the extremely developed quantum-computing theory, which relies on manipulating thousands to millions of qubits to calculate anything useful. That gap is not likely to be closed anytime soon.
To my mind, quantum computing researchers should still heed an admonition that IBM physicist Rolf Landauer made decades ago when the field heated up for the first time. He urged proponents of quantum computing to include in their publications a disclaimer along these lines: “This scheme, like all other schemes for quantum computation, relies on speculative technology, does not in its current form take into account all possible sources of noise, unreliability and manufacturing error, and probably will not work.”
Mikhail Dyakonov does research in theoretical physics at Charles Coulomb Laboratory at the University of Montpellier, in France. His name is attached to various physical phenomena, perhaps most famously Dyakonov surface waves.