[An article about quantum computers that I came across on WeChat; I later found the English original and am reposting it here.]
Author: Mikhail Dyakonov does research in theoretical physics at the Charles Coulomb Laboratory at the University of Montpellier, in France. His name is attached to various physical phenomena, perhaps most famously Dyakonov surface waves.
The strategies proposed so far rely on manipulating, with high precision, an unimaginably large number of variables.
[English original: IEEE Spectrum -- https://spectrum.ieee.org/computing/hardware/the-case-against-quantum-computing?]
15 Nov 2018 | 16:00 GMT
Quantum computing is all the rage. It seems like hardly a day goes by without some news outlet describing the extraordinary things this technology promises. Most commentators forget, or just gloss over, the fact that people have been working on quantum computing for decades—and without any practical results to show for it.
We’ve been told that quantum computers could “provide breakthroughs in many disciplines, including materials and drug discovery, the optimization of complex manmade systems, and artificial intelligence.” We’ve been assured that quantum computers will “forever alter our economic, industrial, academic, and societal landscape.” We’ve even been told that “the encryption that protects the world’s most sensitive data may soon be broken” by quantum computers. It has gotten to the point where many researchers in various fields of physics feel obliged to justify whatever work they are doing by claiming that it has some relevance to quantum computing.
Meanwhile, government research agencies, academic departments (many of them funded by government agencies), and corporate laboratories are spending billions of dollars a year developing quantum computers. On Wall
Street, Morgan Stanley and other financial giants expect quantum computing to mature soon and are keen to figure out how this technology can help them.
It’s become something of a self-perpetuating arms race, with many organizations seemingly staying in the race if only to avoid being left behind. Some of the world’s top technical talent, at places like Google, IBM, and Microsoft, are working hard, and with lavish resources in state-of-the-art laboratories, to realize their vision of a quantum-computing future.
In light of all this, it’s natural to wonder: When will useful quantum computers be constructed? The most optimistic experts estimate it will take 5 to 10 years. More cautious ones predict 20 to 30 years. (Similar predictions have been voiced, by the way, for the last 20 years.) I belong to a tiny minority
that answers, “Not in the foreseeable future.” Having spent decades conducting research in quantum and condensed-matter physics, I’ve developed my very pessimistic view. It’s based on an understanding of the gargantuan technical challenges that would have to be overcome to ever make quantum computing work.
The idea of quantum computing first appeared nearly 40 years ago, in 1980, when the Russian-born mathematician Yuri Manin, who now works at the Max Planck Institute for Mathematics, in Bonn, first put forward the notion, albeit in a rather vague form. The concept really got on the map, though, the following year, when physicist Richard Feynman, at the California Institute of Technology, independently proposed it.
Realizing that computer simulations of quantum systems become impossible to carry out when the system under scrutiny gets too complicated, Feynman advanced the idea that the computer itself should operate in the quantum mode: “Nature isn’t classical, dammit, and if you want to make a simulation of
nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy,” he opined. A few years later, Oxford physicist David Deutsch formally described a general-purpose quantum computer, a quantum analog of the universal Turing machine.
The subject did not attract much attention, though, until 1994, when mathematician Peter Shor (then at Bell Laboratories and now at MIT) proposed an algorithm for an ideal quantum computer that would allow very large numbers to be factored much faster than could be done on a conventional computer. This outstanding theoretical result triggered an explosion of interest in quantum computing. Many thousands of research papers, mostly theoretical, have since been published on the subject, and they continue to come out at an increasing rate.
The basic idea of quantum computing is to store and process information in a way that is very different from what is done in conventional computers, which are based on classical physics. Boiling down the many details, it’s fair to say that conventional computers operate by manipulating a large number of tiny transistors working essentially as on-off switches, which change state between cycles of the computer’s clock.
The state of the classical computer at the start of any given clock cycle can therefore be described by a long sequence of bits corresponding physically to the states of individual transistors. With N transistors, there are 2^N possible states for the computer to be in. Computation on such a machine fundamentally consists of switching some of its transistors between their “on” and “off” states, according to a prescribed program.
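As a toy illustration of that bookkeeping (a sketch in Python, not a model of any particular processor), an N-bit register and a “program” that flips a few prescribed bits look like this:

# A toy classical register: the machine state is one of 2**N bit strings.
N = 8
state = [0] * N                      # all transistors "off"
print(2 ** N)                        # number of possible states: 256

# A "program" is a prescribed sequence of switchings; here, flip bits 0, 3 and 7.
for i in (0, 3, 7):
    state[i] ^= 1                    # toggle between "on" and "off"
print(state)                         # the final, exactly known, discrete state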
In quantum computing, the classical two-state circuit element (the transistor) is replaced by a quantum element called a quantum bit, or qubit. Like the conventional bit, it also has two basic states. Although a variety of physical objects could reasonably serve as quantum bits, the simplest thing to use is the electron’s internal angular momentum, or spin, which has the peculiar quantum property of having only two possible projections on any coordinate axis: +1/2 or –1/2 (in units of the Planck constant). For whatever the chosen axis, you can denote the two basic quantum states of the electron’s spin as ↑ and ↓.
Here’s where things get weird. With the quantum bit, those two states aren’t the only ones possible. That’s because the spin state of an electron is described by a quantum-mechanical wave function. And that function involves two complex numbers, α and β (called quantum amplitudes), which, being complex numbers, have real parts and imaginary parts. Those complex numbers, α and β, each have a certain magnitude, and according to the rules of quantum mechanics, their squared magnitudes must add up to 1.
That’s because those two squared magnitudes correspond to the probabilities for the spin of the electron to be in the basic states ↑ and ↓ when you measure it. And because those are the only outcomes possible, the two associated probabilities must add up to 1. For example, if the probability of finding the electron in the ↑ state is 0.6 (60 percent), then the probability of finding it in the ↓ state must be 0.4 (40 percent)—nothing else would make sense.
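The 60/40 example can be written out numerically; the sketch below assumes real, positive amplitudes purely for simplicity, since only the squared magnitudes enter the probabilities:

import math

# Quantum amplitudes for the basic states "up" and "down".
alpha = math.sqrt(0.6)      # |alpha|^2 = 0.6 -> probability of measuring "up"
beta = math.sqrt(0.4)       # |beta|^2  = 0.4 -> probability of measuring "down"

print(alpha**2, beta**2)            # 0.6 and 0.4 (up to rounding)
print(alpha**2 + beta**2)           # normalization: the probabilities add up to 1.0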
In contrast to a classical bit, which can only be in one of its two basic states, a qubit can be in any of a continuum of possible states, as defined by the values of the quantum amplitudes α and β. This property is often described by the rather mystical and intimidating statement that a qubit can exist simultaneously in both of its ↑ and ↓ states.
Yes, quantum mechanics often defies intuition. But this concept shouldn’t be couched in such perplexing language. Instead, think of a vector positioned in the x-y plane and canted at 45 degrees to the x-axis. Somebody might say that this vector simultaneously points in both the x- and y-directions. That statement is true in some sense, but it’s not really a useful description. Describing a qubit as being simultaneously in both ↑ and ↓ states is, in my view, similarly unhelpful. And yet, it’s become almost de rigueur for journalists to describe it as such.
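Written out in standard notation (the textbook form of the state that the 45-degree vector is standing in for):

$$
|\psi\rangle = \frac{1}{\sqrt{2}}\left(|\uparrow\rangle + |\downarrow\rangle\right),
\qquad |\alpha|^{2} = |\beta|^{2} = \tfrac{1}{2},
$$

just as the unit vector $(\hat{x} + \hat{y})/\sqrt{2}$ makes equal 45-degree angles with the x and y axes.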
In a system with two qubits, there are 2^2 or 4 basic states, which can be written (↑↑), (↑↓), (↓↑), and (↓↓). Naturally enough, the two qubits can be described by a quantum-mechanical wave function that involves four complex numbers. In the general case of N qubits, the state of the system is described by 2^N complex numbers, which are restricted by the condition that their squared magnitudes must all add up to 1.
While a conventional computer with N bits at any given moment must be in one of its 2^N possible states, the state of a quantum computer with N qubits is described by the values of the 2^N quantum amplitudes, which are continuous parameters (ones that can take on any value, not just a 0 or a 1). This is the origin of the supposed power of the quantum computer, but it is also the reason for its great fragility and vulnerability.
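A few lines of Python make the growth concrete; the sketch simply counts amplitudes and the memory a classical simulation would need to store them at double precision (16 bytes per complex number):

# Number of complex amplitudes needed to describe n qubits, and the memory a
# classical simulation would need just to hold them.
for n in (2, 10, 30, 50):
    amplitudes = 2 ** n
    print(n, amplitudes, f"{amplitudes * 16 / 2**30:.3g} GiB")
# Already at n = 50, storing the amplitudes alone takes roughly 16 million GiB.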
How is information processed in such a machine? That’s done by applying certain kinds of transformations—dubbed “quantum gates”—that change these parameters in a precise and controlled manner.
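As one concrete example of such a transformation (my choice of gate, used only to show the mechanics), here is the standard single-qubit Hadamard gate, a 2x2 unitary matrix, applied to the ↑ state:

import numpy as np

up = np.array([1.0, 0.0], dtype=complex)        # basis state "up": alpha = 1, beta = 0

# The Hadamard gate, a 2x2 unitary matrix.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ up
print(psi)                  # new amplitudes (0.7071..., 0.7071...): an equal superposition
print(np.abs(psi) ** 2)     # measurement probabilities [0.5, 0.5]

A real algorithm is a long sequence of such unitaries, each of which must set these continuous amplitudes to within whatever precision the computation ultimately requires.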
Experts estimate that the number of qubits needed for a useful quantum computer, one that could compete with your laptop in solving certain kinds of interesting problems, is between 1,000 and 100,000. So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be at least 2^1,000, which is to say about 10^300. That’s a very big number indeed. How big? It is much, much greater than the number of subatomic particles in the observable universe.
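The conversion between those two numbers is just a change of base (a quick check, added for reference):

$$
2^{1000} = 10^{\,1000 \log_{10} 2} \approx 10^{301},
$$

which is indeed “about 10^300,” and dwarfs the roughly 10^80 to 10^90 subatomic particles usually quoted for the observable universe.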
To repeat: A useful quantum computer needs to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe.
At this point in a description of a possible future technology, a hardheaded engineer loses interest. But let’s continue. In any real-world computer, you have to consider the effects of errors. In a conventional computer, those arise when one or more transistors are switched off when they are supposed to be switched on, or vice versa. This unwanted occurrence can be dealt with using relatively simple error-correction methods, which make use of some level of redundancy built into the hardware.
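The simplest version of that redundancy is a repetition code with majority voting; the following toy sketch is a generic illustration, not the scheme used in any particular machine:

# Triple modular redundancy: store each bit three times; a majority vote corrects
# any single flipped copy. This is the simplest classical error-correcting code.
def encode(bit):
    return [bit, bit, bit]

def decode(copies):
    return 1 if sum(copies) >= 2 else 0

word = encode(1)
word[0] ^= 1                # a fault flips one copy: [0, 1, 1]
print(decode(word))         # the majority vote still recovers 1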
In contrast, it’s absolutely unimaginable how to keep errors under control for the 10^300 continuous parameters that must be processed by a useful quantum computer. Yet quantum-computing theorists have succeeded in convincing the general public that this is feasible. Indeed, they claim that something
called the threshold theorem proves it can be done. They point out that once the error per qubit per quantum gate is below a certain value, indefinitely long quantum computation becomes possible, at a cost of substantially increasing the number of qubits needed. With those extra qubits, they argue, you can handle errors by forming logical qubits using multiple physical qubits.
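To make the overhead claim concrete, the threshold result is usually quoted in a form like the following (a textbook-style expression for a distance-d surface code, offered here only as an illustration; the constants depend on the code and the noise model):

$$
p_{\mathrm{logical}} \sim C\left(\frac{p}{p_{\mathrm{th}}}\right)^{\lfloor (d+1)/2 \rfloor},
\qquad
n_{\mathrm{physical\ per\ logical}} \sim O(d^{2}),
$$

so each additional factor of error suppression requires a larger code distance d, and the physical-qubit count per logical qubit grows along with it.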
How many physical qubits would be required for each logical qubit? No one really knows, but estimates typically range from about 1,000 to 100,000. So the upshot is that a useful quantum computer now needs a million or more qubits. And the number of continuous parameters defining the state of this hypothetical quantum-computing machine—which was already more than astronomical with 1,000 qubits—now becomes even more ludicrous.
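The arithmetic behind that remark is simple. Taking the low-end figure of 1,000 physical qubits per logical qubit:

$$
1{,}000 \times 1{,}000 = 10^{6} \ \text{physical qubits},
\qquad
2^{10^{6}} \approx 10^{\,301{,}030}.
$$

That is, the hypothetical million-qubit machine would be described by roughly 10^301,030 continuous amplitudes.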
Even without considering these impossibly large numbers, it’s sobering that no one has yet figured out how to combine many physical qubits into a smaller number of logical qubits that can compute something useful. And it’s not like this hasn’t long been a key goal.
In the early 2000s, at the request of the Advanced Research and Development Activity (a funding agency of the U.S. intelligence community that is now part of Intelligence Advanced Research Projects Activity), a team of distinguished experts in quantum information established a road map for quantum computing. It had a goal for 2012 that “requires on the order of 50 physical qubits” and “exercises multiple logical qubits through the full range of operations required for fault-tolerant [quantum computation] in order to perform a simple instance of a relevant quantum algorithm….” It’s now the end of 2018, and that ability has still not been demonstrated.
The huge amount of scholarly literature that’s been generated about quantum computing is notably light on experimental studies describing actual hardware. The relatively few experiments that have been reported were extremely difficult to conduct, though, and must command respect and admiration.
The goal of such proof-of-principle experiments is to show the possibility of carrying out basic quantum operations and to demonstrate some elements of the quantum algorithms that have been devised. The number of qubits used for them is below 10, usually from 3 to 5. Apparently, going from 5 qubits to 50 (the goal set by the ARDA Experts Panel for the year 2012) presents experimental difficulties that are hard to overcome. Most probably they are related to the simple fact that 2^5 = 32, while 2^50 = 1,125,899,906,842,624.
By contrast, the theory of quantum computing does not appear to meet any substantial difficulties in dealing with millions of qubits. In studies of error rates, for example, various noise models are being considered. It has been proved (under certain assumptions) that errors generated by “local” noise can be corrected by carefully designed and very ingenious methods, involving, among other tricks, massive parallelism, with many thousands of gates applied simultaneously to different pairs of qubits and many thousands of
measurements done simultaneously, too.
A decade and a half ago, ARDA’s Experts Panel noted that “it has been established, under certain assumptions, that if a threshold precision per gate operation could be achieved, quantum error correction would allow a quantum computer to compute indefinitely.” Here, the key words are “under certain assumptions.” That panel of distinguished experts did not, however, address the question of whether these assumptions could ever be satisfied.
I argue that they can’t. In the physical world, continuous quantities (be they voltages or the parameters defining quantum-mechanical wave functions) can be neither measured nor manipulated exactly. That is, no continuously variable quantity can be made to have an exact value, including zero. To a mathematician, this might sound absurd, but this is the unquestionable reality of the world we live in, as any engineer knows.
Sure, discrete quantities, like the number of students in a classroom or the number of transistors in the “on” state, can be known exactly. Not so for quantities that vary continuously. And this fact accounts for the great difference between a conventional digital computer and the hypothetical quantum computer.
Indeed, all of the assumptions that theorists make about the preparation of qubits into a given state, the operation of the quantum gates, the reliability of the measurements, and so forth, cannot be fulfilled exactly. They can only be approached with some limited precision. So, the real question is: What precision is required? With what exactitude must, say, the square root of 2 (an irrational number that enters into many of the relevant quantum operations) be experimentally realized? Should it be approximated as 1.41 or as 1.41421356237? Or is even more precision needed? Amazingly, not only are there no clear answers to these crucial questions, but they were never even discussed!
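To put a number on the question, here is a small sketch (my own illustration, with a hypothetical control error delta) of how an imprecision in realizing the amplitude 1/√2, or equivalently the 45-degree preparation angle, shows up directly in the measured probabilities:

import math

# Preparing the equal superposition means realizing the amplitude cos(pi/4) = 1/sqrt(2).
# If the control is off by a small angle delta, the "up" probability becomes
# cos^2(pi/4 + delta) = 0.5 - delta (to first order), so the error is not suppressed.
for delta in (1e-2, 1e-4, 1e-6):
    p_up = math.cos(math.pi / 4 + delta) ** 2
    print(delta, p_up, abs(p_up - 0.5))

# The same issue phrased as in the text: how many digits of sqrt(2) are "enough"?
for approx in (1.41, 1.41421356237):
    print(approx, abs(approx - math.sqrt(2)) / math.sqrt(2))   # relative error

In other words, a control error of delta shifts the outcome statistics by about delta; nothing in the continuous description rounds it away.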
While various strategies for building quantum computers are now being explored, an approach that many people consider the most promising, initially undertaken by the Canadian company D-Wave Systems and now being pursued by IBM, Google, Microsoft, and others, is based on using quantum systems of interconnected Josephson junctions cooled to very low temperatures (down to about 10 millikelvins).
The ultimate goal is to create a universal quantum computer, one that can beat conventional computers in factoring large numbers using Shor’s algorithm, performing database searches by a similarly famous quantum-computing algorithm that Lov Grover developed at Bell Laboratories in 1996, and other specialized applications that are suitable for quantum computers.
On the hardware front, advanced research is under way, with a 49-qubit chip (Intel), a 50-qubit chip (IBM), and a 72-qubit chip (Google) having recently been fabricated and studied. The eventual outcome of this activity is not entirely clear, especially because these companies have not revealed the details of their work.
While I believe that such experimental research is beneficial and may lead to a better understanding of complicated quantum systems, I’m skeptical that these efforts will ever result in a practical quantum computer. Such a computer would have to be able to manipulate—on a microscopic level and with enormous precision—a physical system characterized by an unimaginably huge set of parameters, each of which can take on a continuous range of values. Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system?
My answer is simple. No, never.
I believe that, appearances to the contrary, the quantum computing fervor is nearing its end. That’s because a few decades is the maximum lifetime of any big bubble in technology or science. After a certain period, too many unfulfilled promises have been made, and anyone who has been following the topic starts to get annoyed by further announcements of impending breakthroughs. What’s more, by that time all the tenured faculty positions in the field are already occupied. The proponents have grown older and less zealous, while the younger generation seeks something completely new and
more likely to succeed.
All these problems, as well as a few others I’ve not mentioned here, raise serious doubts about the future of quantum computing. There is a tremendous gap between the rudimentary but very hard experiments that have been carried out with a few qubits and the extremely developed quantum-computing theory, which relies on manipulating thousands to millions of qubits to calculate anything useful. That gap is not likely to be closed anytime soon.
To my mind, quantum computing researchers should still heed an admonition that IBM physicist Rolf Landauer made decades ago when the field heated up for the first time. He urged proponents of quantum computing to include in their publications a disclaimer along these lines: “This scheme, like all other schemes for quantum computation, relies on speculative technology, does not in its current form take into account all possible sources of noise, unreliability and manufacturing error, and probably will not work.”
Mikhail Dyakonov does research in theoretical physics at Charles Coulomb Laboratory at the University of Montpellier, in France. His name is attached to various physical phenomena, perhaps most famously Dyakonov surface waves.