INSTITUTE FOR QUANTUM COMPUTING

Quantum Computing

(The following has been excerpted with permission from Patricia Bow’s article “Leap of Faith”, published in the University of Waterloo Magazine, Winter 2003 edition.)

Quantum computing is not another way of speeding up or miniaturizing the computers we’re familiar with. It’s something fundamentally different, based on the more exotic aspects of quantum mechanics—how nature behaves at the atomic and sub-atomic level. This new way of computing may, within the next decade or so, become powerful enough to break the cryptographic codes that protect financial and military secrets. It also holds out the promise of making sensitive information far more secure than ever before. Both those things are intensely interesting to generals, security chiefs, and bank presidents.

But a revolution in security is just the beginning. Quantum computing has the potential to revitalize a host of existing technologies and generate new ones, to open new windows on the nature and origin of the universe, and to change the way we think about information and about reality itself.

The quantum idea has been around since 1900, when Max Planck theorized that electromagnetic radiation, including light, is emitted in small bundles of waves, which he called “quanta”. (A quantum, also called a photon, is the smallest possible unit of radiation. There’s no such thing as half a quantum.) But the notion of quantum computing only began to circulate in the late ’70s and early ’80s, when scientists observed computer chips getting smaller and smaller, and asked themselves what would happen when circuits became as small as atoms.

Above that level, the classical laws of physics do a good job of explaining the world we can see. Below it, Newton is out of his depth. Things get strange, as even some physicists will admit. Quantum mechanics is commonly described as a counter-intuitive, headache-inducing, Alice-in-Wonderland realm of multiple universes, teleportation, atomic particles that are in two places at the same time, and cats that are simultaneously dead and alive. Niels Bohr, one of the founders of the field, is reported to have said, “Anyone who says they can contemplate quantum mechanics without becoming dizzy has not understood it.”

The strangeness is where the promise lies. Quantum computing exploits an effect called superposition, which means, essentially, that a particle can be in more than one state at once. The usefulness of this effect becomes clear when you think about how binary computers work. In a classical computer the bit, the smallest unit of data, is registered when a tiny switch closes, allowing electricity to flow, or opens, shutting off the current. Closed means 1, open means 0. Long strings of 1s and 0s make up the masses of information processed inside a CPU.

Inside a quantum computer, there is no tiny electrical switch. Instead, there are a lot of atoms that can be nudged into different states. For example, the nucleus of an atom inside an NMR (nuclear magnetic resonance) machine may spin like a top in one direction or another, and its axis of spin can point up (1), down (0), or any direction in between.

But here’s the key. A quantum bit (or qubit) can be 1 or 0, but it can also be both at the same time. It’s as if the particle exists in an infinity of parallel universes, having a different state in each one; and in fact, that’s one theory put forth to account for this un-Newtonian behaviour.

A three-bit classical computer can store three digits in one of these eight patterns: 000 001 010 011 111 110 101 100

A three-qubit quantum computer can hold the same eight patterns all at once, making it eight times more powerful. A seven-qubit computer like the one at the University of Waterloo, the largest so far, would have 128 possible combinations of 0s and 1s to work with simultaneously, giving it 128 times the capacity of a seven-bit classical computer. A 30-qubit quantum computer would be roughly three times as powerful as today’s fastest supercomputers, which can run at trillions of operations per second.
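
The counting is straightforward to check. A minimal Python sketch, assuming nothing beyond the doubling-per-qubit rule described above, lists the patterns an n-bit register can hold and shows how rapidly the count grows:

    from itertools import product

    def bit_patterns(n):
        # every pattern an n-bit register could be set to (2**n of them)
        return ["".join(bits) for bits in product("01", repeat=n)]

    print(bit_patterns(3))                     # the eight three-bit patterns listed above
    for n in (3, 7, 30):
        print(n, "qubits ->", 2 ** n, "patterns held at once")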

After the idea was first floated, although scientists continued to expand their understanding of how a quantum computer might work, it remained a plaything for theoreticians with no practical applications until 1994. In that year Peter Shor, a computer scientist at AT&T, proved mathematically that a quantum computer, if one existed, would be capable of finding the factors of a very large number – say, one with 400 digits – in a few days. This may sound like a long time for a computer to do anything, but it’s actually very little time compared to the billions of years it would take a conventional computer to factor the same number.
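
To get a feel for why factoring is so punishing for ordinary computers, consider the brute-force approach: trial division has to test on the order of the square root of N candidate divisors, so the work explodes with the number of digits. The rough Python sketch below is a classical illustration only; it is not Shor’s quantum algorithm.

    import math

    def trial_division(n):
        # classical brute force: try every candidate divisor up to sqrt(n)
        # (hopeless for a 400-digit number; this is NOT Shor's algorithm)
        for d in range(2, math.isqrt(n) + 1):
            if n % d == 0:
                return d
        return n  # n is prime

    print(trial_division(10007 * 10009))       # finds the factor 10007 almost instantly

    # approximate number of divisions needed for a d-digit number
    for digits in (10, 50, 400):
        print(digits, "digits: about", f"{math.isqrt(10 ** digits):.1e}", "trial divisions")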

Shor’s discovery set off mental explosions in universities and research centres around the world. And in organizations like the U.S. National Security Agency. Because the most sophisticated cryptographic systems, the secret codes that protect information as it travels through cyberspace, are based on the near-impossibility of factoring very large numbers using conventional computers. The cryptographic threat kicked quantum computing research into high gear. Suddenly there was money available. At Los Alamos, Raymond Laflamme began to receive funding from the National Security Agency, which has to guarantee that its codes are good for 50 years. “Now they had a dilemma,” Laflamme explains. “They wanted to know if there would be a quantum computer. If my answer had been ‘Yes’ right away, all that research would be classified, and I would probably not be here (at the University of Waterloo)! If the answer had been ‘No’ then I would have no funding. But the answer is really between the two.”

While one quantum effect, superposition, makes quantum computing possible, a couple of other effects will make it difficult to achieve – which is why quantum computers will not go into commercial production tomorrow. One problem is that quantum systems, being very small, are also very fragile. Noise – any random activity of sub-atomic particles – doesn’t do much harm inside a classical computer, but it affects data stored in qubits like a baseball ricocheting through a china shop.

A second problem is that, in very simple terms, the all-possible-states-existing-at-once effect continues only so long as nobody tries to observe or measure what’s going on. Any contact from the outside causes the possibilities to collapse into one actuality: either 1 or 0. This is called decoherence. In 1995, Shor and others proposed a way to correct quantum errors using another quantum effect called entanglement. Described by Einstein as “spooky action at a distance,” entanglement means that a particle’s neighbours retain information on the state of that particle – the direction of its spin, for example – and the neighbours, when scanned, can help you correct any errors caused by noise. But there was a problem. How could you correct errors without causing more errors?
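
The flavour of the trick can be seen in its classical ancestor, the three-bit repetition code, where parity checks between “neighbours” reveal which bit was flipped without ever reading the data itself. The minimal Python sketch below shows only that classical shadow of the idea; doing the analogous thing with fragile, entangled qubits, without introducing new errors, was exactly the puzzle that remained.

    # classical analogue only: the quantum bit-flip code follows the same
    # pattern, but with entangled qubits instead of copied bits
    def encode(bit):
        return [bit, bit, bit]          # one logical bit spread over three "neighbours"

    def syndrome(code):
        # compare neighbours pairwise; this reveals where a flip happened,
        # but never reveals the encoded value itself
        return (code[0] ^ code[1], code[1] ^ code[2])

    def correct(code):
        flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(code))
        if flip is not None:
            code[flip] ^= 1             # repair the flipped bit
        return code

    word = encode(1)
    word[2] ^= 1                        # noise flips one bit
    print(syndrome(word), correct(word))   # (0, 1) then [1, 1, 1]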

That’s when Raymond Laflamme came on the scene. “As soon as I heard about quantum computing,” he recalls, “I wanted to show that it would never work.” In the course of trying to demolish the argument, he changed his own mind – “And that led me to error correction.”

It was Laflamme, with his colleagues at Los Alamos, who demonstrated in 1997 that it’s possible to carry out long computations on a quantum computer, and to successfully correct errors, provided noise is kept below a certain threshold. Their Accuracy Threshold Theorem convinced other scientists that quantum computers were not only possible, but buildable, in theory. Soon after, they put theory into practice by devising the first three-qubit computer ever made, using atoms of carbon-13. This demonstration was named by the journal Science in 1998 as one of the year’s top 10 breakthroughs. In 2000, Laflamme and his colleagues at Los Alamos and MIT went on to build the first seven-qubit device. “People still think it’s a long shot, but at least this really big stumbling block has been removed,” Laflamme says.

The seven-qubit computer created by Laflamme and his partners in 2000 is still the largest, although it’s not alone. (IBM has one too.) What can it do? It can carry out some very simple programming instructions, and not much else. “What the prototypes do best is show that the idea of quantum computing is not completely crazy,” Laflamme says. “You might think of it as ‘proof of principle’. They demonstrate that the science at the bottom of this is sound.”

Wanted: Dead and Alive

Erwin Schrödinger, one of the giants of quantum physics, published his famous cat-in-the-box thought experiment in 1935 to satirize a key idea of a group led by Niels Bohr (another giant). Because it was a thought experiment, no actual cats died. Imagine a steel chamber containing a cat, a small mass of radioactive material, a radiation detector connected to a hammer, and a flask of poison gas. There is a fifty-fifty chance that an isotope in the radioactive mass will decay in the next hour, which will register on the detector, which releases the hammer to smash the flask, which kills the cat. At the end of the hour, the cat must be either dead or alive, right? Wrong. According to quantum theory, the cat will be both dead and alive until an outside observer opens the box and looks in. Which is absurd, on the face of it. But it’s also true. The idea that the isotope, at least, exists in more than one state at once has been tested over and over, and proven. One of its practical applications is quantum computing. All the same, Schrödinger’s puzzle still raises questions about what constitutes an observer, what role consciousness plays in measuring reality, and where the real boundary between quantum effects and the classical world lies. Physicists have spent the last 65 years wrestling with these questions.

Moore’s Law – Hitting the Wall

More than 25 years ago, when Intel was developing the first microprocessor, Dr. Gordon E. Moore, company cofounder, predicted that the number of transistors on a microprocessor would double approximately every 18 months, and consequently that the cost per function of integrated circuits would fall by half in about the same time.
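
The arithmetic of the rule is simple. A minimal Python sketch, taking the roughly 2,300 transistors of Intel’s first microprocessor in 1971 as an assumed illustrative starting point, projects what doubling every 18 months would imply:

    def transistors(year, start_year=1971, start_count=2300, months_per_doubling=18):
        # start_year and start_count are assumed illustrative figures,
        # not taken from the article
        doublings = (year - start_year) * 12 / months_per_doubling
        return start_count * 2 ** doublings

    for year in (1971, 1985, 2000, 2012):
        print(year, f"~{transistors(year):,.0f} transistors per chip")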

To date, Moore’s law has proven remarkably accurate. In fact, the Semiconductor Industry Association (SIA) 1997 edition of the National Technology Roadmap for Semiconductors projects that this rate of improvement will extend to the year 2012. However, some believe that beyond 2006 the roadmap is filled with uncertainty, due to what appear to be insurmountable physical device limitations.

In his article “Pushing the Limits”, published in the September 24, 1999 edition of Science, Paul Packan of Intel states that Moore’s Law “now seems to be in serious danger.” He also describes some of the technical difficulties expected within the next decade as the industry attempts to maintain this rate of improvement. These technical issues will, according to Dr. Packan, produce “the most difficult challenge the semiconductor industry has ever faced.”

On current integrated circuits the insulating barrier between a transistor gate and its channel between source and drain is just a few atoms thick. One of the physical device limitations described by Dr. Packan is that these insulating layers, as further miniaturization is pursued, will become so thin that quantum mechanical “tunneling” effects will arise. These quantum effects will create leakage current through the gate when the switch is “off” that is a significant fraction of the channel current when the device is “on”. This could reduce the reliability of the transistors, resulting in increased cost and decreased availability of more powerful chips. In turn, this will affect every device that uses computer chips, from cell phones and pagers to personal and business computers.
