Wednesday, December 7, 2011

A High-Stakes Search Continues for Silicon’s Successor

Lianne Milton for The New York Times

THE NEW CIRCUIT Max Shulaker and colleagues grow billions of carbon nanotubes on a quartz surface.


STANFORD, Calif. — In a cluttered chip-making laboratory on Stanford’s campus, Max Shulaker is producing the world’s smallest computer circuits by hand.
Mr. Shulaker, a graduate student in electrical engineering, is helping to pioneer an extraordinary custom manufacturing process: making prototypes of a new kind of semiconductor circuit that may one day be the basis for the world’s fastest supercomputers — not to mention the smallest and lowest-powered consumer gadgets.
If the new technology proves workable, it will avert a crisis that threatens to halt more than five decades of progress by chip makers, who now routinely etch circuits smaller than a wavelength of light to make ever more powerful computers.
Even light waves, it turns out, have their limits. In an industry renowned for inventions both radical and resourceful, designers are in urgent need of new ways to make circuits smaller, faster and cheaper.
This year Intel, the world’s largest chip maker, introduced a 3-D transistor that pushes a thin pillar out of the plane of the silicon surface, in an effort to accommodate billions of tiny switches on a single microprocessor.
That approach is controversial; the challenge is not just to squeeze in more switches but to get them to turn on and off quickly and cleanly, and many people in the industry believe there are less drastic ways to do that.
And whichever approach proves most effective, there is a growing consensus among engineers and industry executives that silicon’s days are numbered. On the horizon is an even smaller manufacturing world, nanoelectronics, that will be characterized by the ability to build circuits on a molecular scale.
So at universities and corporate laboratories around the world, researchers are trying to develop the next generation of chip-making technologies.
Mr. Shulaker is a member of the Robust Systems Group at Stanford, led by Subhasish Mitra, a former Intel engineer. The new switch he and other student researchers are making is called a carbon nanotube field effect transistor, or C.N.F.E.T.
To make prototype switches, Mr. Shulaker first chemically grows billions of carbon nanotubes — each only about 12 atoms wide — on a quartz surface. He coats them with an ultrafine layer of gold, and then uses a piece of tape — much like a lint remover — to pick them up by hand and transfer them gently to a silicon wafer.
The difference is that, for the first time, the circuits are not etched with light waves; instead, they are at least partly “self-assembling.” The ultrafine wires made from carbon nanotubes are laid down by a chemical process as the first step in making a computer circuit.
What results are nanocircuits that are far smaller and use far less power than today’s most advanced silicon-based computer circuits.
With the light-etching method, the smallest features on a semiconductor currently measure 32 nanometers. Both Intel and I.B.M. have high hopes for the nanotube approach by the time those features shrink to seven nanometers, which could happen as early as 2017.
“We’re exploring this very seriously,” said Supratik Guha, director of physical sciences at I.B.M.’s Thomas J. Watson Research Laboratory. “We feel that if we can place carbon nanotubes a couple of nanometers apart, they will outperform silicon.”
The idea of scaling down electronic circuits goes back at least to 1960, when a young electrical engineer named Douglas Engelbart spoke at a radio and electronics technical conference in Philadelphia. Dr. Engelbart had hit on the idea that shrinking the basic circuitry of the first digital computers could lead to a drastic increase in power.
“Boy, are there going to be some surprises over there,” he told his audience.
It turned out to be an understatement.
A half decade later, Gordon Moore, then a chemist at Fairchild Semiconductor, formalized the ability of a new technique called photolithography to scale down components, saying it could be done at regular intervals and predicting that it would be exponential — doubling the number of transistors that could be put on a microchip every year.
Moore’s Law, as it came to be called, was only a little optimistic: The doubling has taken place every 18 months or so for nearly five decades. Today several billion transistors can fit on a single chip, and the resulting era of microelectronics has transformed the world, touching virtually every aspect of human existence — from African subsistence farmers who can now get market prices via text message to supercomputers that can simulate nuclear explosions and predict climate change.
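The compounding described above is easy to underestimate. A back-of-envelope sketch makes it concrete; the starting count of roughly 2,300 transistors (the Intel 4004, 1971) and the doubling period are illustrative assumptions, not figures from this article:

```python
# Back-of-envelope illustration of Moore's Law style doubling.
# Assumed for illustration: ~2,300 transistors in 1971, with the
# count doubling every 18 months thereafter.
def transistors(year, base_year=1971, base_count=2300, doubling_months=18):
    """Projected transistor count under steady exponential doubling."""
    months = (year - base_year) * 12
    return base_count * 2 ** (months / doubling_months)

for year in (1971, 1991, 2011):
    # Each 20-year span adds more than 13 doublings at this rate.
    print(year, f"{transistors(year):.3g}")
```

Whatever the exact doubling period, the point is that any fixed period yields exponential growth, which is why a chip went from thousands of transistors to billions within four decades.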
But with each new generation of technology, the obstacles have grown more imposing, and the cost of surmounting them is going up, not down. For example, the Taiwan Semiconductor Manufacturing Company, one of the world’s largest chip makers, expects to spend almost $10 billion on its next factory.
Moreover, as standard CMOS computer circuits pack more and more transistors, they tend to leak electricity — and thus generate excess heat.
The warning signs began a decade ago, when Patrick P. Gelsinger, then Intel’s chief technology officer, predicted that if the trends continued, microprocessor chips would reach the temperature of the sun’s surface by 2011. To prevent that, the company executed what it called a “hard right turn,” gaining speed by adding parallel computing capabilities rather than increasing the chips’ clock speeds.
Even that approach has its limits, however. This year, researchers at the University of Washington and Microsoft warned of what they called “dark silicon.” With so many processors on a single chip, it is impractical to supply power to all of them at the same time. So some of the transistors are left unpowered — dark, in industry parlance.
The new limits are particularly daunting for supercomputer designers, who are looking to build an “exascale” system — 1,000 times the speed of today’s fastest computers — by 2019. Using today’s components, that would require 10 million to 100 million processors — compared with almost one million today — and would consume more than a billion watts.
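The power wall follows directly from the processor counts quoted above. A rough calculation shows why; the figure of about 100 watts per processor is an assumed, illustrative value for a conventional server chip, not one given in the article:

```python
# Rough power estimate for an exascale machine built from today's
# components, using the article's processor counts.
# The per-processor draw of ~100 W is an illustrative assumption.
WATTS_PER_PROCESSOR = 100  # assumed draw of a conventional server chip

for processors in (10_000_000, 100_000_000):
    total_watts = processors * WATTS_PER_PROCESSOR
    print(f"{processors:,} processors -> {total_watts / 1e9:.1f} GW")
```

At that assumed draw, even the low end of the range consumes on the order of a gigawatt, roughly the output of a large power plant, which is why designers regard conventional scaling as a dead end for exascale machines.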
At the annual supercomputing conference in Seattle last month, Jen-Hsun Huang, chief executive of Nvidia, a maker of graphics accelerator chips used in game machines and computers, noted that while supercomputing performance had improved one million times in the last two decades, the power needed to run a computer had increased just 40 times. That favorable scaling had been predicted by Robert H. Dennard, the I.B.M. electrical engineer who invented the dynamic random access memory, or DRAM, chip. But in the face of the growing problem of current leakage, the huge benefit, which had offered a constant eight-times performance increase per watt, has reached its limit.
“The impact of that small analysis is dramatic over time,” Mr. Huang said. “It’s fundamental to our industry — this is our gravity.”
Mr. Huang’s company, which began as a Silicon Valley maker of graphics cards for 3-D video games and has recently begun to offer software that optimizes its processors for scientific and engineering applications, mirrors broader trends in the computer industry.
Until the 1990s, state-of-the-art computing systems began with corporate and military applications. Since then, the technology has increasingly been driven from the bottom up: the vast economies of scale offered by consumer electronics have dictated that many of the fastest supercomputers are now built from components first designed for consumers.
At this year’s supercomputing conference, for example, researchers at the Barcelona Supercomputing Center in Spain announced that they were planning a system based on a new Nvidia chip that combines its graphics processors with the ARM microprocessor that is widely used in smartphones.
Combining graphics processors, microprocessors and other computer components onto single integrated chips is only a stopgap measure, however. Within a few years, experts say, the conventional CMOS transistors will no longer be able to scale down at a Moore’s Law pace.
Here at Stanford, Dr. Mitra says a system based on carbon nanotubes may greatly outperform Intel’s current 3-D transistor technology. Indeed, it might be possible to stack multiple layers of these carbon switches, creating genuine three-dimensional circuits.
But he acknowledges that carbon nanotube technology “still has its catches.”
Other technologies, too, may be contenders in the nanoelectronics sweepstakes.
Researchers at HP Labs have said they are close to commercializing a new semiconductor technology based on a circuit element called a “memristor,” which can substitute for transistors, initially in a memory chip that might offer an alternative to both Flash and DRAM memories.
The researchers previously reported in The Proceedings of the National Academy of Sciences that they had devised a new method for storing and retrieving information from a vast three-dimensional array of memristors. The scheme could potentially free designers to stack thousands of switches in a high-rise fashion, permitting a new class of ultradense computing devices even after two-dimensional scaling reaches fundamental limits.
In a recent lecture at Stanford, Stan Williams, a physicist who is leading the effort at HP, said the group was focusing on a new type of semiconducting material, titanium dioxide, which he said could rival silicon.
“Suffice it to say this is not in the deep dark future,” he said. “This is not 10 years out.”
Dr. Williams said the memristor could have significant power and size advantages over conventional transistors used in logic devices like microprocessors, which must be continually powered to preserve information. By contrast, the HP technology is nonvolatile — it is necessary only to apply power to change the state of the switch and to read its value.
Moreover, like the carbon nanotube design, it lends itself to three-dimensional structures.
“This is something that has been eluding the community for 30 years,” he said.