By JOHN MARKOFF
SAN FRANCISCO, Feb. 3 -- In the world
of computer chips, Moore's Law is
becoming less of an axiom and more of
a drag race.
At the world's premier chip design
conference, which begins here today, the
spotlight will be on blinding computer speed.
That emphasis suggests that the rapid pace of
desktop PC performance increases over the
last two years will not slow in the near future,
but will actually accelerate.
Intel, the world's dominant manufacturer of microprocessors, will present a paper
detailing a portion of a microprocessor chip that has performed at up to 10 gigahertz at room
temperature -- the fastest calculating speed yet reported for a microprocessor, the chip that
controls the math, logic and data-transfer functions of a computer.
Such laboratory results generally precede commercial products by several years. But the
performance figures indicate that the chip industry has once again crossed a speed
threshold, one well beyond limits that chip designers had perceived only a few years ago.
Moore's Law is the observation made in 1965 by the Intel co-founder Gordon Moore that the
number of transistors on a chip -- and so, approximately, the chip's computing power -- would
continue to double roughly every 18 months. But while Moore's Law proved to be a remarkably
accurate engineering forecast for three and a half decades, it is now apparent that chip speeds
are doubling even more frequently than every 18 months.
The Intel announcement, at this year's International Solid-State
Circuits Conference, is the clearest example yet of a shift in
technology emphasis in the semiconductor industry -- away from
the storage capacity of memory chips and toward pure speed in
microprocessors, which are more complex but also much more
profitable than memory chips.
Until now, Moore's Law has been based on the assumption that
more processing power could be packed onto chips by continually
shrinking the dimensions of the tiny circuits that are etched onto silicon.
Last November, however, the industry's annual technical forecast of
progress in basic chip-making technologies -- known as the
International Technology Roadmap for Semiconductors -- revealed
that the industry is able to shrink one crucial dimension of modern
microprocessors far faster than other dimensions.
The dimension is called physical gate length, and it refers to the
infinitesimal space between two key components in a solid-state
transistor. Lately, scaling down the physical gate length has proved
to be the most powerful tool in the chip engineer's arsenal. "This is
probably the most valuable real estate on the planet," said Russell
Lange, I.B.M.'s chief technology officer, who noted that the chip
industry had spent billions of dollars focusing on shrinking that tiny
dimension. Today, in the most advanced chip designs, the space
has shrunk to just 90 nanometers -- equivalent to about 360 atoms
laid end to end.
The previous International Technology Roadmap, released two
years ago, forecast that the industry would be unable to shrink the
crucial gate length below 140 nanometers by now.
"What happened was a significant acceleration; I think we're
charting reality here," said Peter Zeitzoff, a research scientist at
International Sematech, a research consortium supported by the
world's major chip makers. Mr. Zeitzoff is one of the
authors of the technology road map.
The shrinking size of the physical gate was one of the key
developments that aided Intel engineers in Beaverton, Ore., in
creating their 10-gigahertz processor module. The component is
similar in function to a corresponding portion of the current Pentium
4 microprocessor used in the fastest of today's PC's. Over all, the
fastest Pentium 4 now reaches a clock speed of 2.2 gigahertz,
while internal portions of that chip achieve 4.4 gigahertz -- portions
corresponding to the experimental module that Intel engineers say has reached a speed of 10 gigahertz.
Intel also detailed several other design breakthroughs its engineers have used to push
processor speed -- including a technique called adaptive body bias that involves applying a tiny
current to each transistor to permit higher switching speeds at lower power levels.
I.B.M. will also present several papers at the conference, and the company said
that its technology focus has been on lowering the power consumption of its processors.
"Power and heat can add complexity," Mr. Lange said.
One of I.B.M.'s papers will focus on a 1-gigahertz PowerPC microprocessor that can almost
instantly raise or lower the amount of power it consumes, yielding significant power savings.
Other research advances to be presented at the conference include a single-chip, solid-state
gyroscope from Analog Devices and an all-electronic sensor array making it possible to do
DNA analysis without optical components.
The topic with perhaps the greatest near-term commercial relevance will be Intel's technical
description of its new McKinley microprocessor, the next generation of its Itanium family of
processor chips. In a partnership with Hewlett-Packard, Intel has invested heavily
in the development of the McKinley -- a chip that processes data in 64-bit chunks and is meant
to compete with 64-bit products including I.B.M.'s PowerPC, Sun Microsystems'
SPARC and Compaq Computer's Alpha. The McKinley is expected on the market later this year.
Although speed is important in processor design, for large corporate server, engineering and
scientific applications it is vital that a processor be able to work directly with vast arrays of data.
Many computing problems have now grown beyond the limits of today's desktop
microprocessors, which process data 32 bits at a time.
Success for the McKinley is crucial to Intel, because the first version of the Itanium
microprocessor received a lackluster reception last year. The McKinley will include 221 million
transistors, making it by far the largest microprocessor ever designed. Most of the transistors
will be dedicated to fast "cache" memory intended to speed operations.
Today most microprocessors keep the bulk of their cache on a separate chip, and software
programs pay a performance penalty in moving information between processor and memory.
The New York Times