“The chips are down for Moore’s Law,” read the headline in last month’s Nature. Finally, the semiconductor industry was going to admit “what has become increasingly obvious to everyone involved,” and the party would be over. After 50 years of riding an exponential wave, we’re maxing out on our ability to double the number of transistors on a chip.
From the Nature article: “The doubling has already started to falter, thanks to the heat that is unavoidably generated when more and more silicon circuitry is jammed into the same small area… Top-of-the-line microprocessors currently have circuit features that are around 14 nanometres across, smaller than most viruses. But by the early 2020s… ‘we’ll get to the 2–3-nanometre limit, where features are just 10 atoms across…’ [A]t that scale, electron behaviour will be governed by quantum uncertainties that will make transistors hopelessly unreliable.”
So that’s it. No more buying a new computer every other year, twice as good for the same price. No more going from a megabyte to a gigabyte to a terabyte in what seems like weeks. No more transformative shifts in technology at warp speed.
Except, of course, that’s not what’s happening, and that’s not it, not by a long shot. I mean, that may be it for Moore, but not for the rest of us. All we need is a simple shift in perspective.
Instead of looking specifically at transistors on a chip, as Moore did, we should be looking at how much computing power we can generate for how much money. Ray Kurzweil called it the “price-performance of computing,” and it refers to the number of instructions per second you can buy for $1,000.
Turns out this price-performance malarkey has been following a doubling curve since long before Gordon Moore was even a glint in his mother’s eye. Sure, it has taken different forms -- in the early 1900s it was electromechanical punch cards, in the 1950s vacuum tubes -- but every time we’ve hit the limits of a particular technology, a new one has come along, and the result has been the same: You buy a new computer every other year, twice as good for the same price.
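To see what that curve implies, here’s a minimal back-of-envelope sketch in Python (the baseline value and the 18-month doubling time are illustrative assumptions, not Kurzweil’s actual figures):

    # Toy model of the "price-performance of computing" curve:
    # instructions per second you can buy for $1,000, assuming a
    # fixed doubling time. Both defaults below are hypothetical.
    def price_performance(years, baseline=1.0, doubling_years=1.5):
        # Exponential growth: the value doubles once per doubling_years.
        return baseline * 2 ** (years / doubling_years)

    for years in (0, 3, 6, 9):
        print(f"{years} years out: {price_performance(years):.0f}x today's baseline")

Notice that the particular technology supplying the curve -- punch cards, vacuum tubes, transistors, whatever comes next -- never appears in the formula; only the doubling time does.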
“But Kaila,” I hear you protesting, “We’ve never reached this kind of limit before! We’ve never had to come up with solutions beyond the scale of the nanometer!”
No, we haven’t. But remember, we’re not talking about scale anymore.
We’re talking about what we can get and how much it will cost. Which likely means we’re talking about an entirely different computing paradigm.
What will that paradigm be? Nobody knows yet.
In November, a team of researchers from the University of Sheffield and the University of Leeds published a study showing that certain kinds of sound waves can move large amounts of data with very little power.
Last month, Motherboard reported on a startup called Koniku, “integrating lab-grown neurons onto computer chips in an effort to make them much more powerful than their standard silicon forebears.”
And earlier this week, nanotechnologists at Lund University in Sweden announced a way to use cellular organelles for parallel computing, which could “shrink a supercomputer to the size of a laptop.”
Koniku founder Oshiorenoya Agabi summed up the phenomenon perfectly. “The fact that we’re constantly getting increases in computational power, that law of computing holds for the last 2,000 to 5,000 years. Moore’s Law is a little patch, it’s only a little piece of that law… One of these two laws is going to have to give in, and I suspect that Moore’s Law is the one that’s going to have to give in. But our power to calculate faster and faster -- that law is here to stay.”