ARM Pioneer Sophie Wilson Also Thinks Moore’s Law Is Coming to an End
April 13, 2017 Jeffrey Burt
Intel might have its own thoughts about the trajectory of Moore’s Law, but many leaders in the industry hold views that vary from the tick-tock cadence we keep hearing about.
Sophie Wilson, designer of the original Acorn Micro-Computer in the 1970s and later of the instruction set for ARM’s low-power processors that have come to dominate the mobile device world, is one of them. And when Wilson talks about processors and the processor industry, people listen.
Wilson’s message is essentially that Moore’s Law, which has been the driving force behind chip development in particular and the computer industry as a whole for five decades, has hit its limits, and that despite the best efforts by chip designers around the world, the staggering gains in processor performance over that time will not be seen again.
Wilson says that since Intel co-founder Gordon Moore introduced his prediction in 1965 that the number of transistors on a chip would double every year – a figure he later revised to every two years – the IT industry has been on a relentless march to fulfill that prophecy, with significant success. She said that in her lifetime, the performance of computers has increased by a factor of 10,000, due in large part to the continual shrinking of transistors on chips.
“You have a $30 billion industry trying to fulfill Moore’s Law, and they bend material science and really arcane bits of technology to make it happen,” Wilson said. “And they’ve been doing quite well.”
She pointed to her own work at Acorn Computers, which is where she first developed the instruction set for the RISC-based ARM processors before ARM spun out of Acorn to become its own company. The low-power architecture has become the dominant architecture in such mobile devices as smartphones and tablets, and has become a significant player in the embedded system space and such data center systems as networking gear. ARM and some of its partners, such as Qualcomm, Cavium and Applied Micro, also are pushing the architecture into servers in hopes of carving into Intel’s dominant market share.
In 1975 came the 6502, the 2MHz, eight-bit chip at the heart of the BBC Micro, which held 4,000 transistors and came in a 21-square-mm package. Ten years later came the 32-bit ARM 1, with 25,000 transistors on a 37mm die, and now more than 95 billion ARM-powered chips have been sold, she said. Broadcom’s ARM-based 330MHz Firepath processor, which Wilson helped design, first launched in 2003 with 6 million transistors and 64-bit capabilities.
The industry learned a decade ago that it couldn’t continue to increase chip performance by raising clock speeds, because the chips simply got too hot. Intel’s Pentium Pro neared the power density of a hot plate, Wilson said. Chip makers turned to other methods of improving performance, including adding multiple processing cores. That worked for a while, but even adding processing cores is reaching its limits.
“Can I keep doing this forever?” Wilson said. “Can I keep just being given more transistors and putting more Firepaths down? Oops. There is a law in the way, and this one actually is a law. It has got a graph and an equation and everything. How else would you know it wasn’t a law?”
Moore’s Law ran into Amdahl’s Law, which essentially says the speedup derived from multiple processors is limited by the sequential part of the software program. Programs will only benefit from multiprocessor systems up to a certain point, at which time performance plateaus. Even for highly parallel workloads like ray tracing, the performance increase levels off at about 20 times. “No matter how many processors I apply, ray tracing ain’t going to go any faster than 20 times faster,” Wilson said, adding that with tasks such as web browsing, performance will hit the ceiling after a two- to four-times bump. “This is kind of bad news.”
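Wilson’s 20-times ceiling falls straight out of Amdahl’s formula: speedup = 1 / ((1 − p) + p/n), where p is the parallelizable fraction of the work and n is the number of processors. A minimal sketch illustrates the plateau; note that the 95 percent parallel fraction below is inferred from the 20-times figure, not a number from Wilson’s talk:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's Law: speedup from n processors when a fraction p
    of the work is parallelizable; the serial remainder (1 - p)
    is the bottleneck."""
    return 1.0 / ((1.0 - p) + p / n)

# As n grows, speedup approaches 1 / (1 - p) no matter how many
# processors are added. At p = 0.95 the hard ceiling is 20x.
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1))
# 2    1.9
# 8    5.9
# 64   15.4
# 1024 19.6
```

Even 1,024 processors never reach the 20-times limit, which is why piling on cores eventually stops paying off.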
It’s bad news both for vendors and for consumers. Smartphone and computer makers are determined to sell devices that have increasing numbers of cores. The problem for consumers is that the performance of the devices and systems will plateau regardless of the number of cores that are stuffed into the chips.
“So you’ll hand over your money for this stuff and it ain’t going to do you any good,” she said.
There are multiple problems the industry is running into. Traditional scalar programming languages aren’t a good fit for parallel environments, which reduces the benefits from multiprocessor systems. At the same time, there is still the problem of heat and power efficiency when growing the number of transistors in a single chip. Given that, chip makers have turned to other methods to control heat and energy consumption, including turning some of the cores off during jobs. So now consumers are buying expensive systems with a lot of processing cores, but not all the cores will be on at the same time.
“You’re going to buy a 10-way, 18-way multi-core processor that’s the latest, all because we told you you could buy it and made it available, and we’re going to turn some of those processors off most of the time,” Wilson said. “So you’re going to pay for logic and we’re going to turn it off so you can’t use it.”
The use of 3D FinFET transistor architectures helps increase performance but also runs into problems with heat. In addition, along with the heat and energy issues, chip makers face problems of cost – it’s increasingly expensive to design and build processors as they get smaller, and the plans are to continue to make them smaller. Intel already is coming out with its 14nm processors, and plans are already underway for 10nm and 7nm. But each step is going to cost more, and those costs are going to drive up the price of systems. Very few systems are going to be worth the expense, she said.
“This is sort of the end of Moore’s Law,” Wilson said.
Intel officials have argued that Moore’s Law will continue to be viable into the foreseeable future, but Wilson and others say the end is in sight. Wilson said the tipping point is at the 28nm level. After that, more gates might be added, but utilization is reduced. At the same time, the cost per gate – which had steadily declined until 28nm – begins to increase as the chips get smaller, and while there will still be processors made on the 16/14nm and 22/20nm processes, their use will stay stagnant or decline between now and 2025, Wilson said. Meanwhile, the use of 28nm processors will increase.
“Twenty-eight lives on, so you better get used to making them in 28,” she said.
Looking ahead, Wilson said she expects that smartphone and other system makers will continue to leverage an array of chips – not only CPUs, but also GPUs and digital-signal processors (DSPs), as well as systems-on-a-chip (SoCs) – and that while performance may increase, so will prices. In addition, users of these systems will have to adjust their expectations, she said. The 10,000-times increase in chip performance that marked the past several decades will not be repeated.