Modern CPU scaling has largely flatlined, thanks to a combination of process scaling challenges, intrinsic limits to silicon performance, and a general market shift toward lower-power devices. Chip designers who want to continue to push the limits of what computers are capable of need to look to specialized architectures and custom designs. That was the message at the Linley Processor Forum according to Linley Gwennap, principal analyst at the Linley Group.
Dean Takahashi covered the event for VentureBeat and reports that Linley’s comments dovetail with the industry trends we’ve seen emerging over the last few years. Companies like Nvidia are marketing their chips as ideal for deep learning and AI calculations. Intel purchased the FPGA manufacturer Altera and has built its own specialty coprocessor hardware, dubbed the Xeon Phi. From IBM’s TrueNorth to Qualcomm’s heterogeneous compute capabilities in the Snapdragon 820, a number of companies are attacking performance problems in a wide variety of ways. Self-driving cars, drone deliveries, and even chatbot AI could transform computing as we know it today — but not without fundamentally changing how we think about computers.
Unintended consequences, industry trends, and power consumption
For most of the past 40 years, the story of personal computing was, well, personal. Desktops and laptops were positioned as devices that allowed the customer to rip music, explore the Internet, write, work, game, or stay in touch with family and friends.
In the beginning, the hardware that enabled many of these functions was mounted in separate expansion slots or installed in co-processor sockets on the motherboard. Over time, these capabilities were integrated into the motherboard or the CPU itself. Channel I/O and FPU coprocessors went away, as did most independent sound cards. Dedicated GPUs have hung on thanks to intrinsic thermal issues, but on-die graphics solutions from AMD and Intel have improved every single year.
There was a simple, virtuous cycle at work: The computer industry collectively delivered faster hardware, consumers bought it, and developers took advantage of it. The initial end to CPU clock scaling went nearly unnoticed, thanks to the widespread availability (and superiority) of dual- and quad-core chips compared with their single-core predecessors. This cycle has been repeated in smartphones and tablets, but we’ve already seen signs that the rate of improvement is slowing.
To demonstrate, here’s a graph of Apple iPhone single-core CPU performance in Geekbench 2 and Geekbench 3. We split the data into two sections to capture how the various iDevices compared over time, and to capture the shift from GB2 to GB3 when the latter became available. We’re using Apple because that company has spent more resources on improving its single-core performance than the various Android SoC developers, which tend to prioritize higher core counts.
From 2008 to 2012, single-core CPU benchmark scores in Geekbench 2 increased from 141 on the iPhone 3G to 1602 on the iPhone 5. That’s more than an 11x improvement in just four years.
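To put that improvement in annualized terms, the two scores above imply a compound growth rate that a few lines of Python can make explicit (the scores are the Geekbench 2 figures cited here; the calculation itself is just a geometric mean):

```python
# Annualized single-core improvement implied by the Geekbench 2 scores
# cited above (iPhone 3G, 2008: 141; iPhone 5, 2012: 1602).
gb2_2008 = 141
gb2_2012 = 1602
years = 4

speedup = gb2_2012 / gb2_2008      # total improvement factor over the period
annual = speedup ** (1 / years)    # compound annual growth factor

print(f"{speedup:.1f}x total, ~{(annual - 1) * 100:.0f}% per year")
# → 11.4x total, ~84% per year
```

In other words, Apple's single-core scores were compounding at roughly 80-85% per year over that stretch — a pace that, as the next sections show, did not last.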