Computing historians may look back on 2016 as the Year of Silicon Photonics. Not because the technology has become ubiquitous – that may yet be years away – but because the long-awaited silicon photonics offerings are now commercially available in networking hardware. While the advancements in networking provided by silicon photonics are indisputable, the real game changer is in the CPU.
For over half a century, Moore’s Law has been the name of the game. The transistor density on chips has been remarkably cooperative in doubling on schedule since Gordon Moore first made his observation in 1965. But Moore’s Law is still subject to the constraints of physics. At some point, halving the size of the transistor will mean splitting atoms. It’s likely, of course, that economic constraints will kick in long before anyone introduces a nuclear fission CPU. Indeed, Intel has already shown that economic conditions can act as a brake on Moore’s Law.
This means that chip makers will have to get creative about how they drive performance improvements. Silicon photonics offers a tantalizing approach, as outlined in a paper by Ke Wen and colleagues presented at the Post-Moore's Era Supercomputing Workshop, co-located with the SC16 supercomputing conference. With the limits of raw compute horsepower approaching, Wen suggests looking to other areas for performance gains.
The ability to get data in and out of the CPU is just as important as the ability to pass data between hosts. Memory bandwidth is subject to its own physical limits. Intel's Knights Landing chip requires a 3,647-pin socket, and pins can only be packed so densely. With silicon photonics, a single waveguide can provide the same bandwidth (100 gigabits per second) as an 8-channel High Bandwidth Memory cube.
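The pin-versus-waveguide trade-off is easy to see with back-of-envelope arithmetic. The sketch below is illustrative only: the 4×25 Gb/s wavelength-division breakdown and the per-pin signaling rate are assumptions for the sake of the comparison, not figures from the paper.

```python
# Back-of-envelope: bandwidth per physical connection, photonic vs. electrical.
# All rates below are illustrative assumptions, not vendor specifications.

# One waveguide carrying several wavelengths (WDM), e.g. 4 lanes at 25 Gb/s:
wavelengths = 4
gbps_per_wavelength = 25
waveguide_gbps = wavelengths * gbps_per_wavelength   # 100 Gb/s per waveguide

# Matching that electrically at an assumed 2 Gb/s per pin:
gbps_per_pin = 2
pins_needed = waveguide_gbps // gbps_per_pin

print(f"One waveguide: {waveguide_gbps} Gb/s")
print(f"Pins needed for the same bandwidth: {pins_needed}")
```

Under these assumptions, one optical connection does the work of dozens of pins, which is the crux of the density argument.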
The low power requirements and effectively unlimited transmission distance (at least compared with copper-based electronics) of silicon photonics could lead to a new paradigm in hardware design. At first pass, systems would no longer need non-uniform memory access (NUMA) to enable large memory configurations. That removes the need, for both the operating system and the programmer, to manage processor affinity in order to keep threads and their data logically nearby.
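To make concrete what would go away: today, keeping a thread close to its memory often means pinning it explicitly. A minimal sketch using Python's `os.sched_setaffinity` (a Linux-only API; the choice of CPU is illustrative):

```python
import os

# Manual processor affinity: the kind of NUMA-aware tuning that uniform,
# photonic memory access could make unnecessary.  The sched_*affinity
# calls are Linux-only, so we fall back gracefully elsewhere.
if hasattr(os, "sched_setaffinity"):
    allowed = os.sched_getaffinity(0)    # CPUs this process may run on
    target = {min(allowed)}              # pin to one CPU (illustrative choice)
    os.sched_setaffinity(0, target)      # 0 = the current process
    print("Pinned to CPU set:", sorted(os.sched_getaffinity(0)))
else:
    print("sched_setaffinity unavailable on this platform")
```

Tools like `numactl` do the same job from the command line; with uniform photonic memory access, neither layer of tuning would be needed.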
Fans of silicon photonics further suggest that the very nature of compute hardware may change. The current basic configuration of servers and blades is bound by the need to keep components close for maximum performance, as well as power and cooling. But if data can be passed at high bandwidth, high speed, and low power over great distances, why not reconsider how hardware is designed? If the components can be disaggregated, then systems can be built more modularly. This allows for greater flexibility and upgradeability.
Of course, we aren’t there yet. Silicon photonics only recently hit the networking space.
Vendors like Mellanox (March), Inphi (March), and Intel (June) have launched silicon photonics products this year. As we reported in August, Intel's silicon photonics offering was originally expected in early 2015. Indeed, Intel has been working toward this for years.
Getting to this point was not easy; as recently as 15 years ago, this was not considered a feasible solution. Researchers had to develop modulators that could operate in the 10 gigahertz range necessary for high-bandwidth communication. And there was another problem: silicon tends to produce heat, not light, when its electrons are excited. Intel researchers had to devise a way of applying an electric field across the waveguide to build up photons into a continuous laser beam. Several teams within Intel worked to make silicon-based lasers a usable reality.
Some hyperscale shops will undoubtedly leap on these offerings, and HPC environments that build out new clusters every year will consider silicon photonics. Intel touted Microsoft Azure as an early adopter of its silicon photonics networking products. The question is how quickly the rest of the industry will come along. Switching from copper to silicon photonics will require replacing networking gear in large swaths at once. Given that many datacenter managers are still content with gigabit or 10 gigabit at the top of the rack, it's hard to imagine a sudden, broad uptake. Indeed, it may turn out that even though the networking side of silicon photonics has a head start in the market, the coming CPU advancements end up with a quicker adoption curve.
While you can be sure that chip makers are actively investigating this as a future product offering, there are no silicon photonics CPUs on the market yet. Intel, for example, is only publicly discussing silicon photonics for networking, not for CPUs. A team of researchers led by Vladimir Stojanović at UC Berkeley only produced a prototype late last year. The manufacturing process will need to get cheaper and, if we are to reach the disaggregated system goal, manufacturers of other components will need to buy into the idea as well. 2016 may be the first Year of Silicon Photonics, but it's likely not the last.
Hi Ben, thank you for this nice summary of the state of affairs. In my work around silicon photonics, I think there are many factors that might create risk for early adopters of the technology. My thinking is that the solutions on the market today, which are at the server I/O, not at the chip level, will be superseded by new designs that have lower power, lower cost, and more performance. The product landscape will be bumpy for the next 10 years. A large part of my reasoning on this is that the level of R&D in silicon photonics is still increasing at an exponential pace. There are already many viable alternative designs for silicon photonics to support 4×25 Gb/s or higher.
The other thing to keep in mind, which I think you point towards, is that there is even more work to do to get silicon photonics integrated at the processor-memory and processor-I/O points. There are some rumors Intel has a solution, but I think it may be some time until products appear. If I had to bet I would say Intel would come out first, but we will have to wait and see.