Intel Leverages Chip Might To Etch Photonics Future

Computing has gone through a few waves. The first few decades were dominated by human to machine computing, in recent years the focus has shifted to human to human computing through hyperscale consumer-facing applications, and we are on the cusp of a third wave of machine to machine computing that will swell compute, storage, and network traffic into untold zettabytes.

Under such data strain, there is an explosive need for bandwidth across datacenters as a whole, but particularly among hyperscalers with their hundreds of millions to billions of users. (Ironically, some datacenters are only now moving to 10 Gb/sec Ethernet switching as the hyperscalers dream of 1 Tb/sec or higher speeds.)

If you look inside of the datacenter today, approximately 75 percent of the traffic is east-west – meaning between machines inside of the datacenter itself – and only 25 percent of it is north-south – meaning traffic moving up and down between tiers of machines and out to users and back over a private network or the public Internet. A decade ago, before the rise of hyperscalers and clouds, about 90 percent of network traffic was north-south, into and out of the wide area network. Network virtualization, which allows pools of compute and storage to be created and dynamically provisioned for workloads running on clusters, is what is driving all of this east-west traffic, whether it is for bare metal, virtualized, or containerized workloads.

“All of this drives a need for any-to-any connectivity in the network,” Alexis Bjorlin, vice president of the Connectivity Group within Intel’s Data Center Group, tells The Next Platform. And therefore the networks above the racks of servers and storage have Clos or fat tree architectures, such as those used by search engine giant Google or social media juggernaut Facebook, just to name two. “To get to a high performance network, the trend is to move away from high end routers and network appliances to pizza box devices and scale-out fabrics. What you see is fabric interconnects where companies are driving a huge amount of bandwidth, and the number of hops and switches you have to go through as you traverse the network goes up, which drives up latency. Latency is an important performance metric, so the networks are being flattened, driving high radix switching and high bandwidth connectivity.”
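To put some rough numbers on that flattening – this is our own back-of-the-envelope sketch using the standard non-oversubscribed leaf/spine and k-ary fat tree formulas, not figures from Intel – higher radix switches let a fabric reach more servers in fewer tiers, and fewer tiers means fewer hops:

```python
# Back-of-the-envelope sketch (illustrative, not from Intel): how switch
# radix determines how many servers a flat fabric can reach, and in how
# many switch hops, assuming no oversubscription.
def leaf_spine_servers(k):
    # Two-tier leaf/spine: each leaf splits its k ports half down to
    # servers, half up to spines; up to k leaves hang off each spine.
    return (k // 2) * k          # worst case 3 switch hops

def fat_tree_servers(k):
    # Three-tier k-ary fat tree supports k^3/4 servers.
    return k ** 3 // 4           # worst case 5 switch hops

for k in (32, 64, 128):
    print(f"radix {k:3}: two-tier {leaf_spine_servers(k):>7,} servers, "
          f"three-tier {fat_tree_servers(k):>9,} servers")
```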

Within any one of these hyperscale datacenters, approximately 15 percent to 20 percent of the spending is on the network – the rest is servers, storage, and facilities. And within that network spending, a growing portion of the money is going to optical interconnects, which are necessary for switch to switch links over long hauls in the datacenter; optics already account for about 45 percent of the network spend and are still growing, according to Bjorlin.
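To make that split concrete – using a hypothetical $1 billion facility budget, purely for illustration – the arithmetic works out like this:

```python
# Illustrative arithmetic only; the budget figure is hypothetical.
capex = 1_000_000_000            # assumed total datacenter spend
network = capex * 0.175          # midpoint of the 15 to 20 percent range
optics  = network * 0.45         # Bjorlin's roughly 45 percent figure
print(f"network: ${network / 1e6:.0f}M, of which optics: ${optics / 1e6:.0f}M")
```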

[Image: composable infrastructure with Intel silicon photonics]

In the datacenter, links dropping down from top of rack switches to servers are all based on copper cables, while links from top of rack switches to middle of row or end of row aggregation switches are making the conversion to optical cables. The reason is simple. Copper has been cheaper, but there is an inverse relationship between the length of a copper cable and the bandwidth you can drive through it; the higher the bandwidth, the shorter the cable has to be to get a clean signal.
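To give a sense of how tight that trade-off is, here are ballpark reaches for passive copper cables at various per-lane signaling rates – industry rules of thumb shown for illustration, not figures from Intel:

```python
# Rough, illustrative passive copper (DAC) reach versus per-lane signaling
# rate; the point is the inverse relationship, not the exact meters.
approx_copper_reach_m = {
    10: 7,   # roughly 7 m at 10 Gb/sec per lane
    25: 5,   # roughly 3 m to 5 m at 25 Gb/sec per lane
    50: 3,   # roughly 2 m to 3 m at 50 Gb/sec per lane
}
for gbps, meters in approx_copper_reach_m.items():
    print(f"{gbps} Gb/sec per lane: about {meters} m of passive copper")
```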

With optical links, the bandwidth is not affected by distance – photons are not as fussy as their fat electron relatives – which is why switch to switch links in hyperscale datacenters (and indeed any large datacenters) have been optical for some time, and it is here that Intel is beginning its journey with silicon photonics, bringing its volume manufacturing and engineering to bear against established players in this space. But make no mistake. Intel wants nothing short of utterly transforming the datacenter, putting optical transceivers inside of compute and networking chip packages and eventually down in the dies themselves.

In the past, network scale was limited by the size and expense of routers, but with these fabric architectures, you scale by adding more and more switches and fiber optic cables to link them. So, for instance, a Facebook datacenter might have three buildings with a total of 750,000 square feet of space. Each building can house around 200,000 servers and has around 10,000 switches linking this gear together, for a total of 600,000 machines and 30,000 switches, all linked with 21 million feet of single mode fiber optic cable.
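Spelled out, that campus math looks like this:

```python
# The campus figures cited above, spelled out.
buildings = 3
servers_per_building  = 200_000
switches_per_building = 10_000

servers  = buildings * servers_per_building     # 600,000 machines
switches = buildings * switches_per_building    # 30,000 switches
fiber_ft = 21_000_000                           # single mode fiber across the campus

print(f"{servers:,} servers, {switches:,} switches, "
      f"~{fiber_ft / switches:,.0f} feet of fiber per switch on average")
```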

The hyperscalers moved to single mode fiber a long time ago, knowing they would have to traverse long distances, but they want this fiber and the optical transceivers to get a lot less expensive and represent a lower portion of their datacenter spending. This was a big topic of discussion at the Open Compute Summit in March of this year. It is not a coincidence, of course, that Intel, the largest chip manufacturer in the world, the dominant supplier of compute engines in the datacenter, and a networking chip supplier of growing importance, is working on silicon photonics that will bring optical technologies down to chips and allow the lasers that drive optical signaling to be integrated with digital circuits. That will not only lower the power requirements of photonics, but also lower the costs and give Intel another reason to charge a premium for its processing and networking chips.

[Image: Intel silicon photonics total addressable market]

The 100 Gb/sec networking market is starting to ramp, and has taken a bit longer to materialize than Intel had expected. Fortunately for Intel, the slower ramp of 100 Gb/sec technologies has coincided with its initial development of 100 Gb/sec transceivers based on its silicon photonics technologies, which also took a little longer than expected to perfect. (The word on the street in early 2015, when they were expected to come to market, was that they ran a little hot.) The hyperscalers are doing their ramps to 100 Gb/sec Ethernet right now, starting in the third quarter, according to Bjorlin, and other large enterprises, cloud builders, and HPC shops will follow suit.

“The landscape is continually evolving,” Bjorlin says. “But it is exciting, with machine to machine traffic growing at more than 2X per year, and silicon photonics is exactly what the market has been waiting for – a high bandwidth, high density, and high volume opportunity to really take this to scale. We are right on time for the roll out of this next wave.”

[Image: the future of Intel silicon photonics]

The total addressable market will scale up as more datacenters move to 100 Gb/sec and then 200 Gb/sec and 400 Gb/sec backbones and as optical links move into the rows and down into the racks. We also think, and have previously discussed, that silicon photonics will eventually, in effect, replace the motherboard in a system, breaking machines down into composable blocks of compute, main memory, adjacent non-volatile memory, and I/O devices like legacy network links and archival storage, all of which will be able to be replaced independently of each other.

Right now, Intel is doing discrete optical transceivers that integrate lasers on their chips (and it will presumably use its manufacturing prowess to drive the cost of these transceivers down), but over time there will be more integration. Intel will be able to put the transceivers onto Xeon and Xeon Phi packages or Altera FPGAs as well as on its Ethernet and Omni-Path networking switch chips and adapter cards to drive down the cost of optics on both ends of any link. Eventually, silicon photonics lasers will be etched onto processors, much as Ethernet and PCI-Express controllers are on Xeon and Xeon Phi processors today.

[Image: Intel silicon photonics future, continued]

The silicon photonics transceivers that Intel is launching this week at its Intel Developer Forum are not tied to its own protocols, and can be used with Ethernet, InfiniBand, and Omni-Path switches from any vendor (including Intel). The silicon photonics devices launched by Intel this week are more than just electrical to optical converters; they include performance monitoring and forward error correction features, both of which are necessary as bandwidth increases. There are two flavors of switch to switch links, one that uses the PSM4 standard, which is shipping now, and another that supports the CWDM4/CLR4 standard that is sampling now and that will ship in volume later this year. Pricing was not divulged for either. Bjorlin says that Intel has a broad base of cloud builders, hyperscalers, and server and networking OEMs kicking the tires on its optical transceivers.
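For context on those two standards – these are general facts about the multi-source agreements, not Intel-specific claims – both deliver 100 Gb/sec as four 25 Gb/sec lanes, but PSM4 puts each lane on its own parallel single mode fiber while CWDM4 multiplexes four wavelengths onto a duplex fiber pair:

```python
# General MSA facts for context (not Intel-specific specifications).
transceivers = {
    #          four lanes of 25 Gb/sec each in both cases
    "PSM4":  {"lanes": 4, "gbps": 25, "fibers": 8, "reach_m": 500},   # parallel single mode
    "CWDM4": {"lanes": 4, "gbps": 25, "fibers": 2, "reach_m": 2000},  # four wavelengths, duplex pair
}
for name, t in transceivers.items():
    print(f"{name}: {t['lanes'] * t['gbps']} Gb/sec over "
          f"{t['fibers']} fibers, roughly {t['reach_m']} m reach")
```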

“As the data rates go up to 50 Gb/sec and higher for links, and as time goes on, we will see the inevitability of multichip packaging and then on-chip packaging of photonics,” Bjorlin says. “I would put it in the 2019 to the 2021 timeframe, so 2020 is a good average to plan for.”

[Image: Intel silicon photonics hybrid laser]

The trick that Intel and other silicon photonics companies are working on is to shrink a laser down to a chip that can be put into an electronics package. Intel has come up with a hybrid integrated laser that takes a lasing material and bonds it onto a normal 300 millimeter (12 inch) silicon wafer, and after this bonding is done, all other steps to make a chip that incorporates those lasers are normal chip manufacturing techniques that it has already perfected. Photolithography defines those lasers and couples them to circuits.

“It is really a hybrid between the light-emitting materials and the traditional photonics materials, in a sense, but the laser itself is created in silicon, and this is the real differentiator that we bring,” says Bjorlin. “This integration is what gives us the ability to scale on a number of vectors. We can put many lasers next to each other and we can scale out, or we can make different color lasers right next to each other and combine them into a single optical fiber and scale through wavelength division multiplexing. Or we can scale on those two vectors at once. Other silicon photonics that are out there are not as integrated. The other commercially available silicon photonics products out there use externally created lasers based on the III-V material system that are then completely processed off chip and are placed either on or next to a silicon photonics integrated circuit, and these have to be meticulously aligned with high precision instruments to couple the photonics to the circuit. It is not really a full silicon platform.”
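Those two scaling vectors multiply, as this simple sketch with illustrative numbers shows:

```python
# Aggregate bandwidth = parallel fibers x wavelengths per fiber x data
# rate per wavelength. The numbers below are illustrative only.
def aggregate_gbps(fibers, wavelengths, gbps_per_lambda):
    return fibers * wavelengths * gbps_per_lambda

print(aggregate_gbps(fibers=4, wavelengths=1, gbps_per_lambda=25))  # 100 Gb/sec, parallel lanes only
print(aggregate_gbps(fibers=1, wavelengths=4, gbps_per_lambda=25))  # 100 Gb/sec, wavelengths only
print(aggregate_gbps(fibers=4, wavelengths=4, gbps_per_lambda=50))  # 800 Gb/sec, both vectors at once
```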

It always comes back to the fab for Intel, and that is what drives this business into new and adjacent markets. Intel is a manufacturer first and a chip designer second, and the important and distinguishing thing is that it is both in a world where most companies do one or the other because they cannot afford to do otherwise.
