Silicon photonics has emerged as one of those areas with such far-reaching potential that its challenges and benefits tend to be clouded in generalities. Light as the main medium does indeed promise to alter fields including biological and chemical sensing, navigation, radio frequency sensing, and communications, but for our purposes here, the potential within large-scale computing is of greater interest. On that point, take a look at the proceedings of any recent technical conference on photonic devices, noting the range of problem areas, not to mention potential applications. And these just skim the topical surface.
On the computing interface and integration fronts, IBM has produced new chips that integrate optical and electrical components on the same die, HPE Labs has produced a free-space optical interconnect, and Intel, which has long-standing ambitions to replace copper with photonics in future datacenters, is rolling out silicon photonics switching this year. All three of these vendors, along with others, have pushed funds toward research and development and made solid cases for the performance potential. But despite all of these (and many other) efforts, why does this technology, with its promise of upending datacenter efficiency trends, still sit at the fringes? And to what extent is any of it ready for primetime?
We shouldn’t hold our breath for photonic devices to make it to market and hit large systems within the next year, or even the next few years, says David Calhoun, a PhD fellow at Columbia University’s Lightwave Research Laboratory who focuses on the integration of photonic devices with larger systems. There are some examples now of such devices appearing on the horizon for high performance computing (HPC), but for them to become ubiquitous there is quite a leap: up to ten years before they are a common element in extreme-scale systems, he says.
What’s interesting here is that the roadblocks to that point are very much rooted in the manufacturability of such devices. To reach the economies of scale needed to push photonic devices to market at a reasonable price, key technological barriers must be broken, and there must be enough incentive from the market to make photonic devices something key vendors invest in. In other words, there is something of a chicken-and-egg problem. Without the ability to prototype and implement silicon photonics-based devices inside of systems, manufacturability efforts stagnate, and nothing can progress toward photonic devices that can be created and tested at scale, following a manufacturing process that is far from simple, even for companies that already produce transistors and chips.
Calhoun is one of several participants in the federally funded Integrated Photonics Institute for Manufacturing Innovation, which is helping to push the research, development, and ultimate production of photonic devices for the datacenter, military, communications, and other markets. Although there are disparate development efforts happening around the world, particularly at the component level, there are not yet any bold, comprehensive strategies to effectively integrate photonic devices into large systems at scale, at reasonable cost, and with the reliability required.
Much of this boils down to the fact that the field itself is scattered, and not just in terms of application areas. Rather, the way to tackle the photonics integration problem itself is not clear, or more accurately, it has not been settled on. Just as there are camps that believe silicon photonics will change supercomputing in the next decade (and an evenly balanced camp that believes it will go nowhere), the research effort itself is bifurcated. This makes the challenge and opportunity for Calhoun’s group at the Institute even greater, but it doesn’t help us settle the question of where silicon photonics stands in 2016, not to mention five or even ten years from now.
“One of the biggest challenges, especially for photonics in computing, is bridging the gap between the novel functionality of these devices and the real need from the application side,” Calhoun says. “Data has a particular profile on a computing system and depending on the application, the way it moves through the network can change. So we’re spending a lot of effort in electrical networks looking for a happy medium—no one network can solve all problems at this point, electrically, with silicon photonics, or with some hybrid combination of those.”
Researchers and early developers of photonics for computing systems are looking at the integration problem in three ways: rapid prototyping, 3D integration, and a co-existence model in which photonics are wrapped with electronic components on a single die. Of these, chipmakers like Intel and IBM are interested in the latter for obvious reasons, but also because they have the ability to manufacture such devices at scale using existing technologies. For the same reasons, however, a 3D or stacked approach, in which the photonics, electrical components, and compute layer are piled onto one another, kept separate but interacting, also holds appeal.
Although these are all separate modes of research, they are interdependent, which complicates things. One cannot develop a hybrid approach to integration without understanding, and being able to implement, the individual approaches, at least not at this stage. And rapid prototyping on its own is not sufficient at manufacturing scale because it is not yet reliable and robust enough.
Some technologies already exist in the market and are being used in datacenters and HPC where there are applications for photonics, for instance, connecting one side of a datacenter to another. Companies like Luxtera make small form-factor pluggable devices that provide such an optical connection. But if we think beyond those use cases to powering large-scale supercomputers with such devices, they have to not only work, but be manufacturable, a problem Calhoun’s research center is seeking to address.
“We have significant plans to make these things widely available and provide a good basis for people in the data communications realm to say, here is a networking or computing problem, we examine and understand the problem, present a photonic architecture from the ground up and a manufactured product with full interfacing to implement it with all the engineering steps in between.” In essence, the government funding is pushing a new industry in this regard, starting with one of the biggest problems (interfaces and integration) and following it through to the manufacturing challenges.
At that intermediate step of problem solving are some tough physics and computer science challenges. “For instance, from a computing perspective, these devices tend to have a lot of input and output characteristics, so being able to get all the right data on and off the chip that holds the architecture is but one challenge.” There are already many fast prototyping solutions that tackle this problem, “but pretty good isn’t good enough when it comes to making something manufacturable at scale. We need it to be excellent—so there’s still a little bit of a revolution with the technology to be had, which we’re working on.”
Well, technically, IBM is no longer a chip manufacturer; it has no manufacturing capabilities, as it quit the foundry business. I thought Intel was shipping silicon photonics switches for data centers this year?
IBM still has its research labs, and I’ll bet that they have some non-mass-production wafer etching machines still available for making small numbers of test dies. So let the production fabs have the expense of running the large runs of fully developed designs for the marketplace. I’ll bet that both Samsung and GlobalFoundries will be happy to accept IBM’s business once the parts are out of development and ready for large-scale production, just as IBM is glad to get the expensive-to-maintain large-scale chip fab upkeep off its back. Let Samsung and GlobalFoundries (both chip fabrication/IP technology sharing partners with IBM for a few years now) properly amortize across an entire market the massive multi-billion-dollar chip fab upkeep and operating costs, so IBM can focus on that world-class R&D of new technologies.
Let Intel keep those chip fabs and their need to operate at as close to 100% capacity as possible or bleed profitability, and let Intel incur the upkeep expenses. Why, one of Intel’s newest fab facilities sits in mothballs waiting to be rigged out with equipment, if only the market could stop shrinking for long enough.
I’m no big fan of Big Blue; they could become like ARM Holdings for all I care and just keep licensing CPU and other IP to the third-party market, but those IBM labs are the best in the world, and they are located around the globe doing some damn good R&D! And still IBM designs some very good CPU cores for its own internal mainframe/server use, and its OpenPower-licensed Power/Power8 designs for third-party licensees. With the PC market shrinking year in and year out, chip fabs for a single company’s internal use can become giant money pits, while the third-party chip fab marketplace can keep its expensive beasts supplied with more than one company’s production and thus spread the costs across a larger number of clients.
Oh, and just to remind you of something you already know: IBM gave GlobalFoundries (GF) the chip fab plants, along with a large Wad-O-Cash to sweeten the deal, so GF is contractually bound to supply IBM’s chip fab needs for a while, and IBM is assured of its supply of fab capacity from GF, and probably from Samsung as well. There has been a lot of IP back-scratching among Samsung, GF, and IBM for a good while now with that technology-sharing foundation/consortium they have had going!
I’m interested to know why the people who think silicon photonics isn’t going anywhere believe that.
And isn’t Intel’s Knights Hill supposed to have silicon photonics integrated into either the package or the chip itself?