Counting The Cost Of Under-Utilized GPUs – And Doing Something About It
The compute engines keep changing as the decades pass, but the same old problems keep cropping up in slightly different form. …
Maximizing the aggregate amount of compute that can be brought to bear for any given pile of money is what traditional high performance computing is all about. …
Multiplying things by two and putting them on a roadmap is easy, even if it does take a lot of courage to do that. …
The long overdue upgrade to PCI-Express 4.0 is finally coming to servers, allowing for high bandwidth links between processors and peripherals. …
A system is more than its central processor, and perhaps at no time in history has this been more true than right now. …
The slowing of Moore’s Law might be curbing CPU compute capacity increases in recent years, but innovation has come at a steady drumbeat for the interconnects used inside servers and between nodes in distributed computing systems. …
While there are plenty of distributed applications that are going to chew through the hundreds of gigabits per second of bandwidth per port that modern Ethernet or InfiniBand ASICs deliver inside of switches, there are still others that might benefit from having a more streamlined stack that is also more malleable and composable. …
Applications do not need to use all elements of a system all the time, and they rarely need all of them at the same time for that matter. …
No matter what, system architects are always going to have to contend with one – and possibly more – bottlenecks when they design the machines that store and crunch the data that makes the world go around. …
When IBM started to use the word “open” in conjunction with its Power architecture with the formation of the OpenPower Foundation three years ago, Big Blue was not confused about what that term meant. …
All Content Copyright The Next Platform