CXL: from promise to reality with real silicon on customer platforms

Sponsored Post: We all know that training and inferencing AI models need a lot of CPU muscle, but we don't always appreciate how important other components are in supporting AI and ML applications.

DRAM plays a critical role by hosting the vast amounts of data these models crunch and making it readily available to the CPU as and when it's needed, avoiding a debilitating bottleneck.

For cloud providers and hyperscalers, having sufficient memory on tap in their data center systems will prove essential to handling the kinds of AI/ML workloads that customers in multiple industries are starting to demand in volume.

Astera Labs makes that possible by allowing DRAM to be attached directly to a server PCI Express (PCIe®) slot using its Compute Express Link (CXL™) Leo E-Series Smart Memory Controllers. CXL then allows a system's CPUs, DRAM, accelerators, and other components to send and receive large amounts of data quickly over a common low-latency, high-bandwidth interconnect.
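To give a sense of how software consumes this expanded capacity: on Linux, a CXL-attached memory expander is typically exposed as a CPU-less NUMA node, so applications can target it with ordinary NUMA APIs. The following is a minimal sketch only, assuming libnuma is installed and (purely for illustration) that the CXL memory appears as node 1 on this particular system:

```c
/* Minimal sketch: placing an allocation on a CXL-attached memory node.
 * Assumptions: libnuma is available (compile with -lnuma), and the CXL
 * expander is exposed as NUMA node 1 (hypothetical; check `numactl -H`). */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int cxl_node = 1;          /* assumption: CXL memory shows up here */
    size_t size = 1UL << 30;   /* 1 GiB */

    /* Ask the kernel to back this buffer with memory from the
     * CXL-attached node rather than CPU-local DRAM. */
    void *buf = numa_alloc_onnode(size, cxl_node);
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return 1;
    }

    memset(buf, 0, size);      /* touch the pages so they are placed */
    printf("1 GiB bound to NUMA node %d\n", cxl_node);

    numa_free(buf, size);
    return 0;
}
```

Equivalently, an entire unmodified process can be steered onto such a node with `numactl --membind`, which is one reason CXL memory expansion requires no application changes.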

Astera Labs' Leo P-Series chips already support CXL memory pooling, whereby attached DRAM can be accessed by multiple host CPUs, delivering more throughput per CPU and making more efficient use of the DRAM across them.

Astera Labs is already shipping these features in its Leo Memory Connectivity Platform, billed as the industry's first purpose-built CXL platform to support memory expansion, memory pooling, and memory sharing. Version 3.0 of the CXL specification will also add rack-level disaggregation, enabling even more flexible memory pooling and sharing across host servers, and Astera Labs is working to enable these features in its next-generation Leo controllers.

Want to know more? Watch this video featuring Avinash Sharma, Astera Labs' Head of Field Applications Engineering, speaking at the SC22 conference in November 2022.

You'll hear Avinash explain how the company is working to develop interoperable software for industry-standard workloads using AMD's latest fourth-generation EPYC processors and Supermicro server chassis.

If you're a cloud service provider or hyperscaler looking to boost AI/ML performance cost-effectively using this innovative DRAM connectivity technology, you can't afford to miss it.

Sponsored by Astera Labs.
