The Next Platform
Compute

Putting In-Memory Processing Through The Paces

February 4, 2020  Timothy Prickett Morgan

From a conceptual standpoint, embedding processing within main memory makes sense: it would eliminate many layers of latency between compute and memory in modern systems, and it would let the parallelism inherent in many workloads map elegantly onto distributed compute and storage elements to speed up processing. …
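The intuition above can be made concrete with a toy model. The sketch below is purely illustrative and not a model of any real processing-in-memory device: the bank count, slice sizes, and the "words moved" metric are all assumptions. It contrasts a conventional reduction, where every word crosses the memory bus to the CPU, with a PIM-style reduction, where each bank reduces its own slice locally and only the partial sums travel.

```python
# Toy model of processing-in-memory (PIM) vs. conventional compute.
# Hypothetical illustration only: bank count, slice size, and the
# "words moved" metric are assumptions, not measurements.

NUM_BANKS = 16
WORDS_PER_BANK = 1024

# Each memory bank holds one slice of a large array.
banks = [[1] * WORDS_PER_BANK for _ in range(NUM_BANKS)]

def conventional_sum(banks):
    """CPU pulls every word across the memory bus, then reduces."""
    words_moved = 0
    total = 0
    for bank in banks:
        for word in bank:        # every element crosses the bus
            total += word
            words_moved += 1
    return total, words_moved

def pim_sum(banks):
    """Each bank reduces its own slice 'inside' memory; only the
    per-bank partial sums cross the bus to the host."""
    partials = [sum(bank) for bank in banks]   # local, parallelizable
    words_moved = len(partials)                # one result per bank
    return sum(partials), words_moved

total_a, moved_a = conventional_sum(banks)
total_b, moved_b = pim_sum(banks)
assert total_a == total_b                      # same answer either way
print(moved_a, moved_b)                        # far less data movement in PIM
```

In this toy setup the conventional path moves 16,384 words while the PIM path moves 16, which is the kind of data-movement reduction that makes the idea attractive on paper; the hard part, as the article goes on to discuss, is everything else.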

About

The Next Platform is published by Stackhouse Publishing Inc in partnership with the UK’s top technology publication, The Register.

It offers in-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.



All Content Copyright The Next Platform