The Next Platform

in-memory processing

Compute

Putting In-Memory Processing Through The Paces

February 4, 2020 Timothy Prickett Morgan

From a conceptual standpoint, embedding processing within main memory makes sense: it would eliminate many layers of latency between compute and memory in modern systems, and it would let the parallelism inherent in many workloads map elegantly onto distributed compute and storage elements to speed up processing. …


