On today’s podcast episode of “The Interview” with The Next Platform, we talk with computer architecture researcher Roman Kaplan about the role memristors might play in accelerating common machine learning algorithms including K-means. Kaplan and team have been looking at performance and efficiency gains by letting ReRAM pick up some of the data movement tab on traditional architectures.
Kaplan, a researcher at the Viterbi Faculty of Electrical Engineering in Israel, and his team have produced some interesting benchmarks comparing K-means and K-nearest neighbors computations on CPU, GPU, FPGA, and, most notably, Micron's Automata Processor against their own memristor-based accelerator approach, called PRINS, which is detailed in this paper.
We talk about some of the differences between in-storage and in-memory processing (an important point given the linked paper) and where memristors could fit despite some challenges on the endurance, energy consumption, and manufacturability fronts.
Aside from the general introduction, we also walk through some of his team’s benchmarking results comparing several architectures, which are shown in the table below.
PRINS is a massively parallel and scalable SIMD architecture with in-situ processing capabilities. It uses associative processing in a crossbar array (based on resistive random access memory or ReRAM) instead of traditional logic to allow the execution of any logic or arithmetic operation in a fixed number of cycles. Each of these circuits can contain many millions of data rows with each row serving as an associative processing unit.
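To give a rough intuition for associative, row-parallel processing, here is a toy NumPy sketch. This is not the PRINS implementation or its ReRAM circuitry; it only models the idea that every memory row evaluates the same operation against a broadcast value at once, as in a nearest-centroid step of K-means. All names and values here are illustrative assumptions.

```python
import numpy as np

# Toy model of associative processing: each memory row holds one data
# point, and a single broadcast operation is applied to every row at
# the same time, mimicking in-situ compute in a crossbar array.
rng = np.random.default_rng(0)
rows = rng.integers(0, 16, size=(8, 4))   # 8 associative rows of 4 values each
query = np.array([3, 7, 2, 9])            # a centroid broadcast to all rows

# One "associative step": every row computes its L1 distance to the
# query simultaneously (NumPy's vectorization stands in for row-parallel
# hardware); the best-matching row is then selected.
distances = np.abs(rows - query).sum(axis=1)
nearest = int(distances.argmin())
```

In hardware, the appeal is that the compare happens where the data already sits, so the per-row cost does not grow with the number of rows; this sketch only mirrors the dataflow, not the energy or timing behavior.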
While there is a lot of overhead to mapping an entire dataset to PRINS, Kaplan says the results show a performance improvement of over 3X with better power efficiency for both algorithms.