High Expectations for Low Precision at CERN

The last couple of years have seen a steady drumbeat for the use of low precision in a growing number of workloads, driven in large part by the rise of machine learning and deep learning applications and the ongoing desire to cut back on power consumption.

The interest in low precision is rippling through the high-performance computing (HPC) field, spanning the organizations that run the application sets to the tech vendors that create the systems and components on which the work is done.

The Next Platform has kept a steady eye on developments in the deep-learning and machine-learning areas, where the focus continues to be on low precision. Chinese tech company Baidu has been aggressive in pursuing low precision for its deep-learning work. Baidu officials last month told us that the search giant not only is extending its open-source DeepBench deep-learning benchmarking tool beyond training and into inference, but also that the benchmarking can be done at lower precision, part of the effort to make training and inference faster for applications such as speech and image recognition.
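To make the tradeoff concrete – this is a minimal NumPy sketch for illustration, not DeepBench itself or Baidu's code, and the layer shapes are hypothetical – the matrix multiplications at the heart of inference can be run in half precision, cutting the memory footprint of the operands in half at the cost of some numerical error relative to an FP32 baseline:

```python
import numpy as np

# Hypothetical layer shapes chosen for illustration; DeepBench defines its own kernel sizes.
rng = np.random.default_rng(0)
x32 = rng.standard_normal((256, 1024), dtype=np.float32)  # activations
w32 = rng.standard_normal((1024, 512), dtype=np.float32)  # weights

# FP32 baseline versus the same GEMM with both operands cast to FP16.
y32 = x32 @ w32
x16, w16 = x32.astype(np.float16), w32.astype(np.float16)
y16 = (x16 @ w16).astype(np.float32)

rel_err = np.abs(y16 - y32).max() / np.abs(y32).max()
print(f"operand bytes, FP32: {x32.nbytes + w32.nbytes}")
print(f"operand bytes, FP16: {x16.nbytes + w16.nbytes}")
print(f"max error relative to FP32 baseline: {rel_err:.4f}")
```

On hardware with native FP16 arithmetic the lower precision also brings higher throughput; on a plain CPU run of this script the visible benefit is mostly the smaller memory footprint.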

Chip makers also are driving down the precision of devices on their way to market, as vendors and their customers look for ways to retain the accuracy of their work while cutting power consumption for artificial intelligence (AI) and other emerging workloads. They are looking at low- and mixed-precision capabilities, and vendors such as Intel (with its upcoming Knights Mill many-core processors, an AI-focused variant of the Knights Landing Xeon Phi) and Nvidia (with its GPUs based on the Pascal architecture) are pursuing more balance in precision. IBM officials have also talked about low precision in the company's deep-learning efforts.
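What mixed precision typically means in practice is storing and moving data in a narrow format while accumulating in a wider one. The sketch below is a generic NumPy illustration of that pattern under that assumption – not any particular vendor's implementation – with operands held in FP16 and the dot product accumulated in FP32:

```python
import numpy as np

rng = np.random.default_rng(1)

# Operands stored in half precision, halving memory traffic...
a = rng.standard_normal(4096, dtype=np.float32).astype(np.float16)
b = rng.standard_normal(4096, dtype=np.float32).astype(np.float16)

# ...with the accumulation carried out in single precision (the "mixed" part).
mixed = np.dot(a.astype(np.float32), b.astype(np.float32))

# Accumulating end to end in half precision loses more accuracy.
pure_fp16 = np.dot(a, b)

# Double precision serves as the reference answer.
reference = np.dot(a.astype(np.float64), b.astype(np.float64))

print(f"FP64 reference          : {reference:.6f}")
print(f"FP16 storage, FP32 accum: {mixed:.6f}")
print(f"FP16 end to end         : {float(pure_fp16):.6f}")
```

In real hardware the wider accumulation happens inside the multiply-accumulate units rather than through an explicit upcast, but the accuracy argument is the same.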

Not every HPC workload is moving away from traditional double precision, though it is becoming increasingly clear that not all applications require it, and some are sticking with it even in the areas of deep learning and machine learning. Engineers in China are taking the massive Sunway TaihuLight supercomputer – the world's fastest – in interesting directions. Not only are they looking to show that high-performance deep-learning work can be done on a CPU-only architecture rather than by leveraging GPUs, but they are also continuing to focus on double-precision floating point even as the broader trend is toward reduced precision.
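A small worked example shows why some codes hold on to FP64. Long sequential accumulations of the kind found in time-stepping simulations drift visibly in single precision, while double precision stays close to the exact answer; the NumPy sketch below is a generic illustration, not a claim about any specific TaihuLight workload:

```python
import numpy as np

# Add 0.1 ten million times; the exact answer is 1,000,000.
n = 10_000_000
steps32 = np.full(n, 0.1, dtype=np.float32)
steps64 = np.full(n, 0.1, dtype=np.float64)

# cumsum accumulates sequentially, the way an iterative simulation would.
total32 = np.cumsum(steps32)[-1]
total64 = np.cumsum(steps64)[-1]

print(f"single precision total: {float(total32):,.2f}")  # drifts noticeably from 1,000,000
print(f"double precision total: {float(total64):,.2f}")  # matches to the digits shown
```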

Researchers at CERN, the stewards of the Large Hadron Collider (LHC), also are putting a greater focus on precision as they continue to collect massive amounts of data from their experiments. The datasets are the results of experiments in which high-energy particle beams traveling at almost the speed of light are made to collide, creating clouds of particles – some of them well-known, others less so – that help physicists better understand the basic makeup of the universe. Most famously, it was the LHC that helped researchers discover the long-sought Higgs boson five years ago. More recently, CERN scientists last week announced the discovery of a new subatomic particle: a baryon, a particle made up of smaller constituents called quarks. The significance of the new baryon – which is about four times heavier than a proton and consists of two charm quarks and one light quark – is that it could help explain how matter in the universe sticks together.

Since revving up the LHC in 2008, CERN scientists have amassed a huge amount of stored data, reportedly passing the 200-petabyte mark last month. Inside the LHC detectors, particles collide about 1 billion times per second, and those collisions collectively generate roughly a petabyte of data per second – far more than can be recorded, so the experiments filter the stream and archive only the most interesting events.
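Taken at face value, those figures also give a sense of the average size of a single collision record and of why so much of the raw stream has to be discarded. The arithmetic below is back-of-the-envelope only, using the rough numbers quoted above rather than official CERN specifications:

```python
# Back-of-the-envelope arithmetic on the reported LHC data rates.
# Assumed rough inputs from the figures above: ~1e9 collisions/s and ~1 PB/s of raw data.
PB = 10**15                       # bytes in a petabyte (decimal convention)
collisions_per_second = 1e9
raw_bytes_per_second = 1 * PB

avg_bytes_per_collision = raw_bytes_per_second / collisions_per_second
print(f"average raw data per collision: ~{avg_bytes_per_collision / 1e6:.0f} MB")  # ~1 MB

# At that raw rate, the 200 PB archived since 2008 corresponds to only a few
# minutes of unfiltered output -- hence the aggressive event filtering.
seconds_for_200pb_raw = 200 * PB / raw_bytes_per_second
print(f"time to produce 200 PB of raw data: ~{seconds_for_200pb_raw:.0f} seconds")
```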

At the EPS International Conference on High Energy Physics this week, CERN researchers are presenting dozens of results from existing LHC datasets – over the past two years, the machine has generated huge amounts of data from particle collisions – and from further study of the Higgs boson. To extract as much useful information and insight as possible from that data, the scientists have begun to plumb the depths of precision.

The group has been able to supplement the predictions of the Standard Model – which describe how the Higgs boson interacts with other particles – with measurements from the ATLAS and Compact Muon Solenoid (CMS) experiments to better understand what happens to the Higgs boson during its short life, and how it decays to fundamental particles such as quarks and leptons.

The LHC data goes beyond the Higgs boson, with CERN scientists this week discussing the results of tests connected to the search for dark matter. They also will talk about the high-precision measurements that led to the discovery of the newest baryon, new information on matter-antimatter asymmetry, and results from heavy-ion collision experiments.

