March 22, 2017 Jeffrey Burt
The future of Moore’s Law has become a topic of hot debate in recent years, as the challenge of continually shrinking transistors and other components has grown.
Intel, AMD, IBM, and others continue to drive the development of smaller electronic components as a way of ensuring advancements in compute performance while driving down the cost of that compute. Processors from Intel and others are now moving from 14 nanometer processes down to 10 nanometers, with plans to continue on to 7 nanometers and smaller.
For more than a decade, Intel had relied on a tick-tock manufacturing schedule to keep up with …Read more
March 20, 2017 Jeffrey Burt
After years of planning and delays after a massive architectural change, the Blue Waters supercomputer at the National Center for Supercomputing Applications at the University of Illinois finally went into production in 2013, giving scientists, engineers and researchers across the country a powerful tool to run and solve the most complex and challenging applications in a broad range of scientific areas, from astrophysics and neuroscience to biophysics and molecular research.
Users of the petascale system have been able to simulate the evolution of space, determine the chemical structure of diseases, model weather, and trace how virus infections propagate via air …Read more
March 15, 2017 Jeffrey Burt
The Smith-Waterman algorithm has become a linchpin in the rapidly expanding world of bioinformatics, the go-to computational model for DNA sequencing and local sequence alignments. With the growth in recent years in genome research, there has been a sharp increase in the amount of data around genes and proteins that needs to be collected and analyzed, and the 36-year-old Smith-Waterman algorithm is a primary method for aligning that data.
The key to the algorithm is that rather than examining an entire DNA or protein sequence, Smith-Waterman uses a technique called dynamic programming in which the algorithm looks at segments of …Read more
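The dynamic programming approach described above can be sketched in a few lines of Python. This is a minimal illustration, not the article's implementation; the scoring parameters (match +3, mismatch -3, gap -2) are illustrative defaults:

```python
# Minimal sketch of the Smith-Waterman dynamic-programming recurrence.
# Scoring values are illustrative, not taken from the article.

def smith_waterman(a, b, match=3, mismatch=-3, gap=-2):
    """Return the best local alignment score between sequences a and b."""
    # H[i][j] = best score of any local alignment ending at a[i-1], b[j-1].
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Clamping at zero is what makes the alignment local:
            # a poorly scoring prefix is dropped rather than dragging
            # the score down, unlike global (Needleman-Wunsch) alignment.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("TGTTACGG", "GGTTGACTA"))  # best local alignment score
```

Because each cell depends only on its three neighbors, the table fills in a single pass, which is also what makes the algorithm amenable to the parallel hardware acceleration common in bioinformatics pipelines.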
March 14, 2017 Jeffrey Burt
Nvidia has staked its growth in the datacenter on machine learning. Over the past few years, the company has rolled out features in its GPUs aimed at neural networks and related processing, notably with the “Pascal” generation GPUs, which include features explicitly designed for the space, such as 16-bit half precision math.
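As a small aside on what 16-bit half precision trades away for its throughput gains, this stdlib-only Python sketch rounds values through IEEE 754 half precision (the `'e'` format in `struct`, the same 16-bit layout FP16 GPU math uses):

```python
import struct

# Sketch of FP16 rounding behavior: struct's 'e' format is IEEE 754
# half precision, the same 16-bit format used by GPU FP16 math.
def to_fp16(x):
    """Round a Python float through IEEE 754 half precision and back."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Near 1.0, adjacent FP16 values are 2**-10 ~= 0.000977 apart,
# so smaller increments are lost entirely.
print(to_fp16(1.0001))  # rounds back down to 1.0
print(to_fp16(1.001))   # survives as the nearest FP16 value
```

The roughly three decimal digits of precision are enough for many neural network workloads, which is why trading accuracy for doubled arithmetic throughput pays off in this space.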
The company is preparing its upcoming “Volta” GPU architecture, which promises to offer significant gains in capabilities. More details on the Volta chip are expected at Nvidia’s annual conference in May. CEO Jen-Hsun Huang late last year spoke to The Next Platform about what he called the upcoming “hyper-Moore’s Law” …Read more
March 10, 2017 Jeffrey Burt
Google has always been a company that thinks big. After all, its mission since Day One was to organize and make accessible all of the world’s information.
The company is going to have to take that same expansive and aggressive approach as it looks to grow in a highly competitive public cloud market that includes a dominant player (Amazon Web Services) and a host of other vendors, including Microsoft, IBM, and Oracle. That’s going to mean expanding its customer base beyond smaller businesses and startups and convincing larger enterprises to store their data and run their workloads on its ever-growing …Read more
March 9, 2017 Jeffrey Burt
The lineup of ARM server chip makers has been a somewhat fluid one over the years.
There have been some that have come and gone (pioneer Calxeda was among the first to the party but folded in 2013 after running out of money), some that apparently have looked at the battlefield and chosen not to fight (Samsung and Broadcom, after its $37 billion merger with Avago), and others that have made the move into the space only to pull back a bit (AMD a year ago released its ARM-based Opteron A1100 systems-on-a-chip, or SOCs, but has since shifted most of …Read more
March 9, 2017 Jeffrey Burt
Google’s Cloud Platform is the relative newcomer on the public cloud block, and has a way to go before it is in the same competitive sphere as Amazon Web Services and Microsoft Azure, both of which deliver a broader and deeper range of offerings and larger infrastructures.
Over the past year, Google has promised to rapidly grow the platform’s capabilities and datacenters and has hired a number of executives in hopes of enticing enterprises to bring more of their corporate workloads and data to the cloud.
One area Google is hoping to leverage is the decade-plus of work and …Read more
March 2, 2017 Jeffrey Burt
Cloud computing makes a lot of sense for a rapidly growing number of larger enterprises and other organizations, and for any number of reasons. The increased application flexibility and agility that come from pooling shared infrastructure resources, along with the scalability and the cost efficiencies, are all key drivers in an era of ever-growing data.
With public and hybrid cloud environments, companies can offload the integration, deployment and management of the infrastructure to a third party, taking the pressure off their own IT staffs, and in private and hybrid cloud environments, they can keep their most business-critical data securely behind …Read more
March 1, 2017 Jeffrey Burt
Large enterprises are embracing NVM-Express flash as the storage technology of choice for their data-intensive and often highly unpredictable workloads. NVM-Express devices bring with them high performance – up to 1 million I/O operations per second – and low latency – less than 100 microseconds. And flash storage now has high capacity, too, making it a natural fit for such datacenter applications.
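A quick back-of-the-envelope check ties those two headline figures together: by Little's Law, sustaining 1 million IOPS at 100 microseconds per operation requires about 100 I/Os in flight at all times. This sketch uses the article's round numbers, not measurements from any particular device:

```python
# Back-of-the-envelope sketch using the article's headline numbers
# (1M IOPS, 100 us latency), not a measured device.
# Little's Law: concurrency = throughput * latency.
iops = 1_000_000      # I/O operations per second
latency_us = 100      # per-I/O latency in microseconds

# Outstanding I/Os the host must keep in flight to sustain that rate:
queue_depth = iops * latency_us // 1_000_000
print(queue_depth)  # 100
```

That level of concurrency is well within NVM-Express's deep command queues, which is part of why the protocol suits flash far better than interfaces designed around disk drives.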
As we have discussed here before, all-flash arrays are quickly becoming mainstream, particularly within larger enterprises, as an alternative to disk drives in environments where tens or hundreds of petabytes of data – rather than the …Read more
February 28, 2017 Jeffrey Burt
Exascale computing, which has been long talked about, is now – if everything remains on track – only a few years away. Billions of dollars are being spent worldwide to develop systems capable of an exaflops of computation, which is 50 times the performance of the most capacious systems on the current Top500 supercomputer rankings and will usher in the next generation of HPC workloads.
As we have talked about at The Next Platform, China is pushing ahead with three projects aimed at delivering exascale systems to the market, with a prototype – dubbed the Tianhe-3 – being prepped for …Read more