The dark and mysterious art of artificial intelligence and machine learning is neither straightforward nor easy. AI systems have been termed “black boxes” for this reason for decades now. We desperately continue to present ever larger, more unwieldy datasets to increasingly sophisticated “mystery algorithms” in our attempts to rapidly infer and garner new knowledge.
How can we try to make all of this just a little easier?
Hyperscalers with multi-million dollar analytics teams have access to vast, effectively unlimited compute and storage of all shapes and sizes. Huge teams of analysts, systems managers, resilience and reliability experts are standing up …
You can’t swing a good-sized cat without hitting an enterprise running Oracle software in some shape or form. If it’s not Oracle’s ubiquitous database, then it’s one of its middleware platforms or its enterprise applications in the Fusion suite or its predecessors in the Oracle, Siebel, PeopleSoft, and JD Edwards suites.
Currently Oracle boasts 430,000 customers running its software – that’s quite an installed base. And it’s all teed up to become quite a battleground. Why?
Six months or so ago, news broke that Oracle was laying off a large number of hardware folks. Something like 2,500 Sparc and Solaris …
There continues to be an ongoing push among tech vendors to bring artificial intelligence (AI) and its various components – including deep learning and machine learning – to the enterprise. The technologies are being rapidly adopted by hyperscalers and in the HPC space, and enterprises stand to reap significant benefits by also embracing them.
As we’ve noted many times here at The Next Platform, at the most basic level, machine learning and deep learning can enable enterprises to quickly sort through and analyze the massive amounts of data that they’re collecting to find patterns that can lead to better …
We all know about the Top 500 supercomputing benchmark, which measures raw floating point performance. But over the past several years there has been talk that it no longer represents real-world application performance.
This has opened the door for a new benchmark to come to the fore, in this case the High Performance Conjugate Gradients, or HPCG, benchmark.
Here to talk about this on today’s episode of “The Interview” with The Next Platform is one of the creators of HPCG, Sandia National Lab’s Dr. Michael Heroux. Interestingly, Heroux co-developed HPCG with one of the founders of the Top …
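For a sense of what HPCG exercises, here is a minimal sketch of the conjugate gradient iteration the benchmark is built around: repeated matrix-vector products, dot products, and vector updates, which stress memory bandwidth far more than LINPACK’s dense factorizations do. This is a toy dense NumPy version for illustration only, not the HPCG reference code, which works on a large sparse problem.

```python
# Minimal conjugate gradient sketch (illustrative only, not the HPCG reference code).
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for a symmetric positive definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                      # matrix-vector product: the hot spot HPCG times
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # update search direction
        rs_old = rs_new
    return x

# Toy usage on a tiny SPD system; the real benchmark uses a large sparse matrix instead.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))
```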
Computing resources – including storage and networking – are continuing their march toward the network edge, drawn like a magnet to the rapidly proliferating connected devices in the world and the huge amounts of data that they’re generating that need to be collected, processed and analyzed.
As we’ve talked about here at The Next Platform over the past few months, the distributed nature of computing, fueled by such drivers as the cloud, the Internet of Things (IoT) and greater mobility, and the demand for capabilities like artificial intelligence (AI), machine learning and analytics to manage the data call for moving …
Cavium has raised its profile over the past several years as one of the pioneers in developing Arm-based systems-on-a-chip (SoCs) for servers, rolling out multiple generations of its ThunderX chips in hopes of helping Arm’s low-power architecture make gains in a datacenter environment that for years has been dominated by Intel and its x86-based Xeons.
However, like similar chip makers, Cavium didn’t start with the Arm server chips, but instead built up to that point atop a broad array of products for other areas of the datacenter, including adapters, controllers, switches and MIPS-based processors for networking and storage devices. …
Containerization, the concept of isolating application processes while sharing the same operating system (OS) kernel, has been around since the beginning of this century. Its journey began as early as Jails in the FreeBSD era. Jails heavily leveraged the chroot environment but expanded its capabilities to include a virtualized view of other system attributes such as storage, interconnects and users. Solaris Zones and AIX Workload Partitions also fall into a similar category.
Since then, the advent and advancement of technologies such as cgroups, systemd and user namespaces have greatly improved the security and isolation of containers when compared to their …
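To ground the chroot lineage mentioned above, here is a minimal sketch of chroot-style filesystem isolation in Python. It is illustrative only: the NEW_ROOT path is hypothetical, the script must run as root on a Unix system, and a shell binary must already exist inside the new root. Modern container runtimes layer namespaces and cgroups on top of this basic idea to also isolate process trees, networking and resource usage.

```python
# Minimal sketch of chroot-style isolation, the mechanism FreeBSD Jails built upon.
# Hypothetical path; requires root privileges and a prepared minimal root filesystem.
import os

NEW_ROOT = "/srv/jail-demo"   # hypothetical directory containing a minimal root fs

pid = os.fork()
if pid == 0:
    # Child process: confine its view of the filesystem to NEW_ROOT, then run a shell.
    os.chroot(NEW_ROOT)
    os.chdir("/")
    os.execv("/bin/sh", ["/bin/sh"])   # /bin/sh must exist inside NEW_ROOT
else:
    # Parent process: wait for the confined child to exit.
    os.waitpid(pid, 0)
```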
There are two supercomputers named “Aurora” affiliated with Argonne National Laboratory: the one that was supposed to be built this year, and the one that for a short time last year was known as “A21,” which will be built in 2021 and will be the first exascale system built in the United States.
Details have just emerged on the second, and now only important, Aurora system, thanks to Argonne opening up proposals for the early science program that lets researchers put code on the supercomputer for three months before it starts its production work. The proposal …
The field programmable gate array (FPGA) space is heating up, with new use cases driven by emerging network, IoT, and application acceleration trends. Keeping ahead of the curve means expanding on devices that have quite steady improvement cycles, which means the few companies at the top need to get creative to stay competitive.
Xilinx and Altera – which was bought by Intel in 2015 for $16.7 billion – have been the top vendors of FPGAs, which can be programmed and reprogrammed, giving organizations the ability to adapt the chips to the varying workloads running on their systems. The high price …
It has been more than two months since Google revealed its research on the Spectre and Meltdown speculative execution security vulnerabilities in modern processors, and caused the whole IT industry to slam on the brakes and brace for the impact. The initial microbenchmark results on the mitigations for these security holes, put out by Red Hat, showed the impact could be quite dramatic. But according to sources familiar with recent tests done by Intel, the impact is not as bad as one might think in many cases. In other cases, the impact is quite severe.
The Next Platform caught wind …
These days, organizations are creating and storing massive amounts of data, and in theory this data can be used to drive business decisions through application development, particularly with new techniques such as machine learning. Data is arguably the most important asset, and it is also probably the most difficult thing to manage. Well, excepting people.
Data is a tangled mess. It can be structured or unstructured, and it is increasingly scattered across different locations – in on-premises infrastructure, in a public cloud, on a mobile device. It is a challenge to move, thanks to the costs in everything from bandwidth to …
The artificial intelligence revolution is quickly changing every industry, and modern data centers must be equipped to capitalize on these extraordinary new capabilities. Hewlett Packard Enterprise (HPE) and Nvidia are partnering to bring best-of-breed AI solutions to every customer, offering AI-integrated systems, services, and support capabilities to help all organizations seamlessly optimize their AI foundation, deliver differentiated outcomes, and gain competitive advantage.
High performance computing has become key to solving many of the world’s grand challenges in the realms of science, industry, and engineering. However, traditional CPUs are increasingly failing to deliver the performance gains they used to, and the …
In part one of our series on reaching computational balance, we described how computational complexity is increasing exponentially. Unfortunately, data and storage follow an identical trend.
The challenge of balancing compute and data at scale remains constant. Because providers and consumers don’t have access to “the crystal ball of demand prediction”, the appropriate computational response to vast, unpredictable amounts of highly variable, complex data ends up unplanned, however unintentionally.
We must address computational balance in a world barraged by vast and unplanned data.
Before starting any discussion of data balance, it is important to first remind ourselves of scale. Small …
On today’s episode of “The Interview” with The Next Platform, we discuss the role of higher level interfaces to common machine learning and deep learning frameworks, including Caffe.
Despite the existence of multiple deep learning frameworks, there is a lack of comprehensible and easy-to-use high-level tools for the design, training, and testing of deep neural networks (DNNs), according to this episode’s guest, Soren Klemm, one of the creators of the Python-based Barista, an open-source graphical high-level interface for the Caffe framework.
While Caffe is one of the most popular frameworks for training DNNs, editing prototxt files in …
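To illustrate why a graphical front end is attractive, here is a minimal sketch of the kind of hand-written prototxt layer definition that a tool like Barista abstracts away. The layer and file names (“conv1”, “example_layer.prototxt”) are hypothetical, and the snippet is a generic illustration rather than anything taken from Barista or the interview.

```python
# Write out a single, generic Caffe-style convolution layer definition in prototxt form.
# Real networks chain dozens of such blocks, which is what makes hand-editing tedious.
prototxt_layer = """
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
  }
}
"""

with open("example_layer.prototxt", "w") as f:
    f.write(prototxt_layer)
```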
There are a number of key areas where exascale computing power will be required to turn simulations into real-world good. One of these is fusion energy research with the ultimate goal of building efficient plants that can safely deliver hundreds of megawatts of clean, renewable fusion energy.
Japan has announced that it will install a top-end Cray XC50 supercomputer at the Rokkasho Fusion Institute.
The new system will achieve four petaflops, more than double the capability of Helios, the current machine for international collaborations in fusion energy, which was built by European supercomputer maker Bull. The Helios system …
Researchers at Volkswagen have been at the cutting edge of applying D-Wave quantum computers to a number of complex optimization problems, traffic flow optimization among them.
These efforts are generally focused on developing algorithms suitable for the company’s recently purchased 2000-qubit quantum system and have expanded to a range of new machine learning possibilities, including what a research team at the company’s U.S. R&D office and the Volkswagen Data:Lab in Munich are calling quantum-assisted cluster analysis.
The art and science of clustering is well known for machine learning on classical computing architectures, but the VW approach …
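For contrast with the quantum-assisted approach, here is a minimal sketch of a classical clustering routine, plain k-means in NumPy, of the sort meant when the piece refers to clustering on classical architectures. It is a generic illustration, not Volkswagen’s quantum-assisted algorithm.

```python
# Plain k-means as a classical baseline for clustering (generic illustration).
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    # Start from k randomly chosen points as initial centers.
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Toy usage with two well-separated clusters.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(size=(50, 2)), rng.normal(size=(50, 2)) + 5.0])
labels, centers = kmeans(data, k=2)
print(centers)
```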
On today’s episode of “The Interview” with The Next Platform, we talk about an open source data management platform (and related standards group) called iRODS, which many in scientific computing already know, but which also has applicability in the enterprise.
We found that several of our readers had heard of iRODS and knew it was associated with a scientific computing base, but few understood what the technology actually was or were aware that there was a consortium. To dispel any confusion, we spoke with Jason Coposky, executive director of the iRODS Consortium, about both the technology itself and the group’s role …
On today’s episode of “The Interview” with The Next Platform we talk about the growing problem of networks within networks (within networks) and what that means for future algorithms and systems that will support smart cities, smart grids, and other highly complex and interdependent optimization problems.
Our guest on this audio interview episode is Hadi Amini, a researcher at Carnegie Mellon who has focused on the interdependency of many factors for power grids and smart cities in a recent book series on these and related interdependent network topics. Here, as in the podcast, the focus is on the …
The idea of bringing liquids into the datacenter to cool off hot-running systems and components has often unnerved many in the IT field. Organizations are doing it anyway as they look for more efficient and cost-effective ways to run their infrastructures, particularly as workloads become larger and more complex, more compute resources are needed, parts like processors become more powerful, and density increases.
But the concept of running water and other liquids through a system, and the threat of the liquids leaking into the various components and into the datacenter, has created uneasiness with the idea.
Still, the growing demands …
Changes to workloads in HPC mean alterations are needed up and down the stack—and that certainly includes storage. Traditionally these workloads were dominated by large file handling needs, but as newer applications (OpenFOAM is a good example) bring small file and mixed workload requirements to the HPC environment, it means storage approaches need to shift to meet the need.
With these changing workload demands in mind, recall that in the first part of our series on future directions for storage for enterprise HPC shops we focused on the ways open source parallel file systems like Lustre fall short for users …
The Next Platform Weekly
- Welcome To The Next Platform
- Rockets Shake And Rattle, So SpaceX Rolls Homegrown CFD
- More Knights Landing Xeon Phi Secrets Unveiled
- The Tiny Chip That Could Disrupt Exascale Computing
- Inside an Evolving Genomics Cluster
- Flink Sparks Next Wave of Distributed Data Processing
- Tesla Compute Drives Nvidia Upwards
- Pivotal Opens Up More Of Its Platform
- Manufacturers Making Workstation To Cluster Leap