This summer, the Partnership for Advanced Computing in Europe (PRACE) added another of the world’s most powerful high performance computing systems to its roster. The Barcelona Supercomputing Center’s new MareNostrum 4, delivered by IBM with the help of partners Lenovo and Fujitsu and fueled by HPC technologies from Intel, will facilitate extensive engineering and scientific research in fields like astrophysics, weather forecasting, and genome research. Nestled within a unique building – the deconsecrated Torre Girona chapel – the fourth generation MareNostrum system relies on a general purpose cluster working with three specialized clusters to achieve its …
Supercomputer maker Cray is always looking for ways to extend its reach outside of the traditional academic and government markets where the biggest deals are often made.
From its forays into graph analytics appliances and, more recently, machine and deep learning, the company has the potential to exploit its long history of building some of the world’s fastest machines. This has expanded into new ventures in which potential Cray users can try out the company’s systems, including via an on-demand partnership with datacenter provider Markley and, now, inside of Microsoft’s Azure datacenters.
For Microsoft Azure cloud users looking to bolster modeling …
Intel’s multi-year effort to expand its reach beyond its PC and server processor roots has taken the chip maker down multiple paths, some of which have ended in dead ends.
The most memorable of those was the billion-plus-dollar attempt to challenge ARM Holdings and its various partners – such as Qualcomm and Samsung – in making chips for mobile devices. Under current CEO Brian Krzanich, Intel has retrenched, dropping its mobile device efforts and pulling back from wearables, and instead is pushing to provide the foundational technologies that will underpin the trends that will continue to shape the industry, from …
For the past five and a half years, which is not quite an eternity in the IT business but is something akin to half a generation, IBM’s revenues have been declining, quarter in and quarter out. As has happened many, many times in its more than a century of existence, Big Blue, which early in its history was a peddler of meat slicers, time clocks, scales, and punch card tabulators, has had to constantly evolve and reimagine itself.
The transformation that IBM had to undergo in the late 1980s and early 1990s was a near …
No one knows better than IBM that the time, money, energy, and risk associated with changing platforms can hinder that change. In some cases, as with the System z mainframe, this helps the company preserve its footprint in the datacenter. But in other cases, it hurts IBM’s ability to get people to try out different public or private infrastructure.
It is no secret that Big Blue wants a much bigger cloud business, and that it got a late start compared to Amazon, Microsoft, and Google. But IBM does have a presence at most of the large companies on earth, and …
For more than a year, container pioneer Docker has pushed its own Docker Swarm as the orchestration tool for managing highly distributed computing environments based on its eponymous containers in physical and virtual environments. But it is hard to deny the rapid uptake of Kubernetes, the container orchestration technology that was derived from Google’s internal Borg and Omega cluster managers and that the search engine giant open sourced three years ago.
Kubernetes has become highly popular, gaining momentum with top cloud providers like Amazon Web Services and Microsoft Azure, and obviously Google Cloud Platform, and is getting support from …
In this webcast, we learn from Nick Curcuru, vice president of the big data practice at MasterCard, about what needs to be in place, both technically and in terms of management models and processes, so that the benefits of big data analytics can be fully realized.
High performance computing, long the domain of research centers and academia, is increasingly becoming a part of mainstream IT infrastructure and being opened up to a broader range of enterprise workloads, and in recent years, that includes big data analytics and machine learning. At the forefront of this expanded use is MasterCard, a financial services giant …
If profit margins are under pressure among the switch and router makers of the world, their chief financial officers can probably place a lot of the blame on Nick McKeown and his several partners over the years. And if McKeown is right about what is happening as network software is increasingly disaggregated from the hardware – what is called software defined networking – they will either have to adapt or be relegated to the dustbin of history.
McKeown cut his teeth after university in the late 1980s at Hewlett Packard Labs in Bristol, England, one of the hotbeds …
The highly distributed and increasingly cloud-based nature of the modern IT environment is adding to the complexity that organizations have to deal with, particularly in terms of managing their infrastructures. Mobility, the internet of things, new development paradigms, containerization, more distributed applications, data analytics and multi-cloud deployments are all conspiring to create even more challenges in what is an already complicated management scenario for enterprises facing cost and time constraints.
At a time when speed and scalability are imperative and human errors can be costly, the answer to many of these challenges may lie in the cloud. That’s the …
After a long, long wait and years of anticipation, it looks like IBM is finally getting ready to ship commercial versions of its Power9 chips, and as expected, its first salvo of processors aimed at the datacenter will be aimed at HPC, data analytics, and machine learning workloads.
We are also catching wind about IBM’s Power9-based scale-up NUMA machines, which will debut sometime next year and take on big iron systems based on Intel Xeon SP, Oracle Sparc M8, and Fujitsu Sparc64-XII processors as well as give some competition to IBM’s own System z14 mainframes.
The US Department …
Today, most machine learning is done on processors. Some would say that acceleration of learning has to be done on GPUs, but for most users that is not good advice for several reasons. The biggest reason is now the Intel Xeon SP processor, formerly codenamed “Skylake.”
Until recently, the software for machine learning has often been more optimized for GPUs than for anything else. A series of efforts by Intel has changed that – and when coupled with the Platinum versions of the Intel Xeon SP family, the top performance gap is closer to 2X than it is to 100X. This …
For disaster recovery, political, and organizational reasons, enterprises like to have multiple datacenters, and now they are going hybrid, with public cloud capacity added into the mix. Having infrastructure scattered across the globe brings operational challenges, from easily migrating and managing workloads across multiple sites and increased complexity around networking and security, to adopting emerging datacenter technologies like containers.
As the world becomes more cloud-centric, organizations are looking for ways to gain greater visibility and scalability across their environments, automate as many processes as possible and manage all these sites as a single entity.
Cisco Systems …
Governments like to spread the money around their indigenous IT companies when they can, and so it is with the AI Bridging Cloud Infrastructure, or ABCI, supercomputer that is being commissioned by the National Institute of Advanced Industrial Science and Technology (AIST) in Japan. NEC built the ABCI prototype last year, and now Fujitsu has been commissioned to build the actual ABCI system.
The resulting machine, which is being purchased specifically to offer cloud access to compute and storage capacity for artificial intelligence and data analytics workloads, would make a fine system for running HPC simulations and models. But that …
IBM has spent the past several years putting a laser focus on what it calls cognitive computing, using its Watson platform as the foundation for its efforts in such emerging fields as artificial intelligence (AI) and its successful offshoot, deep learning. Big Blue has leaned on Watson technology, its traditional Power systems, and increasingly powerful GPUs from Nvidia to drive its efforts to not only bring AI and deep learning into the cloud, but also to push AI into the enterprise.
The technologies are part of a larger push in the industry to help enterprises transform their businesses to take …
One of the reasons we have written so much about Chinese search and social web giant Baidu in the last few years is that the company has openly described both the hardware and software steps it takes to make deep learning efficient and high performance at scale.
In addition to providing several benchmarking efforts and GPU use cases, researchers at the company’s Silicon Valley AI Lab (SVAIL) have been at the forefront of eking power efficiency and performance out of new hardware by lowering precision. This is a trend that has kickstarted similar thinking in hardware usage in other areas, including supercomputing …
Someone is going to commercialize a general purpose, universal quantum computer first, and Intel wants to be the first. So does Google. So does IBM. And D-Wave is pretty sure it already has done this, even if many academics and a slew of upstart competitors don’t agree. What we can all agree on is that there is a very long road ahead in the development of quantum computing, and it will be a costly endeavor that could nonetheless help solve some intractable problems.
This week, Intel showed off the handiwork of its engineers and those of partner QuTech, a …
Everyone in the IT industry likes drama, and we here at The Next Platform are no different. But as the industry is undergoing gut-wrenching transformations, as it has been for five decades now and will probably do for a decade or two more, it is also important to keep some perspective. While the public cloud is certainly an exciting part of the IT market, it has not taken over the world, even if it has become the dominant metaphor that all kinds of IT – public, private, and hybrid – aspire to mimic.
That’s something, and it is important. But …
The potent combination of powerful CPUs, floating point laden GPU accelerators, and fast InfiniBand networking is coming to market and reshaping the upper echelons of supercomputing. While Intel is having issues with its future Knights massively parallel X86 processors, which it has not really explained, the two capability class supercomputers being built for the US Department of Energy by IBM with the help of Nvidia and Mellanox Technologies – named “Summit” and “Sierra” and installed at Oak Ridge National Laboratory and Lawrence Livermore National Laboratory – are beginning to be assembled.
Red Hat has been aggressive in building out its capabilities around containers. Last month the company unveiled OpenShift Container Platform 3.6, its enterprise-grade Kubernetes container platform for cloud native applications, which added enhanced security features and greater consistency across hybrid and multi-cloud deployments.
A couple of weeks later, Red Hat and Microsoft expanded their alliance to make it easier for organizations to adopt containers. Red Hat last year debuted OpenShift 3.0, which was based on the open source Kubernetes orchestration system and Docker containers, and the company has since continued to roll out enhancements to the platform.
The …
During the dot-com boom, when Oracle was the dominant supplier of relational databases to startups and established enterprises alike, it used its profits to fund the acquisition of application serving middleware, notably BEA WebLogic, and then applications, such as PeopleSoft and Siebel, and then Java and hardware systems, from its acquisition of Sun Microsystems. It was an expensive proposition, but one that paid off handsomely for the software giant.
In the cloud and hyperscale era, open source middleware is the driving force and in a lot of cases there is nothing to acquire. Projects either go open themselves or are …
The Next Platform Weekly
- Welcome To The Next Platform
- Rockets Shake And Rattle, So SpaceX Rolls Homegrown CFD
- More Knights Landing Xeon Phi Secrets Unveiled
- The Tiny Chip That Could Disrupt Exascale Computing
- Inside an Evolving Genomics Cluster
- Flink Sparks Next Wave of Distributed Data Processing
- Tesla Compute Drives Nvidia Upwards
- Pivotal Opens Up More Of Its Platform
- Manufacturers Making Workstation To Cluster Leap