The global server market is increasingly driven by the hyperscalers, and the trendsetter for all of them is Amazon Web Services. The massive company dominates the fast-growing public cloud space, outpacing rivals like Microsoft Azure, Google Cloud Platform, and IBM Cloud, and is the top consumer of servers among a group of hyperscalers that are becoming the most powerful buyers of systems and new components, such as processors.
This can be seen in the numbers. According to IDC analysts, hyperscalers in the first and second quarters this year made a significant push to deploy servers, with AWS accounting for more than 10 percent of the systems shipped during the three months ended in June. And as the hyperscalers go, so does the rest of the market, and vendors make their plans accordingly, commented Kuba Stolarski, research director for IDC’s Computing Platforms unit.
“As hyperscalers tend to lead the market on most architectural updates, we expect the rest of the market to catch up over the next several quarters,” Stolarski said in a statement when the market research firm released its latest numbers in September. “As the market cycles through this refresh, we are seeing changes in vendor portfolios with new modular system designs and a greater focus on accelerator technologies, as well as the continued evolution of the role of cloud services in corporate IT.”
AWS officials this week said the latest high-performance virtual machine instances, the C5 instances, are now available for the Amazon Elastic Compute Cloud (EC2), aimed at a broad range of compute-intensive workloads, from high-performance computing (HPC) and distributed analytics to batch processing and video encoding. The C5 instances are powered by Intel’s latest “Skylake” Xeon SP processors optimized for EC2 and sport a new hypervisor developed by AWS and based on KVM, a move away from the Xen hypervisor that AWS had favored in the past. In a post on the company blog, Jeff Barr, chief evangelist for AWS, wrote that the “new hypervisor … runs hand-in-glove with our hardware. The new hypervisor allows us to give you access to all of the processing power provided by the host hardware, while also making performance even more consistent and further raising the bar on security.”
Intel released the Xeon SPs in July, with company officials saying that the new chips with their greater performance and power efficiency enable public cloud environments to move easily between general-purpose computing, HPC and emerging technologies like artificial intelligence (AI) and deep learning. Barr wrote that the Xeon Platinum 8000 series chip has been optimized for EC2, allowing customers “to have full control over the C-states on the two largest sizes, allowing you to run a single core at up to 3.5 GHz using Intel Turbo Boost Technology.” For their part, Intel officials said that the chip maker worked with AWS to optimize AI and deep learning engines through the latest version of Intel’s Math Kernel Library and Xeon SPs. In addition, MXNet and other deep learning frameworks have been optimized to run on the C5 instances. The chip maker has said that with the Xeon SPs, the performance of deep learning inference is 2.4 times better than with the previous generation of processors, and deep learning training performance is 2.2 times better.
AWS – which last month was the first of the hyperscalers to get Nvidia’s Tesla “Volta” GPU accelerators up and running in its environment – initially will use the new hypervisor with the new C5 instances, but the overall plan is to eventually have all new instances use the KVM-based hypervisor, though in the near-term some new instances will still rely on Xen. According to a FAQ section on the AWS site, “the new EC2 hypervisor provides consistent performance and increased compute and memory resources for EC2 virtualized instances by removing host system software components. It allows AWS to offer larger instance sizes (like c5.18xlarge) that provide practically all of the resources from the server to customers. Previously, C3 and C4 instances each eliminated software components by moving VPC [virtual private cloud] and EBS [Elastic Block Storage] functionality to hardware designed and built by AWS. This hardware enables the new hypervisor to be very small and uninvolved in data processing tasks for networking and storage.”
Customers will see some differences between instances that use the new hypervisor and those using the Xen hypervisor, officials said. For example, instances with the new hypervisor boot from EBS volumes over an NVM-Express interface, whereas instances with Xen boot from an emulated IDE hard drive and then switch to Xen’s paravirtualized block device drivers. Another difference is that with the new hypervisor, operating systems can identify when they are running under a hypervisor. However, most applications will run the same way under both hypervisors.
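The block-device difference above is visible from inside a running instance. The device-name conventions are real EC2 behavior (NVMe-attached EBS volumes appear as /dev/nvme0n1 and so on, while Xen exposes paravirtualized devices such as /dev/xvda); the helper function itself is a hypothetical illustration, not an AWS API.

```python
# Illustrative sketch: infer which EC2 hypervisor generation an instance is
# running under from the name of its root block device. Under the new
# KVM-based hypervisor, EBS volumes attach over NVMe (/dev/nvme0n1, ...);
# under Xen they appear as paravirtualized or emulated devices (/dev/xvda,
# /dev/sda1, ...). The helper is an assumption for illustration only.

def hypervisor_from_root_device(device: str) -> str:
    """Guess the hypervisor generation from a root device path."""
    name = device.rsplit("/", 1)[-1]
    if name.startswith("nvme"):
        return "new KVM-based hypervisor (NVMe-attached EBS)"
    if name.startswith("xvd") or name.startswith("sd"):
        return "Xen (paravirtualized or emulated block device)"
    return "unknown"

print(hypervisor_from_root_device("/dev/nvme0n1"))  # → new KVM-based hypervisor (NVMe-attached EBS)
print(hypervisor_from_root_device("/dev/xvda"))     # → Xen (paravirtualized or emulated block device)
```

In practice this matters for tooling that hard-codes device names: scripts written against /dev/xvda on C4 instances will need updating for NVMe naming on C5.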
Barr added that more information on the new hypervisor will come at the AWS re:Invent conference later this month in Las Vegas.
Overall, organizations will be able to choose from among six different C5 instances, which offer from two to 72 virtual CPUs (the C4 instances topped out at 36 vCPUs). Barr wrote that the new instances will offer a 25 percent price/performance improvement over the C4 instances, rising to as much as 50 percent for some workloads. There is also more memory per vCPU and twice the performance for vector and floating point workloads, he wrote. The C5 instances are available now in the U.S. East and West regions as well as the EU region, and can be used as On Demand, Reserved, or Spot instances.
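The lineup described above can be sketched as a small table. The article gives the range (six sizes, two to 72 vCPUs, topping out at c5.18xlarge); the intermediate size names and vCPU counts listed here follow EC2’s usual naming for the launch lineup and should be treated as assumptions rather than quoted specifications.

```python
# Sketch of the six-size C5 lineup (2 to 72 vCPUs, per the article).
# Intermediate sizes are assumed from EC2's standard naming, not quoted specs.
C5_VCPUS = {
    "c5.large": 2,
    "c5.xlarge": 4,
    "c5.2xlarge": 8,
    "c5.4xlarge": 16,
    "c5.9xlarge": 36,
    "c5.18xlarge": 72,
}

def smallest_fitting(vcpus_needed: int) -> str:
    """Pick the smallest C5 size offering at least the requested vCPUs."""
    for size, vcpus in sorted(C5_VCPUS.items(), key=lambda kv: kv[1]):
        if vcpus >= vcpus_needed:
            return size
    raise ValueError(f"no C5 size offers {vcpus_needed} vCPUs")

print(smallest_fitting(10))  # → c5.4xlarge
print(smallest_fitting(72))  # → c5.18xlarge
```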
By shipping Skylake Xeons on EC2, AWS has finally caught up with Google Cloud Platform and Microsoft Azure, both of which have already been shipping Skylake-based instances.