The leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure.
Containers represent a hot trend in cloud computing today. They enable server virtualization and application portability without the overhead of running a hypervisor on every host and without a copy of the full operating system in every virtual machine. NFV moves network functions from dedicated appliances to COTS servers, and microservices further disaggregate monolithic VNFs into scalable components. When you decompose monolithic applications into microservices, demand on the network increases, not just from north-south traffic in and out of the data center but also from inter-container east-west traffic. A highly efficient, microservices-based, containerized NFV architecture therefore needs a reliable, high-speed, performance-optimized network infrastructure. Learn more about Mellanox’s Ethernet switch and adapter technology for building a highly efficient NFV infrastructure.
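As a rough illustration of why decomposing a monolith drives up east-west traffic, here is a toy model; the request rates, fan-out, and payload sizes below are hypothetical assumptions, not figures from any Mellanox measurement:

```python
# Toy model: estimate east-west (inter-service) traffic created by
# decomposing a monolith into microservices. All numbers are illustrative.

def east_west_bytes(external_requests, fanout, hops, payload_bytes):
    """Bytes per second of inter-service traffic.

    external_requests: north-south requests arriving per second
    fanout: internal service calls triggered by each call
    hops: depth of the internal service-call chain
    payload_bytes: average bytes exchanged per internal call
    """
    # Each request spawns fanout + fanout**2 + ... + fanout**hops internal calls.
    internal_calls = external_requests * sum(fanout ** h for h in range(1, hops + 1))
    return internal_calls * payload_bytes

# A monolith keeps calls in-process: no east-west traffic at all.
monolith = east_west_bytes(10_000, fanout=0, hops=0, payload_bytes=2048)

# A two-level microservice fan-out of 3 turns each request into 12 internal calls.
microservices = east_west_bytes(10_000, fanout=3, hops=2, payload_bytes=2048)

print(monolith, microservices)
```

Even this small fan-out multiplies the traffic the fabric must carry by an order of magnitude, which is why the teaser stresses the switch and adapter layer.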
This report by the Tolly Group explains the fundamental differences between Mellanox Spectrum and Broadcom Tomahawk based switches. Learn some interesting facts about buffering, fairness in networking, and cut-through compromises. The report benchmarks the performance and predictability of the Mellanox Spectrum ASIC, which delivers wire-rate performance at all packet sizes, better buffering, and low latency.
LEGO bricks are made of basic shapes and components. They have well-defined, simple interfaces, so they can be combined freely. LEGO building requires only a basic set of skills; the rest is your imagination and creativity. This is exactly the direction modern networking is heading, too. At Mellanox, we call it Open Composable Networks, or OCN.
Analyst firm Neuralytix just published a terrific white paper about the revolution affecting data storage interconnects. Titled Faster Interconnects for Next Generation Data Centers, it explains why customers are rethinking their data center storage and networks, in particular how iSCSI and iSER (iSCSI with RDMA) are starting to replace Fibre Channel for block storage.
Open Composable Networks, Partnership With Cumulus
Pushing networking space into the future | #OCPSummit16
A recently published Tolly Report demonstrates that 25, 50, and 100 Gb/s Ethernet switches based on Mellanox Spectrum deliver predictable performance and Zero Packet Loss. By contrast, Broadcom Tomahawk-based switches showed fundamental weaknesses.
After the Spectrum vs. Tomahawk Tolly report was published, people asked me: “Why was this great report commissioned to Tolly? Isn’t there an industry benchmark where multiple switch vendors participate?” So, the simple answer is: No, unfortunately there isn’t…
For many, a predictable network is simply assumed. But it turns out that at the most advanced network speeds, predictable performance is extremely hard to deliver, and some vendors actually fall short. Unfortunately, the unpredictability of the underlying network can be hidden from the view of application-level and data center architects.
Q&A: Ethernet Evolution—25 is the New 10 and 100 is the New 40

Mellanox has been known for its support of InfiniBand, but it is also a major player in the high-speed Ethernet market. One of the latest technologies is 25-Gbit/s Ethernet. Dell’Oro predicts that it will see the most rapid adoption of any previous speed; in fact, it expects two million adapters to ship in 2018.

Set VMware vMotion into Fast Motion over High-Speed Interconnect

Virtual Machine (VM) live migration is an important feature that enhances the high availability of applications, guarantees quality of service, and simplifies infrastructure maintenance operations. In this blog, we compare the performance of vMotion over a 40Gb/s network versus a 10Gb/s network.
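To see why link speed matters for live migration, here is a back-of-the-envelope transfer-time sketch; the VM memory size and the link-efficiency factor are assumptions for illustration, not results from the blog's actual vMotion tests:

```python
# Rough estimate of the time to copy a VM's memory image over the network.
# Real vMotion uses iterative pre-copy, so actual times differ; this only
# shows how raw link speed bounds the transfer.

def transfer_seconds(vm_memory_gib, link_gbps, efficiency=0.9):
    """Seconds to move vm_memory_gib of data over a link_gbps link.

    efficiency models protocol overhead (an assumed value, not measured).
    """
    bits = vm_memory_gib * 8 * 1024**3       # GiB -> bits
    return bits / (link_gbps * 1e9 * efficiency)

# Hypothetical 64 GiB VM over the two link speeds compared in the blog.
for gbps in (10, 40):
    print(f"{gbps} Gb/s: {transfer_seconds(64, gbps):.1f} s")
```

The 4x faster link cuts the lower bound on copy time by the same factor, which is the intuition behind the 40Gb/s vs. 10Gb/s comparison.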
One of the key themes in improving the performance of clusters running simulations has been offloading common routines from the servers' central processors to accelerators in the network adapter cards, which plug into the servers and interface with the switches.
The HPC market is going through a technology transition: the Co-Design transition. It has emerged to solve the performance bottlenecks of today’s infrastructures and applications.