$45 Million in Funding to Fix Low Server Utilization

If history is any judge, no amount of money in the world can break through the problem of poor CPU and server utilization. Everyone from startups to OS vendors to middleware providers has tried to tackle the issue, yet it persists.

Perhaps the answer is to start at the kernel, within the OS itself, using machine learning along with good old-fashioned user direction to provide cues about which parts of the workload matter most for efficiency and performance. This is where startup Granulate began its journey just over two years ago. Now, $45 million in funding later, raised entirely from within the Israeli VC community, the company counts select Fortune 500 companies and several mid-market firms among its customers.

The company’s co-founder and CEO, Asaf Ezra, who interestingly shares almost the exact same career trajectory as his co-founder (software/R&D in the Israeli Defense Forces followed by a stint at YL Ventures as Entrepreneur in Residence), says the realization that most companies were getting only 10-40% utilization from their hardware was shocking, especially given the stated performance capabilities of both on-prem and cloud systems. The founders began by working with a single company until they understood the problem, tracing it back through the stack and landing at the OS as the place where some of the fixes belong.

“When you look at what we’re doing today, we’re the only ones who have actively changed the way the kernel is allocating resources, which thread is running on what CPU, which task is scheduled at any point. It’s not just on the process level, it’s on the thread level.” Ezra says these are problems the user can’t tackle, even if they understand there’s a huge discrepancy between what’s expected performance-wise and what they get.

“Whether VMs running in the cloud or on prem, the fact is, the one making decisions about performance is the OS. It’s been designed as general purpose but your own production servers usually have predictable patterns around data flow (data coming in, processed, memory allocations, network services). You can write a simulation, an automaton, of where the data flows in and what happens in a very transactional way. You can find those patterns and optimize the resource allocation accordingly and no longer do something general purpose,” Ezra explains.
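
Granulate does not detail its mechanism publicly, but the kind of decision Ezra describes, which thread runs on which CPU, can be illustrated with the placement controls a stock Linux kernel already exposes to applications. Below is a minimal sketch, assuming a single worker thread and an arbitrary choice of core 2 (both purely illustrative, not Granulate’s approach):

```c
/* Minimal illustration of thread-to-CPU placement on Linux (glibc).
 * This is NOT Granulate's mechanism; it only shows the kind of
 * per-thread decision (which thread runs on which core) that the
 * kernel otherwise makes on its own. Core 2 is an arbitrary choice.
 * Build: gcc -O2 -pthread pin.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg)
{
    (void)arg;
    /* Latency-sensitive request processing would run here. */
    sleep(1);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    cpu_set_t set;

    /* Restrict the worker thread to core 2 before it starts, instead of
     * letting the general-purpose scheduler migrate it between cores. */
    CPU_ZERO(&set);
    CPU_SET(2, &set);
    pthread_attr_init(&attr);
    if (pthread_attr_setaffinity_np(&attr, sizeof(set), &set) != 0)
        perror("pthread_attr_setaffinity_np");

    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```

Static pinning like this has to be written into the application, thread by thread. The point Ezra is making is that Granulate claims to make equivalent placement decisions dynamically, inside the kernel, based on the observed data-flow patterns of the workload rather than hand-coded hints.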

Every management framework to date has tried to build this in. From schedulers to optimization tools, this is not an unsolved problem if you know where to look. The difference lies in how Granulate handles the issue. For reasonably regular workloads the approach makes complete sense, but how does it differ from the tools and frameworks already built into schedulers and workload managers? And what is so different that it warrants this much funding at a time when there are so many other ways to address low utilization?

The difference, Ezra says, is that Granulate replaces the way the kernel schedules tasks on the CPU and distributes them across threads. Without that, the decision-making is an “unopinionated” process; the kernel doesn’t care that you need end-to-end low latency, for instance. With each server handling anything from tens to tens of thousands of requests, there is a huge number of tasks waiting on the CPU or on other actions, but no actual way to tell the kernel, hey, crush the latency on the port where the user sends request X, for instance. It is this, well, granularity that distinguishes Granulate.
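
For comparison, the closest thing the stock kernel accepts is a coarse per-thread priority hint rather than anything port- or request-aware. A hedged sketch of what that looks like, assuming the calling thread serves the latency-critical requests and the process holds CAP_SYS_NICE (the policy and priority value are the standard Linux ones, not anything Granulate-specific):

```c
/* Illustration of the coarse latency hint a stock Linux kernel accepts:
 * a real-time scheduling policy for one thread. Nothing here is request-
 * or port-aware; that is exactly the gap described above. Requires
 * CAP_SYS_NICE (or root) for the call to succeed.
 * Build: gcc -O2 rt_hint.c */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    /* The priority value 10 is arbitrary; valid SCHED_FIFO priorities
     * run from sched_get_priority_min() to sched_get_priority_max(). */
    struct sched_param p = { .sched_priority = 10 };

    /* pid 0 means "the calling thread": ask the kernel to run it under
     * SCHED_FIFO so it preempts normal SCHED_OTHER tasks when runnable. */
    if (sched_setscheduler(0, SCHED_FIFO, &p) != 0)
        perror("sched_setscheduler");

    /* Handling of the latency-critical port/requests would go here. */
    return 0;
}
```

Even this blunt instrument only says “run me first”; it cannot express “minimize latency for requests arriving on this port,” which is the gap Ezra is pointing at.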

This is all relevant for CPUs, not necessarily for the GPU compute portion, since that is handled by the accelerator. Where it is needed in a heterogeneous system is in the movement of work to the CPU and memory. Currently all operating systems are supported except Windows (not that this gap should present a problem for anyone doing interesting work at scale) and serverless platforms, although Ezra says serverless is on the development roadmap.

Right now only approximately 20% of the user base is on-prem; the rest are trying to get their cloud infrastructure to deliver on its projected performance. Over the past ten months, Granulate says it has seen 360% new customer growth and 570% revenue growth, with the number of CPU cores under management rising more than 10X to over 300,000. “All told, Granulate has saved customers over 3 billion hours of core usage. Likewise, the rising adoption of Granulate’s optimization technology has led to a substantial reduction in computing energy needs, with over 15,000,000 pounds of carbon emissions saved,” the company adds.

The $45 million figure is total funding to date. Just today the company announced that its initial $15 million has been extended by a $30 million round with participation from existing investors Insight Partners, TLV Partners, and Hetz Ventures. Dawn Capital also joined the round as a new investor. The Series B is Granulate’s second round of funding in the past ten months, as adoption of the company’s solution has more than tripled since the Series A.

“With the rapid adoption of usage-based public clouds, we’ve returned to a world – not unlike the mainframe era – where dramatic improvements in speed and efficiency of computing systems drop directly to the bottom line. This is particularly salient for companies that are rapidly digitizing their offerings in the midst of a global pandemic and the associated financial crisis, and this trend will continue far into the future,” said Lonne Jaffe, Managing Director at Insight Partners. “It’s hard to think of a company that won’t benefit from Granulate’s offering since it’s showing such significant performance and cost improvements across both sophisticated data and transactional workloads. Our increased investment is a testament to our excitement about Granulate’s market opportunity and momentum.”

This is quite a bit of cash and growth for a problem this pressing, widespread, and long-standing. We have to wonder how far it can go beyond the utilization tools already built into other layers of the stack, but perhaps the OS is the only answer. If that were the case, we would have expected more of these control capabilities inside Red Hat and other enterprise operating systems. Still, that low utilization can be the target of a standalone product means there is work left to be done. And to see it funded to this degree is even more telling.


1 Comment

  1. This is ultra-cool, and has wide-ranging implications! It has long been proven that scheduling belongs to a class of problems known as NP-complete. These are hard problems, among them breaking an arbitrary encryption code. It isn’t hard to decrypt the code; it just takes a very long time to try every possibility, hence nondeterministic polynomial (NP).
    It is well known that every NP-complete problem can be converted into any other NP-complete problem in P (polynomial) time. So, if this company has actually solved an NP-complete problem, then all NP-complete problems can be considered solved. Encryption is dead, but Google Maps (traveling salesman) can get a lot better!
    Unless it is just an advertisement for a dead-end technology trying to gain investment via a rigged demo…
