McLaren Builds Infrastructure And F1 Race Cars For Speed

For The McLaren Group, it’s all about speed.

Born in 1963 as a Formula 1 race car company, it initially was about speed on the track. However, over the past five-plus decades, the company has expanded. Later in the 1960s, McLaren began rolling out high-end, fast cars based on one of its racing cars but made for the streets, and it still sells what it calls supercars that start at about $165,000. And for about 30 years, the company has taken all the technological know-how that it’s developed while building these racing and road cars – from high-speed design to data management and analytics – and applied it to other industries.

Since a reorganization in 2017, McLaren’s Racing, Automotive, and Applied Technologies divisions have formed the three legs of the stool that make up the company. McLaren Applied Technologies is the unit that takes the cutting-edge technology and data work done in developing F1 race cars and premium road cars and leverages it for four industries the company says are undergoing large-scale disruption: motorsports, automotive, public transportation, and healthcare.

The work McLaren Applied Technologies and the other divisions are doing in developing technologies for their racing and street cars – from managing and analyzing the massive amounts of data the cars generate to leveraging artificial intelligence and machine learning techniques – is translating into products and solutions that can be used in these other industries. At the same time, the infrastructure in McLaren’s two datacenters at the McLaren Technology Center in England has had to adapt to the more modern workloads the company is running, which has meant not only adopting solutions such as hyperconverged infrastructure and containers on premises but also embracing hybrid cloud and multicloud environments.

“We’re taking all of our heritage and knowledge and skill sets, everything we do in Formula 1 in terms of modeling and simulation, data analytics, decision science – things that make us a unique group – and then repackaging that to push into other markets,” Paul Brimacombe, head of IT architecture at The McLaren Group, tells The Next Platform. “Applied Technologies really operates across healthcare and human performance, across automotive, such as in road cars, motorsports – working with the likes of IndyCar and Formula 1 – and public transport. In transport we run a number of connected train services.”

Like many companies that have been around for more than five decades, McLaren has had to evolve the infrastructure in its datacenters, which support all three divisions. Over the years, it had built up legacy systems, applications, and services. More than three years ago, the company teamed up with Dell EMC to consolidate its storage systems; at the time, McLaren had a range of storage types from various vendors, including NetApp and Hewlett Packard Enterprise.

“If someone had produced some storage, we bought some,” Brimacombe says. “You get to the point where you have all these different vendors supplying storage types, different performance levels, different characteristics, different support requirements and it’s just a nightmare for an IT department to try to support it.”

McLaren moved to a Dell EMC VNX storage array, consolidating the various storage types into a single VNX layer and used VPLEX for a software-defined storage (SDS) layer across the top. However, the company found there were drawbacks in the move.

“By consolidating onto one storage type, we inadvertently constrained the business side,” he says. “Now we have one storage type with one performance tier and one characteristic, where actually our modern businesses – Applied, Automotive, Racing – sometimes want block storage. Sometimes they want object storage. Sometimes they want file storage. Sometimes they just want archive, so what we’re doing now – within this year – is migrating from our VNX platform to Isilon for file storage, to ECS for object storage and to VxFlex for block storage.”

The VxFlex hardware also enables McLaren to move from its traditional blade chassis to a hyperconverged VxRail infrastructure.

“We’re moving storage and servers again,” Brimacombe says. “Three years after we did the first transformation, we’re hitting it again. We’re doing this so our business can keep pivoting in all the work that it does – moving into connected transport, moving into connected cars, moving into more advanced motorsports. That hasn’t happened by accident. That’s happened as part of a designed business strategy, and IT needs to be able to continue to support that pivot.”

That pivot will include an expanded business. McLaren is moving into the professional bicycle racing arena, which, like F1 racing, requires the infrastructure to run the data analytics, AI, and machine learning workloads fed by the huge amounts of data being generated and collected.

“There are huge parallels with Formula 1,” he says. “If Formula 1 is the pinnacle of engineering achievement and human performance, it’s exactly the same in pro cycling. Pro cycling is finding the lightest, fastest, quickest bike, the best equipment that you can design and manufacture. Finding the human to stick in the saddle in order to make them operate at peak performance – that human is the engine in pro cycling, so the more you get out of the individual, the more power you get and the faster you get to the front of the pack. The missing element in pro cycling is strategy. You’re cycling as a team [and] that team is there to protect the lead cyclist. If you understand what the other cyclists on the route are doing, how fast they’re going, what their cadence is, you understand your own capabilities and how fast you can go.”

Leveraging AI and machine learning in a data-driven environment also means taking advantage of the cloud.

“AI and machine learning are becoming far more important, and that’s something the cloud has always done very well,” he says. “We’re very used to handling very large volumes of data, so we have our own platform tools. Atlas helps us do data visualization and data manipulation; we supply it to all the F1 teams. The car itself has 300 sensors generating thousands of parameters, and some of those sensors are logging at 100 kilohertz – that’s 100,000 times per second. If you tried to watch it, it’d be like The Matrix in front of you, just trickling down. There’s no chance of interpreting any of that, so you have to use AI and machine learning tools to find the anomalies, to find the patterns, to find where that data tells us when something’s wrong or when something is going well.”
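To make the anomaly-hunting idea concrete, here is a minimal sketch of one common approach – flagging samples in a sensor stream that deviate sharply from a trailing window. This is an illustration of the general technique, not McLaren’s Atlas tooling; the window size and threshold are arbitrary assumptions.

```python
import statistics

def rolling_zscore_anomalies(samples, window=50, threshold=4.0):
    """Flag sample indices that deviate from the trailing window's mean
    by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(samples)):
        recent = samples[i - window:i]
        mean = statistics.fmean(recent)
        stdev = statistics.pstdev(recent) or 1e-12  # avoid divide-by-zero on flat signals
        if abs(samples[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# A flat channel with one spike: only the spike is flagged.
signal = [0.0] * 200
signal[150] = 10.0
print(rolling_zscore_anomalies(signal))  # [150]
```

At 100 kilohertz a single channel produces 100,000 samples per second, so a production system would compute statistics incrementally as data streams in rather than rescanning a window per sample; the sketch only shows the idea.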

That data also lets McLaren see what the other teams are doing – and informs how to respond.

“The data drives us. There’s always a human in the loop,” Brimacombe says. “There’s no real automation from that data, but it informs our human engineers that this is the best strategy that we should execute on. Strategy is really interesting. In Formula 1, the time to make a decision is very quick. You have very little time to decide what the best thing to do is. There’s no opportunity to make no decision. It just doesn’t exist.”

McLaren also has moved to the cloud as its operations have become more global. The company’s central location is the McLaren Technology Center, but it has moved people out to offices around the world, so it no longer makes sense to hold all information centrally. McLaren now uses software-as-a-service (SaaS) tools like Microsoft Office 365, SAP SuccessFactors, and Salesforce, and moving some workloads into the cloud – including Exchange, Skype, SharePoint, and file services through Microsoft’s OneDrive – has freed up space in the datacenters.

In the cloud, McLaren uses Microsoft Azure and Amazon Web Services for both platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS), as well as for native SaaS services. On premises, the company runs high-availability clusters between its two datacenters, driven by VxFlex and Isilon.

“What we’re doing is moving the high availability up the stack to the application level,” Brimacombe says. “That helps us be much more ready for things like containerized services and Kubernetes, where it’s already abstracted from the hardware. Kubernetes just doesn’t care where the hardware is, so there’s no point in having data replication at the hardware layer.”
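Application-level high availability typically means the client (or an orchestrator) fails over across service replicas instead of mirroring disks underneath. A minimal, hypothetical sketch of that pattern – illustrative only, not McLaren’s code:

```python
def first_available(replicas, request):
    """Application-level failover: try each replica in turn and
    return the first successful response."""
    last_err = None
    for call in replicas:
        try:
            return call(request)
        except Exception as err:
            last_err = err  # this replica is down; try the next one
    raise RuntimeError("all replicas unavailable") from last_err

# Hypothetical replicas: the first is down, the second answers.
def replica_a(req):
    raise ConnectionError("node offline")

def replica_b(req):
    return "ok:" + req

print(first_available([replica_a, replica_b], "ping"))  # ok:ping
```

Kubernetes builds this pattern in: a Service load-balances across a Deployment’s replicas and routes around failed pods, which is why replicating data at the hardware layer becomes redundant.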

Overall, McLaren has about 850 VMs on physical servers on premises, with similar numbers in the cloud. It also runs HPC workloads on supercomputers and high-end storage for simulation and modeling analysis that Brimacombe wouldn’t detail. For general computing, it has about 1 PB of storage.

McLaren also has mobile datacenters – each outfitted with two racks of Dell EMC PowerEdge R740xd rack servers – that it brings trackside to 21 F1 race locations around the world. The company wanted a hyperconverged environment because it needed the SDS layer.

“We want a software-defined storage layer because we can save weight,” he says. “By not having a disk tray to go with it, we can save some weight, and when you’re shipping kilos of servers around the world to 21 global locations, every kilo makes a difference. That’s our mobile datacenter we carry around.”

The infrastructure the company has in place helps support the simulations and similar work being done in all three divisions, fueled by the data McLaren generates. The amount of data is growing exponentially, and the company needs to be able to corral and leverage it.

“We want to get more,” Brimacombe says. “We want to understand more. Understanding greater fidelity in the real world helps us to improve our virtual digital twins for simulation. It’s in that virtual world where all the pre-work happens. There’s about 30,000 components on the Formula 1 car and from the start of the season in Australia to the end of the season in Abu Dhabi, about 80 percent to 85 percent of the components on that car will change. It’s a huge, continuous cycle of disruption and innovation within our own team just to keep up with everyone else and keep our cars as competitive as possible.”

Those 30,000 components need to be continuously improved in design and performance, and done so as quickly as possible.

“All of that is done in a virtual world. It’s done through simulation. It’s done through modeling. There isn’t the opportunity to create a new part and take it to the track to see if it works and wait until the end of the race to see if it was good or not. You do that in a virtual world. You do that with your digital twin. That gives you a much higher hit-rate. When you take that new physical component to the track and fit it, you’re about 98 percent confident it’s going to make a positive difference. So that process, and IT being able to keep up with the data, that again is one of the reasons why I moved to VxFlex and Isilon and ECS.”
