In many industries, embracing AI in the application software stack is not just a matter of training some large language models or recommender systems against general and then specific datasets and plugging them in. And in any regulated industry – particularly financial services, because it deals with Other People’s Money – any system that makes decisions about how money changes hands has to be explainable and reproducible as well as better and quicker than current manual and computerized methods of quantifying risk and moving money around asset classes.
This, in a nutshell, says Gil Perez, chief innovation officer and head of cloud and innovation network at Deutsche Bank, is why we have not seen wide deployment of AI in financial services at the scale we have seen it among the hyperscalers and third party software suppliers and even HPC centers with their giant simulations and models augmented by machine learning.
But the potential benefits of AI in financial services are so large that the multinational investment bank was approached by Nvidia co-founder and chief executive officer Jensen Huang 18 months ago to find out how the bank – which is based in Frankfurt, Germany, has €1.5 trillion in assets under management, and had €25.4 billion in revenues in 2021 – was deploying AI.
But there are hurdles, and many of them are not technical but regulatory, particularly when it comes to data privacy and governance as well as avoiding unintended biases.
“We are going to ensure that we have the right frameworks and control environments to ensure that not only are the models non-biased at the beginning but that they continue to be over time to prevent any kind of drift and to give insight to the regulators,” Perez explains. “So there’s a lot of work that we need to do with that to get the regulators comfortable. We are working with about 46 different regulators – just so everybody understands the scope, it’s not one or two. It’s not just the Fed in the United States or BaFin in Germany. When we’re deploying, we don’t want to deploy a solution for a single country. Our intent as a global bank is to do a global rollout of capabilities. And AI/ML is definitely one that needs to be taken with that approach.”
Everything that Nvidia learns from working with Deutsche Bank will feed back into its own financial services practice, and the only reason the German bank, which has operations in 58 countries and over 80,000 employees worldwide, would sign such a partnership deal is to get to the front of the AI line and get access to the most innovation at a high speed. In a sense, AI knowledge is just another investment class and another trade that it is willing to do.
As you might expect, Perez was a bit secretive about precisely how Deutsche Bank was experimenting with AI, but now that Nvidia and the bank have inked a partnership to embed AI into financial services applications, we can get a sense of where banks, trading companies, hedge funds, and insurance companies might start on their AI journeys before they stop talking about it at all because AI will have become strategic.
Deutsche Bank and Nvidia have already set up a center of excellence in Dublin, Ireland and have dedicated researchers and data scientists who have been looking at potential use cases, and now through this partnership the German bank is choosing the Nvidia AI Enterprise software stack to train and run its AI models both in its on-premises datacenters and at its chosen cloud provider, which is Google Cloud. This partnership is not just about AI, either. The RAPIDS component of AI Enterprise, which accelerates Spark in-memory analytics as well as more generic Python data processing routines by parallelizing them on GPUs with high bandwidth memory and which can accelerate performance by 20X, is also part of what the two companies are exploring for deployment in production.
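The appeal of RAPIDS for this kind of work is that its cuDF library mirrors the pandas API, so existing dataframe code can move to the GPU with minimal changes. As a rough sketch – the trade data here is invented for illustration, and we are assuming a workload that fits in GPU memory – the same aggregation code runs on either library:

```python
import pandas as pd  # with RAPIDS installed, this becomes: import cudf as pd

# Hypothetical trade blotter; cuDF would run the same groupby on the GPU
trades = pd.DataFrame({
    "desk": ["rates", "fx", "rates", "fx"],
    "notional": [1_000_000, 250_000, 500_000, 750_000],
})

# Aggregate notional exposure per desk -- identical call in pandas and cuDF
exposure = trades.groupby("desk")["notional"].sum()
print(exposure)
```

That near drop-in compatibility is what makes a 20X claim plausible without a rewrite: the code stays the same, and the parallelism comes from the backend.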
The first use case that Deutsche Bank is working on is an avatar for its own human resources department that runs atop the Omniverse stack. Starting here makes sense since it is a self-sandboxing workload inside the bank that affects only employees. But one can imagine this being perfected and then rolled out into customer-facing roles at some point.
One area of experimentation was using financial services derivatives of the BERT transformer model from Google to do risk management for the transactions it performs every day to manage those €1.5 trillion in assets and hopefully grow them at a faster pace than the market overall.
Nvidia and Deutsche Bank have been working on GPU acceleration for what Perez called its “more advanced high performance computing stack,” and he added that it has seen 10X improvements over its existing platform. (He did not elaborate, as is usually the case with banks, and we presume that this had more to do with RAPIDS than anything else.) The price discovery, risk valuation, and model back-testing applications have all been running on CPU-only clusters to date, and are usually done in batch mode overnight. But with GPU acceleration, these can be run in real time, and that allows traders at Deutsche Bank to run more scenarios to better quantify risk because the software runs an order of magnitude faster.
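The shape of these risk valuation workloads is why they parallelize so well on GPUs: a Monte Carlo run is the same arithmetic applied to millions of independent scenarios. The sketch below is our own minimal illustration, not Deutsche Bank's actual models – the portfolio weights and volatilities are made up – but it shows how a vectorized scenario run and a value-at-risk estimate look when written as array operations, which is exactly the form that maps to a GPU (the same expressions run under CuPy with `np` swapped out):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical three-asset portfolio: weights and daily return volatilities
# (illustrative numbers only, not real market data)
weights = np.array([0.5, 0.3, 0.2])
vols = np.array([0.01, 0.02, 0.015])

# Vectorized Monte Carlo: one million scenarios generated in a single array op,
# then portfolio P&L per scenario via a matrix-vector product
scenarios = rng.normal(0.0, vols, size=(1_000_000, 3))
pnl = scenarios @ weights

# 99% value-at-risk: the loss exceeded in only 1% of scenarios
var_99 = -np.quantile(pnl, 0.01)
print(f"99% one-day VaR: {var_99:.4%} of portfolio value")
```

Run overnight in batch on CPUs, each scenario is cheap but the whole sweep is slow; on a GPU the independent scenarios execute in parallel, which is what turns a batch job into something a trader can rerun interactively.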
Many of the features that are in the new AI Enterprise 3.0 stack, also announced today, are ones that Deutsche Bank needs to move forward with AI in production.
One of them is a set of unencrypted pre-trained models. Up until now, Nvidia and other model providers have been encrypting their trained models, presumably to preserve their intellectual property. But that is not going to fly with regulators, who will want to be able to have governance and compliance software peer into the model and see how it is making its decisions and what kinds of biases are inherent in the parameter weights in the models.
AI Enterprise 3.0 also includes something that Nvidia calls AI solution workflows, which is just a fancy way of saying that the stack has preconfigured models ready to go for cybersecurity threat detection, customer service, and business automation driven by AI algorithms, all running on the many frameworks in the AI Enterprise stack and more than 50 unencrypted pretrained models. These workflows consist of AI frameworks, pretrained models, Jupyter notebooks, Helm charts, and workflow documentation for each use case.
AI Enterprise 3.0 is now also deployable on the server virtualization hypervisors of Oracle Cloud Infrastructure, Hewlett Packard Enterprise Ezmeral, and Red Hat KVM/OpenShift as well as the VMware vSphere stack that Nvidia has been targeting for the past year.