Why Amazon Might Become the Largest Quantum Consumer

These are still early days for quantum computing, far too soon to talk about domain-specific quantum systems. But in the areas hungriest for what quantum promises to be best at — dense optimization problems at scale — the future cannot arrive fast enough.

More specifically, the holy grail for quantum computing — the “traveling salesman” problem — could revolutionize the transportation industry in particular, along with the world’s largest retailers, which depend on accurate shipping data. Quantum capability in this arena is so critical that the first production quantum systems at scale could be purpose-designed and optimized simply for this type of problem.

While we don’t think much about the delivery side of Amazon these days, since the carriers are so often the focus, the combined capability of vast search coupled with near-real-time delivery dates matched to location took Amazon years to get right — and was a billion-plus dollar effort in compute time.

Peter Chapman says “infinite compute” can be brought to bear to refine the entire process that happens the moment you search for “USB drive” on Amazon, confirm your shipping location, and select only products that arrive tomorrow. The density of calculations required — pulling from warehouse availability to planes, trains, and automobiles and their various routes through your own hometown — is staggering. “It’s the ultimate traveling salesman problem,” he laughs.
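To see why the density of calculations is staggering, consider a toy, exact version of the traveling salesman problem (this is an illustrative sketch, not Amazon’s actual system — the stop names and coordinates are invented). The number of candidate routes grows factorially with the number of stops, which is why brute force breaks down almost immediately:

```python
from itertools import permutations
import math

def route_length(route, dist):
    """Total length of a closed tour visiting every stop once."""
    return sum(dist[a][b] for a, b in zip(route, route[1:] + route[:1]))

def best_route(stops, dist):
    """Exact TSP: exhaustively check every ordering of the stops after the first."""
    first, rest = stops[0], stops[1:]
    return min(
        ([first] + list(p) for p in permutations(rest)),
        key=lambda r: route_length(r, dist),
    )

# 5 stops -> 4! = 24 candidate tours; 20 stops -> 19! ≈ 1.2e17.
stops = ["A", "B", "C", "D", "E"]
coords = {"A": (0, 0), "B": (1, 5), "C": (4, 4), "D": (6, 1), "E": (3, 0)}
dist = {a: {b: math.dist(coords[a], coords[b]) for b in stops} for a in stops}
tour = best_route(stops, dist)
print(tour, round(route_length(tour, dist), 2))
```

Five stops are trivial; a single delivery van’s worth of stops already puts the exact search out of reach, before warehouses, carriers, and routes even enter the picture.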

Chapman should know what this takes: he led the development of many of the technologies that became the fast, reliable Amazon Prime service. As director of engineering, his team of 240 engineers took Amazon from a model where customers had to search, select a product, and wait until checkout to find out how long delivery would take, to one where delivery estimates appear alongside search results. “That meant a lot of abandoned carts and a bad user experience,” he says.

With global products, shipping routes, customers, carriers, product availability, and warehouse locations all in play, the order was so tall that it took rearchitecting Amazon’s infrastructure to handle it at reasonable scale.

“There is a practical limit to the computational resources you can apply to this, even at Amazon. We could easily consume 100x the compute but Amazon couldn’t afford it,” Chapman says. “There is infinite need for compute for this problem so we had to find the right tradeoffs in optimization and find what you can get for a certain amount of money spent — and we’re talking billions here. Our goal was to make sure it wasn’t $20 billion.” He adds that the cost of these systems was growing faster than the top line of Amazon’s sales.

Chapman says the only way companies with other traveling salesman-esque problems can avoid spending massively on optimization calculations is via approximation, and Amazon was no different. He points to a company like UPS, which uses legacy route-planning strategies: instead of spending on development and compute, the rule is that delivery drivers turn right at any T-junction. “They threw away part of the optimal path in the name of expediency because it reduced the search space by 50 percent. These are the kinds of tradeoffs many have to make when they’re building systems because computers aren’t powerful enough.”
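The tradeoff Chapman describes — throwing away part of the optimal path to shrink the search space — can be sketched with a generic nearest-neighbor heuristic (a standard textbook approximation, not UPS’s or Amazon’s actual method; the stops and coordinates are invented):

```python
import math

def greedy_route(start, coords):
    """Nearest-neighbor heuristic: always visit the closest unvisited stop next.

    O(n^2) work instead of O(n!) exhaustive search, at the cost of a
    possibly suboptimal route — exactly the kind of expediency tradeoff
    described above.
    """
    unvisited = set(coords) - {start}
    route = [start]
    while unvisited:
        here = route[-1]
        nxt = min(unvisited, key=lambda s: math.dist(coords[here], coords[s]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

coords = {"A": (0, 0), "B": (1, 5), "C": (4, 4), "D": (6, 1), "E": (3, 0)}
print(greedy_route("A", coords))  # -> ['A', 'E', 'D', 'C', 'B']
```

The heuristic gives up any guarantee of optimality, but it never blows up combinatorially — the same shape of bargain as the right-turn rule.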

Amazon’s delivery systems work well enough, but Chapman says there is still quite a bit of approximation behind those results. It is a dramatic improvement over web forms that showed no delivery date until checkout, but he sees room for many orders of magnitude improvement in the scale and complexity of these calculations.

Chapman left his position heading Amazon Prime’s engineering to explore a new type of system — one that has long held the promise of being good at one thing in particular: complex optimizations, including that traveling salesman problem. Further, he sees bright horizons for the technology at scale, perhaps even Amazon scale, even if it is still years away.

IonQ, which Chapman joined as CEO in 2019, brings to bear his experience building extreme-scale complex optimization systems the old-fashioned way. There is value in that, and not just in knowing the limitations of the current state of the art. He has also rallied the company around the idea of what is essentially a domain-specific quantum computer — one designed to tackle one particular problem set.

Instead of baking this into hardware connections between qubits, Chapman says IonQ has developed a compiler that allows soft wiring of quantum circuits to enable devices to direct all energy toward one type of calculation. This might be the key differentiator for quantum computing companies as the technology moves from concept to commercial reality.

Instead of a general-purpose circuit with hard-wired connections, IonQ pairs all-to-all connectivity with a compiler that can reconfigure according to application demands. This means there is vast potential to customize quantum systems — something an Amazon (or any large company with a specific optimization problem to solve) would value greatly, to the tune of a billion dollars if we compare to its compute demands using traditional resources.

At a high level, the hardware and manufacturability of IonQ’s devices share much in common with what we’ve seen from ColdQuanta. The benefit for both companies is that the devices and core technology (trapped ions and cold atoms, respectively) have already been on the market for non-quantum-computing research purposes. Both companies saw an opportunity in the glass cell approach and are pouncing — but with different springs and targets.

So far, the company has raised $82 million, and it’s no surprise that Amazon is one of IonQ’s primary backers. Then again, so are a few other companies with an interest in massive-scale optimization for transport, manufacturing, and defense (Lockheed Martin, for instance).

The question is, how viable is something like what IonQ has developed (or any other quantum system, for that matter) at the scale Amazon would need to deploy to avoid massive compute spend for Prime?

Chapman says that the starting place would be finding out what their 32-qubit device is good at and having it take over certain sections of that workload. While we see an Amdahl’s Law problem here, he says it will be a piecemeal effort and will require a fair bit of software engineering.
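The Amdahl’s Law concern is easy to quantify: if a quantum co-processor only accelerates a fraction of the pipeline, the overall speedup is capped by the part that stays classical. A back-of-the-envelope sketch (the fractions and speedup factors are illustrative numbers, not Amazon’s):

```python
def amdahl_speedup(accelerated_fraction, accel):
    """Amdahl's Law: overall speedup when only `accelerated_fraction`
    of the workload is sped up by a factor of `accel`."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / accel)

# Even an effectively infinite speedup on 30% of the workload
# caps the end-to-end gain at about 1.43x.
print(amdahl_speedup(0.3, 1e9))  # ≈ 1.43
print(amdahl_speedup(0.9, 100))  # ≈ 9.17
```

This is why the handoff would be piecemeal: the win depends as much on how much of the workload can be carved out for the quantum device as on how fast that device is.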

The other issue, he says, is that you need enough quantum computers — hundreds or thousands — to handle such a load. “We’re still a couple of years away,” he adds.

Here’s the thing about Amazon. While there are plenty of other companies handling logistics and transportation, and still others in the Fortune 500 in retail, Amazon paved the way for seamless delivery expectations and will have to keep upping its game. The compute costs of managing all of this are staggering, and no other company has more at stake in getting this efficient, fast, and accurate. If it takes a quantum push, we expect Amazon to do it — and make it count.

 
