How AI could make the next big crisis way, way worse

There are plenty of big global problems that people are hoping AI can finally help solve: climate change, traffic deaths, loneliness.

But what if AI, faced with a sudden crisis, is actually the wrong tool to manage a big problem in real time? What if it might make a bad situation drastically worse?

That’s the bleak potential future that Anselm Küsters, a tech researcher and historian at the Center for European Policy in Berlin, explored in a research paper published last December titled “AI as Systemic Risk in a Polycrisis.”

If that last word looks unfamiliar, “polycrisis” is an idea laid out by Columbia University historian Adam Tooze to describe the slow-rolling, mutually reinforcing combination of parallel risks we’re living through — risks to climate, markets, and the security of Europe, just to name a few examples.

In that environment, Küsters argues, when something goes gravely wrong, AI systems trained on older data from a relatively “peaceful” world might be woefully ill-equipped to handle a more chaotic one.

How much should we worry about this, and is there anything we can do? I called him yesterday to discuss the origins of his project, the gulf between “data haves” and “data have-nots” in a global crisis, and what the European Union is getting right (and wrong) in the AI Act currently making its way through the European Parliament. An edited and condensed version of our conversation follows:

Why did you decide to write a paper about something outside your field, something that hasn’t really happened yet?

I am an economic historian, so I look at things from a historical perspective, especially technology. What I’ve noticed is that most people are well aware that data can be biased in the sense that some groups are underrepresented, or that there are historical injustices in the data that then get perpetuated by the system. It’s really good that this is now a commonly known problem.

But then I wondered whether there’s also a temporal bias to the data: if all the data we use from the past 20 years have been collected in times of relative macroeconomic and political stability, then when we try to use AI systems to reduce complexity in a polycrisis, they might have the opposite effect and actually make things worse.
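(To make that temporal bias concrete, here is a minimal synthetic sketch. It is not from Küsters’ paper, and the function names, noise levels and “flip” mechanic below are all invented for illustration: a simple classifier is fitted on calm-period data, then scored on data whose underlying relationship has partly broken down.)

```python
# Hypothetical illustration of "temporal bias": a model trained on data from a
# stable era degrades when the underlying relationship shifts in a crisis.
# All variable names and numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_era(n, noise, flip_fraction):
    """Simulate applicants: a single score predicts a binary outcome,
    except that a fraction of outcomes is scrambled during a crisis."""
    x = rng.normal(0.0, 1.0, size=(n, 1))
    y = (x[:, 0] + rng.normal(0.0, noise, size=n) > 0).astype(int)
    flips = rng.random(n) < flip_fraction
    y[flips] = 1 - y[flips]  # the crisis breaks the old pattern for these cases
    return x, y

# "Stable" era: two decades of calm, low-noise observations.
x_train, y_train = make_era(5000, noise=0.3, flip_fraction=0.0)
# "Crisis" era: same features, but the old signal is noisier and partly inverted.
x_crisis, y_crisis = make_era(1000, noise=1.5, flip_fraction=0.25)

model = LogisticRegression().fit(x_train, y_train)

print("accuracy on stable-era data:", accuracy_score(y_train, model.predict(x_train)))
print("accuracy on crisis-era data:", accuracy_score(y_crisis, model.predict(x_crisis)))
```

(Run as written, the model scores well on the calm-era data and noticeably worse on the shifted data; that quiet degradation is the failure mode Küsters is describing.)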

In the paper you cite the work of Cathy O’Neil, who is a huge critic of how math and computer systems can go terribly wrong when they collide with real life. I take it she influenced your approach?

She influenced me a lot, because she comes from a computer science background but takes a broader view: she goes into the real world and looks at actual people and the problems that occur when they encounter these systems. One of the things I found very early on in my research was the story of how credit rating systems malfunctioned during the pandemic. The systems had learned that you don’t shop much online and that you buy most of your groceries in physical stores, so when those patterns suddenly reversed in the early months of the pandemic, of course they malfunctioned. This was the kind of human story that made me think about the problem in a more general sense.

What kind of systems are most vulnerable to the risk you’ve identified?

I give three primary examples: finance, medicine and security. The fundamental problem in these areas is that the systems are largely automated and automatically affect a large number of people at once, which makes it difficult to reverse their effects in a short period of time. What you lack in a crisis is time.

In your paper you mention the “data haves” and “data have-nots.” Who are they, and how would inequality make a crisis like this worse?

We tend to think that the whole world is digitized, but in fact that’s mostly the Western world and various countries that are quite well off. In other countries we lack many indicators or data points we could use to feed these systems, which is a problem because the more data we have, the better we can train our models.

For instance, with more data we could better predict abnormal weather events in developing countries. But even having more data might not always be good, because you can then rely too much on models instead of common sense and human intuition. The more general point to make here is that detecting anomalous events is always difficult with AI because you have to train the system, and by definition anomalies are rare events, so you have a lack of data. Comparing a normal situation with an anomaly requires comparable data that we usually don’t have, and we especially don’t have it in the countries that lack data, where we would need it the most.
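(A toy numerical aside, mine rather than the paper’s: the rainfall figures and the three-standard-deviation rule below are invented, but they show how an anomaly detector’s alarm threshold becomes unstable when the historical baseline is only a handful of points.)

```python
# Hypothetical sketch: rare-event detection with a scarce baseline.
# A simple detector flags a reading as anomalous if it exceeds
# mean + 3 standard deviations of its historical baseline.
import numpy as np

rng = np.random.default_rng(1)

def upper_alarm_threshold(baseline, z=3.0):
    """Alarm limit derived from the baseline window: mean + z * std."""
    return baseline.mean() + z * baseline.std(ddof=1)

# Data-rich setting: a long, calm historical record (1,000 rainfall-like readings).
rich_baseline = rng.normal(100.0, 10.0, size=1000)
print("threshold from a 1,000-point baseline:",
      round(upper_alarm_threshold(rich_baseline), 1))

# Data-poor setting: only five readings are available, all from calm years.
# Re-drawing those five points ten times shows how unstable the threshold is.
sparse_thresholds = [
    upper_alarm_threshold(rng.normal(100.0, 10.0, size=5)) for _ in range(10)
]
print("thresholds from ten 5-point baselines:",
      [round(t, 1) for t in sparse_thresholds])
```

(With the long record the alarm level is pinned down; with five-point records it scatters widely, so whether a genuinely extreme reading gets flagged depends largely on which few observations happened to be collected.)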

What does the European Union’s AI Act do well, and not do well, to mitigate this kind of risk?

The EU’s AI Act takes a risk-based approach, so it’s based on systems’ perceived level of risk. That might not be ideal in the context of avoiding crises, because in a crisis the environment changes so quickly and so dramatically.

We can’t change this, so I think it’s important to have a higher proportion of AI systems classified as “high-risk.” At this point in the negotiations there’s a lot of talk about narrowing the range of systems that will be classified as high-risk, and I think that’s going in the wrong direction.

To some extent you can never fully understand this risk, because machine learning systems are often so sophisticated that even their designers don’t fully understand them. How do you think about risk management given that reality?

Many observers and policymakers think that we should have AI audits, conducted on a yearly basis or whenever a new product is introduced. I agree this is an important first step, but I want to highlight that the future is so unpredictable that no matter how well you do the audit, and no matter how much staff you employ, there will always be blind spots.

That’s partly related to AI systems being a black box, but also to the future being a black box. There are certain risks that might never be quantifiable but that we should still be aware of: we will always have waves of new technology, and you can never know their effects ex ante, only as history unfolds.

The global “decoupling”/realignment/nascent trade war over semiconductors is having, predictably, some unintended geopolitical consequences.

Yesterday POLITICO’s Pieter Haeck reported for Pro subscribers on how the U.S.’ successful efforts to get the Netherlands to stop selling China advanced semiconductor manufacturing technology have driven a wedge between the Netherlands and the European Union. One EU diplomat told Pieter that the move could leave the EU as a whole vulnerable to retaliation by China, despite the deal being solely between the U.S. and the Netherlands, and one policy fellow argued it made the union look weak by virtue of its effective role as a bystander.

And the fallout isn’t limited to the EU and its member nations: POLITICO’s Graham Lanktree and Annabelle Dickson reported yesterday on the repercussions in the U.K., where Downing Street is worried it’s falling behind in the race to decrease its reliance on China. “The U.K. needs to — at pace — understand what it wants its role to be in the industries that will define the future economy,” one lobbyist told Graham and Annabelle.

One major victim of the tech downturn: Microsoft’s AR/VR efforts, which were drastically cut amid the company’s 10,000-person layoff at the end of last month.

There’s one particularly interesting detail to those layoffs: The company’s HoloLens project, which agreed to a contract with the U.S. Army in 2021 that could have been worth up to $22 billion, now seems defunct. Last month Congress rejected a request for $400 million to buy more of the goggles, which have been plagued by issues — like causing headaches and nausea — since their introduction.

Congress instead authorized $40 million for the Army to try to develop a new model of the goggles, according to Bloomberg, but as Microsoft cuts back, that might be a taller order than it once was.