The Mundane Danger Lurking in AI

AI might never become truly intelligent. It can still be wildly unpredictable.

By Jacob Parakilas

Artificial intelligence is increasingly a topic of national security conversations. China has placed it at the center of its national competitiveness strategy; the U.S. Defense Department’s commission on AI has sounded an urgent call for the federal government to do the same; and the impact that AI will have on strategic competition has become an increasingly prominent theme in conversations about the next strategic era.

These assessments rely on a basic underlying projection that AI will become ever more capable, in much the same way that Moore’s law predicted the regular, exponential growth of basic computer processing power. It seems that all sides in the “strategically competitive” future are counting on the development of systems that are increasingly capable of assessing situations and making reliable, complex decisions in an environment where changes occur too rapidly for much, if any, meaningful human control. This is most obviously true on a prospective future battlefield ruled by stealth, speed, and electronic warfare, but it may be equally true off the battlefield: in governmental and industrial espionage, in financial markets, or in the various ways that algorithms are driving and shaping political and cultural conversations.

The broad contours of that future do not necessarily look like the AI-driven apocalypses of science fiction; few theorists believe that an Artificial General Intelligence that matches or exceeds human capabilities is possible in the coming years or decades. The assumptions being made now are slightly more mundane: that the combination of sensors, algorithms, and processors deployed in the near future will be sufficiently sophisticated and reliable to push human decision-making farther up the executive chain.

Those assumptions, though, create a different sort of danger than the existential types of risk that have until now been prevalent in the cultural imagination: a Terminator- or Matrix-type scenario where self-aware machines decide that human civilization is dispensable and act accordingly. Rather than preparing only to deal with implacably hostile AI, whether it operates in the service of a human nation or not, we should also be preparing to deal with AI that simply doesn’t make sense.

It’s important to note here that when we discuss artificial intelligence, we are not talking about a single, unified system. Rather, an artificially intelligent future is a future where a whole range of software platforms and hardware systems are more complex and faster, but not necessarily better integrated with each other, or with us.

Consider the present-day internet, which works on the basis of a set of shared basic protocols with an increasingly complex, dense web of systems layered on top of it. Those systems are operated by a dizzying array of commercial, governmental, and not-for-profit entities, and while most of those actors have a shared interest in keeping it running smoothly, there is far less agreement on how to meaningfully govern or steer it. The gaps in governance allow for all kinds of strange, unpredictable, and undesirable effects such as rating systems being hijacked by bots, abandoned domains being used to inject inappropriate content into mainstream sites, and more.

Now imagine that principle applied to a massively more complex and evolved set of systems capable of independent decision-making, all designed by a widely disparate set of programmers and implemented, upgraded, and maintained by a variety of institutions whose interests may be fundamentally at odds. If the difference between our existing systems and AI systems is that the latter are capable of evolutionary behavior, then both the frequency of unpredictable and unwanted effects and the difficulty of fixing them will increase massively, whether or not the AI systems are, in any meaningful way, “intelligent.”

Perhaps most of these impacts will be minor. Some may even be amusing. But if we are – as it seems we are – planning on offloading increasing responsibility for life-and-death decisions to algorithms, the possibility of unpredictable emergent behavior resulting in real danger or loss needs to be addressed.

Part of the problem is that there is little political capital to be gained in the kinds of highly technical, labor-intensive, and unglamorous work that might limit this type of risk (even accepting that strategic rivals are likely to be cagey about the inner workings of their decision-making systems and that some degree of unpredictability is inevitable as a result). A declaration that U.S. artificial intelligence must be more capable than Chinese artificial intelligence will unlock bipartisan support and substantial appropriations; a declaration that the U.S. must institute more stringent standards for AI training data will not.

If you believe that the encroaching age of artificial intelligence opens the world up to unpredictable and potentially uncontrollable new dangers, the idea that AI might turn out to be significantly more limited may seem comforting. But we should also be aware of the dangers that lurk in unpredictable systems, and work toward managing those subtler but nonetheless important threats.

The Authors

Jacob Parakilas is an author, consultant, and analyst working on U.S. foreign policy and international security.
