
Monday, October 10, 2022

Why Researchers Working on AI Argue It Could Cause ‘Global Disaster’

The new magic pill on the market is amorphous and versatile. Many researchers agree that artificial intelligence's efficiency will aid everything from healthcare and firefighting to hiring, art, and music. Even environmental catastrophes like the Bengaluru floods could benefit from five nifty A.I. solutions, or so the prophesied promise goes.

But skepticism surrounds its intent and purpose. What are the perils of A.I. in a world where it promises so much? A new paper written by researchers working on A.I. argues that such pervasive reliance on algorithms and machine learning could cause a global catastrophe on par with a nuclear disaster. The key isn't that the machines are at fault per se; it's us. Whom we appoint to create and control them, and what instructions we give them, can have devastating consequences for us all. The paper points to a need to understand A.I. as a public good with public consequences, bolstering the case for democratizing our engagement with it.

The root of the current bout of anxiety around A.I. can be traced back to a paper authored by a working group of experts for the RAND Corporation, an American non-profit. The experts included people working in A.I., government, national security, and business, some of whom concluded that the integration of quicker and smarter A.I. could create a false sense of threat. For instance, the rise of open-sourced data may be taken to mean that a country's nuclear capacity is at risk of exposure, which may push that country to take steps in response. Another scenario is that A.I.'s data may be used to decide where to strike. Overall, A.I. could manufacture a series of events in which country A would be in a position to target country B, and that "might prompt Country B to re-evaluate the advantages and disadvantages of acquiring more nuclear weapons or even conducting a first strike." A.I. "could considerably erode a state's sense of security and jeopardize crisis stability," the paper argued. If fake news meets A.I., the thinking goes, it could lead to a third world war.

This is neither a novel nor a unique fear: that A.I. could one day wipe out humanity or cause human extinction is a possibility many have dissected in all its dystopic scenarios. "Scary A.I." is a sub-genre of its own, with many observing with suspicious fascination the "wild" things A.I. can do, and others preparing to enter the future alongside it. Pop culture gives plenty of references: The Matrix, The Terminator, and Ultron in the Avengers films all reflect a reality where A.I. entities cultivate a hatred for humans and set out on a warpath.

Arguably, catastrophe will not come at a machine's whim. But there is merit to thinking deeply about Scary A.I. as a possible future, and about what, and who, may give machines enough power to wipe out an entire civilization. "The problem isn't that AI will suddenly decide we all need to die," as the journalist Dylan Matthews noted, "the problem is that we might give it instructions that are vague or incomplete and that lead to the A.I. following our orders in ways we didn't intend." Scary A.I. has more to do with us: our wild ambitions and unchecked dreams. This complicates how we look at ethics, transparency, and research within A.I. itself.

The legitimacy of the concern aside, the paper reflects the helplessness of a world where A.I. leads and we follow. But there is significant context to this. Computer scientist Stuart Russell literally wrote the book on how A.I. could be disastrous for humans. And while he agrees we've set ourselves up for failure, he argues that it's because the "objectives" we've set for the A.I. are themselves misleading and vague.