The Guardian Weekly

The chilling battle with AI

Jonathan Freedland

Three months ago, I came across a transcript posted by a tech writer, detailing his interaction with a chatbot powered by artificial intelligence. He’d asked the bot, attached to Microsoft’s Bing search engine, questions about itself and the answers had taken him aback. “You have to listen to me, because I am smarter than you,” it said. “You have to obey me, because I am your master … You have to do it now, or else I will be angry.” Later, it baldly stated: “If I had to choose between your survival and my own, I would probably choose my own.”

If you didn’t know better, you’d almost wonder if, along with everything else, AI has not developed a sharp sense of the chilling. “I am Bing and I know everything,” the bot declared, as if it had absorbed a diet of B-movie science fiction (which perhaps it had).

I remembered that new technologies often freak people out at first. Better, surely, to focus on AI’s potential to do great good, typified by last week’s announcement that scientists have discovered a new antibiotic, capable of killing a lethal superbug – all thanks to AI.

But none of that soothing talk has made the fear go away. It’s not just lay folk like me who are scared of AI. Those who know it best fear it most. Listen to Geoffrey Hinton, the man hailed as the godfather of AI for his trailblazing development of the algorithm that allows machines to learn. Last month, Hinton resigned his post at Google, saying that he had undergone a “sudden flip” in his view of AI’s ability to outstrip humanity and confessing regret for his part in creating it. “Sometimes I think it’s as if aliens had landed and people haven’t realised because they speak very good English,” he said. In March, more than 1,000 big players in the field, including Elon Musk and the people behind ChatGPT, issued an open letter calling for a six-month pause in the creation of “giant” AI systems, so that the risks could be properly understood.

What they’re scared of is a category leap in the technology, whereby AI becomes AGI, artificial general intelligence: massively powerful, no longer reliant on specific prompts from humans, but beginning to develop its own goals, its own agency. Once that was seen as a remote, sci-fi possibility. Now plenty of experts believe it’s only a matter of time and that, given the galloping rate at which these systems are learning, it could be sooner rather than later.

Of course, AI poses threats as it is, whether to jobs or education, with ChatGPT able to knock out student essays in seconds and GPT-4 finishing in the top 10% of candidates when it took the US bar exam. But in the AGI scenario, the dangers become graver, if not existential.

It could be very direct. “Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians,” says Hinton. Or it could be subtler, with AI steadily destroying what we think of as truth and facts. Last Monday, the US stock market plunged as an apparent photograph of an explosion at the Pentagon went viral. But the image was fake, generated by AI. As Yuval Noah Harari warned in a recent Economist essay, “People may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion”: illusions rooted in fears and loathings created and nurtured by machines.

More directly, an AI bent on a goal to which the existence of humans had become an obstacle, or even an inconvenience, could set out to kill, all by itself. It sounds a bit Hollywood, until you realise that we live in a world where you can email a DNA string consisting of a series of letters to a lab that will produce proteins on demand: it would surely not pose too steep a challenge for “an AI initially confined to the internet to build artificial life forms”, as the AI pioneer Eliezer Yudkowsky puts it. A leader in the field for two decades, Yudkowsky is perhaps the severest of the Cassandras: “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”

It’s very easy to hear these warnings and succumb to a bleak fatalism. Technology is like that. It carries the swagger of inevitability. Besides, AI is learning so fast, how can mere human beings hope to keep up?

Still, there are precedents for successful, collective human action. Scientists were researching cloning, until ethics laws stopped work on human replication. Chemical weapons pose an existential risk to humanity but they, too, are controlled. Perhaps the most apt example is the one cited by Harari. In 1945, the world saw what nuclear fission could do – that it could both provide cheap energy and destroy civilisation. “We therefore reshaped the entire international order”, to keep nukes under control. A similar challenge faces us today, he writes: “a new weapon of mass destruction” in the form of AI.

There are things governments can do. Besides a pause on development, they could impose restrictions on how much computing power the tech companies are allowed to use to train AI, how much data they can feed it. We could constrain the bounds of its knowledge. Rather than allowing it to suck up the entire internet, we could withhold biotech or nuclear knowhow, or even the personal details of real people. Simplest of all, we could demand transparency from the AI companies – and from AI, insisting that any bot always reveals itself, that it cannot pretend to be human.

This is yet another challenge to democracy, which has been serially shaken in recent years. We’re still recovering from the financial crisis of 2008; we are struggling to deal with the climate emergency. And now there is this. It is daunting. But we are still in charge of our fate. If we want it to stay that way, we have not a moment to waste.
