The Guardian Weekly

What is AI – and what is all the fuss about?

What is AI, and will it make us all redundant?

By Dan Milmo

What is artificial intelligence? The term was coined in 1955 by a team including Harvard computer scientist Marvin Minsky. With no strict definition of the phrase, almost anything more complex than a calculator has been called artificial intelligence by someone.

But in the current debate, AI has come to mean something else. It boils down to this: most old-school computers do what they are told, following instructions given to them in the form of code. For them to solve more complex tasks, scientists are trying to teach them to learn in a way that imitates human behaviour.

Computers cannot be taught to think for themselves, but can be taught to analyse information and draw inferences from patterns in datasets. The more data you give them – computer systems can now cope with vast amounts of information – the better they should get at it.

The most successful versions of machine learning in recent years have used a system known as a neural network, which is modelled at a very simple level on how we think a brain works.
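To give a rough sense of what "learning from examples" means, here is a minimal, illustrative sketch in Python (a toy, not the code behind any real product): a single artificial "neuron" adjusts its internal numbers until its guesses match the labelled examples it is shown, rather than following hand-written rules.

```python
# Toy sketch of machine learning: one artificial "neuron" learns, from
# labelled examples, to predict whether two numbers add up to more than 1.0.
import math
import random

def predict(weights, bias, x):
    # Weighted sum of the inputs, squashed into the range 0-1 (a sigmoid).
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Build the training data: random pairs, labelled 1 if their sum exceeds 1.0.
random.seed(0)
data = []
for _ in range(500):
    x = (random.random(), random.random())
    data.append((x, 1.0 if x[0] + x[1] > 1.0 else 0.0))

weights, bias, rate = [0.0, 0.0], 0.0, 0.5
for _ in range(200):                 # repeated passes over the examples
    for x, label in data:
        error = predict(weights, bias, x) - label   # how wrong the guess is
        # Nudge the weights slightly in the direction that reduces the error.
        weights = [w - rate * error * xi for w, xi in zip(weights, x)]
        bias -= rate * error

print(predict(weights, bias, (0.9, 0.8)))   # close to 1: "the sum is large"
print(predict(weights, bias, (0.1, 0.2)))   # close to 0: "the sum is small"
```

A neural network of the kind behind ChatGPT chains billions of such weighted units together, but the underlying idea is the same: show the system examples and adjust the numbers until its outputs match them – and the more data it sees, the better its guesses should become.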

Where might I start to encounter more chatbots or AI content?

Almost anywhere you currently interact with other people is being eagerly assessed for AI-based disruption. Chatbots in customer service roles are nothing new but, as AI systems become more capable, expect to encounter more and more of them, handling increasingly complex tasks. Voice synthesis and recognition technology means they’ll also answer the phone, and even call you.

And then there are the less obvious cases. The systems can be used to label and organise data, to help in the creation of simple programs, to summarise and generate work emails – wherever text is involved, someone will try to hand the task to a chatbot.

The uses sound quite benign. Why are experts linking AI to the end of humanity or society as we know it?

We don’t know what happens if we build an AI system that is smarter than humans at everything it does. Perhaps a future version of a large language model-based chatbot like ChatGPT, for instance, decides that the best way it can help people answer questions is by slowly manipulating its users into putting it in charge. Or an authoritarian government hands too much autonomy to a battlefield robotics system, which decides the best way to achieve its task of winning a war is to first stage a coup in its own country.

Who will make money from AI?

The big tech companies at the forefront of AI development are San Francisco-based OpenAI, Google’s parent Alphabet and Microsoft, which is also an investor in OpenAI. Prominent AI startups include British firm Stability AI – the company behind image generator Stable Diffusion – and Anthropic.

For now, the private sector is leading the development race and is best placed to gain financially. According to Stanford University’s annual AI Index Report, the tech industry produced 32 significant machine-learning models last year, compared with three produced by academia.

How can I tell if my job is at risk from AI?

Asked recently what jobs would be disrupted by AI, Sundar Pichai, the Google chief executive, said: “Knowledge workers.” This means writers, accountants, architects, lawyers, software engineers – and more. OpenAI’s CEO, Sam Altman, has identified customer service as a vulnerable category where he says there would be “just way fewer jobs relatively soon”. The boss of technology group IBM, Arvind Krishna, has said he expects nearly 8,000 back-office jobs at the company, such as human resources roles, to be replaced by AI over a five-year period.

Some of it sounds dangerous; why is it being released to the public without regulation?

The recent history of tech regulation is that governments and regulators scramble into action once the technology has already been unleashed. For instance, nearly two decades after the launch of Facebook, the UK’s online safety bill, which seeks to limit the harms caused by social media, is only just about to become law.

The same is happening with AI. That’s why last week, a group of 23 senior experts in the technology released a policy proposal document warning that powerful AI systems threaten social stability and that companies must be made liable for their products.

Their document urged governments to adopt a range of policies, including allocating one-third of their AI research and development (R&D) funding, and companies one-third of their AI R&D resources, to safe and ethical use of systems; giving independent auditors access to AI labs; establishing a licensing system for building cutting-edge models; obliging AI companies to adopt safety measures if dangerous capabilities are found in their models; and making tech companies liable for foreseeable and preventable harms from their AI systems.
