The big idea: Should we worry about artificial intelligence? – The Guardian

Ever since Garry Kasparov lost his second chess match against IBM’s Deep Blue in 1997, the writing has been on the wall for humanity. Or so some like to think. Advances in artificial intelligence will lead – by some estimates, in only a few decades – to the development of superintelligent, sentient machines. Movies from The Terminator to The Matrix have portrayed this prospect as rather undesirable. But is this anything more than yet another sci-fi “Project Fear”?

Some confusion is caused by two very different uses of the phrase “artificial intelligence”. The first sense is, essentially, a marketing one: anything computer software does that seems clever or usefully responsive – like Siri – is said to use “AI”. The second sense, from which the first borrows its glamour, points to a future that does not yet exist, of machines with superhuman intellects. That is sometimes called AGI, for artificial general intelligence.

How do we get there from here, assuming we want to? Modern AI employs machine learning (or deep learning): rather than programming rules into the machine directly we allow it to learn by itself. In this way, AlphaZero, the chess-playing entity created by the British firm DeepMind (now part of Google), played millions of training matches against itself and then trounced its top competitor. More recently, DeepMind’s AlphaFold 2 was greeted as an important milestone in the biological field of “protein-folding”, or predicting the exact shapes of molecular structures, which might help to design better drugs.

Machine learning works by training the machine on vast quantities of data – pictures for image-recognition systems, or terabytes of prose taken from the internet for bots that generate semi-plausible essays, such as GPT-2. But datasets are not simply neutral repositories of information; they often encode human biases in unforeseen ways. Recently, Facebook’s news feed algorithm asked users who saw a news video featuring black men if they wanted to “keep seeing videos about primates”. So-called “AI” is already being used in several US states to predict whether candidates for parole will reoffend, with critics claiming that the data the algorithms are trained on reflects historical bias in policing.

A real AI, Nick Bostrom suggests, might manufacture nerve gas to …….

Source: https://www.theguardian.com/books/2021/nov/29/the-big-idea-should-we-worry-about-artificial-intelligence