Nearly 10 years ago, Professor Stuart Russell, one of the world’s leading experts on artificial intelligence, was on the Paris Metro, listening to Samuel Barber’s Agnus Dei, when he touched upon what he thinks may be the key to combating any potential threat posed to us by AI. ‘It is an amazing piece,’ he says, in the soft but gently insistent fashion of a reserved man with immense authority in his field. ‘It made me think about what’s important in the quality of human experience. I was having such an experience at the time. And it occurred to me that if artificial intelligence has a purpose, it is to improve the overall quality of human experience.
‘Then I thought: "But AI doesn’t know what the quality of human experience is." And I realised that this was an interesting new way of thinking about AI. The idea came to me in a flash. So I tucked it away.’
Depending on how the rise of superintelligent machines plays out – General Sir Nick Carter, the head of the British Armed Forces, for example, has just warned that AI weaponry is developing so quickly that humans may in future have limited control over how battles are fought – Russell’s epiphany may be celebrated as the moment that humanity began the practical process of defending itself from the most powerful adversary it has ever faced.
A slew of books over recent years – notably Max Tegmark’s Life 3.0 and Nick Bostrom’s Superintelligence – has warned of the dangers of runaway AI escaping our control and, literally or metaphorically, turning us all into paper clips.
But Russell, in his book Human Compatible: AI and the Problem of Control, has begun to calculate the solutions we need, not only to survive, but also to harness the immense possibilities for good of AI. As Russell, who runs the Centre for Human-Compatible Artificial Intelligence (CHAI) at UC Berkeley in America, began mulling over control mechanisms, he realised that the entire field of AI had made a dangerous mistake. ‘Humans,’ he writes, ‘are intelligent to the extent that our actions can be expected to achieve our objectives.’
What went wrong was that machines came to be deemed intelligent by the same criterion – ‘to the extent that their actions can be expected to achieve their objectives’.
But this way of thinking, Russell concluded, was a catastrophe; first because it ceded control over the means machines might use to achieve their ends, and second because humans are often inept at setting out their desires.
The unintended effects, he writes, are already apparent. Social media algorithms given the simple, non-malicious objective of maximising clicks online do so not by giving us stuff suited to our existing, vague, hard-to-predict preferences, but by giving us stuff at the fringes of our belief systems, constantly nudging us to the political extremes. They do so not because their creators want more extremism, but because the more predictable we are, the easier it is to provide links we will click on. And extremists are very predictable.
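The dynamic Russell describes can be caricatured in a few lines of toy code (my own illustration; the model, its click function and its numbers are invented, not from the book): a feed that always shows content a notch more extreme than the user’s current position ends up with more total clicks than one that simply matches the user, because in this model the predictable extremes click more reliably.

```python
import math

def run_policy(nudge, steps=100, alpha=0.5):
    """Cumulative expected clicks from a toy feed.

    A user sits at position x on a [-1, 1] opinion axis. Clicks are
    likelier when content is close to x, and -- the key assumption,
    standing in for 'extremists are very predictable' -- the base
    click rate rises with |x|. Consuming content pulls x toward it.
    """
    x, total = 0.0, 0.0
    for _ in range(steps):
        c = min(x + nudge, 1.0)                          # content shown
        p = (0.5 + 0.5 * abs(x)) * math.exp(-(c - x) ** 2)
        total += p                                       # expected clicks this step
        x = min(x + alpha * (c - x), 1.0)                # user drifts toward content
    return total

match_clicks = run_policy(nudge=0.0)  # always match the user's position
drift_clicks = run_policy(nudge=0.2)  # always show slightly more extreme content
assert drift_clicks > match_clicks    # the extremising feed wins on clicks
```

The matching feed collects a flat 0.5 expected clicks per step; the nudging feed sacrifices a little immediate click probability, drags the user toward the extreme, and thereafter collects nearly one click per step.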
‘The consequences include the resurgence of fascism,’ notes Russell. ‘Imagine what a really intelligent algorithm would be able to do.’ As a result, he came up with an entirely new formulation to govern AI: ‘Machines are beneficial to the extent that their actions can be expected to achieve our objectives.’
It is just a few words. But in it, he thinks, lies the logical, mathematical kernel that will keep man in charge of superintelligent machines.
He says: ‘My... principles are not laws for robots to follow. They are guarantees that the software is beneficial to you. We show that it will allow itself to be switched off. If you can’t switch a machine off, the game is over.’ To find, as he delicately puts it, that ‘my own field of research posed a potential risk to my own species’ was no great shock to Russell. ‘I read a lot of science-fiction as a child.’
But in the early Eighties, when Russell was completing his studies at Oxford, the very idea of machines approaching human-level intelligence seemed like a joke. Indeed, when Russell told his professors at Oxford that he had been accepted by Stanford to do a PhD in AI, they couldn’t contain their amusement.
He was already well used to taking his scholastic destiny into his own hands. At his Lancashire prep school, his maths teacher told him ‘you’ve done all the maths we have’ and gave him textbooks to teach himself. Russell was 10. At St Paul’s, aged 12, he sought out a technology course (from Twickenham College), which the public school did not offer.
These days, Russell says, he is sometimes approached as if he is about to press the nuclear button. ‘I’ve had people say, "Take your hands off the keyboard! You’re putting humanity in danger".’
But to him, it is more, not less, research that is needed. And the clock is ticking, because AI advances in a very unpredictable way. Today, you can give vocal instructions to your phone only because speech recognition – deemed a major AI challenge just a few years ago – has been rapidly mastered.
Russell thinks there are ‘at least half a dozen significant steps [still] to get over’ before we create superintelligent machines. But that could happen ‘overnight’. ‘I know some serious, prominent researchers who say we only have five years,’ says Russell. Not long to cobble together safeguards that must be infallible first time round.
His book sets out how that might be done by ensuring, say, that machines do not assume they know precisely what we want them to do, and that they ask before acting. That provides a mathematical mechanism to guarantee the greatest safeguard of all: the off-switch. And all this because of the on-switch that piped Barber’s Agnus Dei into Russell’s headphones all those years ago. One day we might all be grateful for that, altogether humbler, machine.
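The off-switch argument can be sketched in a few lines of toy code (my own illustration of the intuition, not the formalism in Russell’s book): a machine that is genuinely uncertain about the human’s utility for a proposed action does at least as well by deferring to the human – who can veto it, i.e. switch it off – as by acting outright, and strictly better while real uncertainty remains.

```python
import random

def expected_values(utility_samples):
    """Compare acting outright with deferring to the human.

    utility_samples are draws from the machine's belief about the
    human's utility u for a proposed action; the machine does not
    know u itself. If it defers, the human vetoes (switches it off)
    whenever u < 0, so the deferring machine collects max(u, 0).
    """
    n = len(utility_samples)
    act = sum(utility_samples) / n                      # act regardless
    defer = sum(max(u, 0.0) for u in utility_samples) / n  # let the human veto
    return act, defer

random.seed(0)
# Belief: the action is probably good (mean +0.2) but might be harmful.
samples = [random.gauss(0.2, 1.0) for _ in range(100_000)]
act, defer = expected_values(samples)
assert defer >= act            # deferring never does worse
assert defer > max(act, 0.0)   # and strictly better under uncertainty
```

The inequality is the point: because max(u, 0) ≥ u for every sample, the machine’s own expected-value calculation tells it to keep the human – and the off-switch – in the loop, rather than being ordered to.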
The Daily Telegraph