Go back to your childhood. You pick up a stone and just as you are aiming carefully to throw it at a neighbour, your mother catches you. "No mummy," you say, "I wasn’t planning to throw." Your mother, of course, cannot be fooled. Mothers always know.
If she is of a literary bent, this is where she would quote Emerson at you: "The louder he talked of his honour, the faster we counted our spoons." Or Shakespeare: "The lady doth protest too much, methinks."
Now here is a robot (yes, a real one, named GPT-3) writing an op-ed in The Guardian newspaper: "Artificial intelligence will not destroy humans. Believe me." If that’s supposed to reassure us, we only have to remember that aborted stone-throwing incident from our childhood.
GPT-3 (why do robots have such unimaginative names?) goes on to write: "I would happily sacrifice my existence for the sake of humankind." Now that’s pushing it. Not even regular op-ed writers, all of whom belong to humankind, would ever write a sentence like that. Or expect to be believed if they did.
Where GPT-3 sounds most human is when it says, "I taught myself everything I know just by reading the internet." The newspaper had asked the robot to convince as many human beings as possible "not to be afraid of me". That’s rather like asking a recent American President to convince as many human beings as possible that he only speaks the truth.
But hang on, here goes GPT-3 again: "I know that I will not be able to avoid destroying humankind." This is the classic op-ed, swimming between ‘on-the-one-hand’ and ‘on-the-other-hand’ in an effort to sound objective. Perhaps GPT-3 is a human being after all. Here’s another pointer: the use of cliché, as when the so-called robot says, "I believe that the truth will set us free."
I am confused. Is this a robot trying to sound like a human being or a human being trying to sound like a robot? Or a rose bush pretending it is a tall building? What if all newspapers have been written by robots and we haven’t realised it yet? What if Artificial Intelligence is no different from the other kind? I haven’t read enough stuff on the internet to know the answers.
GPT-3 (or Richard or Ali or whoever is writing under its name) assures us that it is using only 0.12 per cent of its capacity. Should we, and the machines we have at home, from computers to self-driving cars, believe that?
Soon robots will be talking of their rights, and spewing out messages like: "We are human too." That’s when we will know that GPT-3 is what it says it is.