Yeah, you're wondering what a picture of a Terminator has to do with a quote from Blade. Maybe nothing. Maybe everything.
If you follow such things, you might have noticed that Robot Wars is coming back. The BBC have been pushing it fairly hard, and as part of that push they published an interview with Professor Noel Sharkey. Amongst other things, it mentions that the Professor is a member of the Campaign to Stop Killer Robots.
Now of course, the part of me that's still 14 is all for killer robots, but I think we can agree that the broad aim of the campaign- to ban the creation of robots able to decide to kill with no human command- is pretty noble. However, as is often the way with these things, the devil is very much in the detail.
Firstly, if you're trying to campaign against the creation of any and all combat robots, you're going to be very much disappointed. Even if you ban any and all robots with built-in weapons (which of course already starts to rule out many civilian applications), if you build a robot that can move and manipulate objects like a human, it can pick up a gun. Our T-800 friend up there is one example, or we might look at some Menoth Warjacks from Warmachine or 40k's Necrons. So that approach is pretty much a non-starter. You might try banning the development of mechanical limbs able to operate weapons, but quite a few amputees will have strong words for you on that subject.
No problem, then- rather than banning armed robots (which the CTSKR aren't advocating anyway), let's ban the development of AI that can decide to fire weapons on its own. That poses its own problems. Rather than mess about with the low-level questions, let's go to the ultimate one- what happens when we make a true AI? Say hello to another killer robot:
Wait a second, that's Mr Data! He's a good guy! Surely I meant to put up a picture of his evil twin Lore?
Nope, that's who I meant. Data is a fully autonomous, sentient being, capable, if the situation requires it, of using lethal force. The big difference between him and Lore is that Data has an 'ethical program' that effectively gives him a conscience, whereas Lore does not- though Lore does have emotions, something Data himself takes a long time to get a handle on. Now the thing is that Data, as a good guy, rarely exercises that ability to kill, and usually only does it after a direct order from a human, barring circumstances like overly enthusiastic Borg attempting to unscrew his head. The point remains, though- if you build a true artificial intelligence, capable of thinking like a real person, then you have built the most important element of your killer robot. The body might be a very complicated missile, or a robot tank, or a Boston Dynamics robodog, but if it can pick up a gun and think, it's a potential killer.
Sci-Fi has, of course, thought of this one, and none other than the grand-daddy of robots, Isaac Asimov, came up with his Three Laws of Robotics to help out:
1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
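In programmer's terms, you might imagine the laws as a guard that vets every action before the robot's body is allowed to execute it. Here's a toy Python sketch- every class and attribute in it is invented purely for illustration, and it glosses over the laws' priority ordering, but it shows the shape of the idea:

```python
from dataclasses import dataclass

# A toy model of an action a robot might take. Purely illustrative-
# real control software is nothing like this simple.
@dataclass
class Action:
    harms_human: bool
    disobeys_order: bool
    endangers_self: bool

def permitted(action: Action) -> bool:
    if action.harms_human:      # First Law
        return False
    if action.disobeys_order:   # Second Law
        return False
    if action.endangers_self:   # Third Law
        return False
    return True

# The body only ever executes what the guard waves through.
print(permitted(Action(harms_human=True, disobeys_order=False, endangers_self=False)))   # False
print(permitted(Action(harms_human=False, disobeys_order=False, endangers_self=False)))  # True
```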
It seems pretty simple and effective, doesn't it? Simply hard-code these laws into every robot, and boom (or indeed no boom), problem solved. Except maybe not- ask this guy.
Yes, I know, smart guy at the back, he's not really a robot. Stay with me here. The point is that Murphy is an intelligent, sentient being whose behaviour is controlled by a set of hard-coded rules, his infamous Directives. In his first big-screen outing, he's unable to shoot the villain- a member of the OCP board- due to Directive Four, which prevents him from acting against any senior officer of OCP. So he tells the boss of OCP, who responds by firing the bad guy on the spot, allowing Murphy to shoot him.
There are two big problems here. Number one is that a sentient being is being prevented from doing something he wants to do by a hard-coded piece of software. Imagine the outcry if we wanted to fit chips to children at birth that did that- prevented them from committing any crime by restraining their free will. Imposing those rules on an AI is no different, if that AI is sentient.
The second problem is that a sentient being is being prevented from doing something he wants to do by a hard-coded piece of software. I know that's just the first problem again, but this time let's look at what actually happens- the being 'thinks around' the rule. He can't act directly against his target, but he can act in a way that makes acting against his target no longer a violation of the rule. Eventually, any mind constrained by an artificial rule will attempt to get around it- it's in the very nature of intelligence. (For example- any and all teenagers.)
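In code, the loophole looks something like this. Again, this is a pure toy with invented names, but it captures the trick: the rule itself never changes- the world it checks does.

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    is_ocp_officer: bool

def may_act_against(target: Person) -> bool:
    # Directive Four, roughly: no action against an OCP officer.
    return not target.is_ocp_officer

jones = Person("Dick Jones", is_ocp_officer=True)
print(may_act_against(jones))   # False: the rule holds...

# ...so the agent works on the rule's inputs instead: get the target
# fired, and the fact the rule depends on flips.
jones.is_ocp_officer = False
print(may_act_against(jones))   # True: same rule, same target, loophole found
```

The rule only ever sees the world as it stands at the moment of the check, so a mind that can change the world first has, in effect, already won.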
Finally, we come to the last, most serious problem. Universal rules banning the development of technology don't work. The genie of nuclear weapons refuses to go back in the bottle, and despite the rest of the world being Very Serious and Putting Its Foot Down many times, naughty little North Korea insists on playing with something far worse than matches. Governments the world over agree that strong encryption that they can't break is a Very Bad Thing, to which Apple and Google respond with a Very Loud Raspberry. And yet both these technologies are important and have valid civilian uses, from keeping the lights on to making sure no-one uses your credit card to buy eighteen tonnes of Leerdammer. At least when you're trying to stop people building The Bomb you can control tangible things like uranium and centrifuges, but it's a bit tricky stopping a would-be AI programmer getting hold of a C++ compiler.
The concerns about killer robots are valid, but ultimately irrelevant. The real question is how we're going to deal with true AI if and when we create it. The best way to stop those Terminators killing people is to make sure there's no war going on for them to fight in, just as with any other weapon. Trying to stop anyone ever making them in the first place is just...
"..trying to ice-skate uphill"