Robots are now a fact of war, but the prospect of androids that can hunt and kill on their own should give us all pause
When U.S. forces invaded Iraq in 2003, they fought a traditional war of human on human. Since then, robots have joined the fight. Both there and in Afghanistan, thousands of “unmanned” systems dismantle roadside IEDs, take that first peek around the corner at a sniper’s lair and launch missiles at Taliban hideouts. Robots are pouring onto battlefields as if a new species of mechatronic alien had just landed on our planet.
It is not the first time that the technology of warfare has advanced more rapidly than the body of international law that seeks to restrain its use. During World War I, cannons fired chemical shells and airplanes dropped bombs on unsuspecting cities. Only later did nations reach agreement on whether it was acceptable, say, to target a munitions factory next to a primary school.
Something similar is happening today, with potentially even more profound and disturbing consequences. As Brookings Institution analyst P. W. Singer describes in “War of the Machines,” the rise of robots threatens to make obsolete the rule book by which nations go to war. Armed conflict between nation-states is brutal, but at least it proceeds according to a set of rules grounded both in international law and in the demands of military discipline. It is not true that anything goes in the heat of battle. “Such rules are certainly not always followed, but their very existence is what separates killing in war from murder and what distinguishes soldiers from criminals,” writes Singer in Wired for War, his recent popular book on the military robotics revolution.
Those rules are stretched to their breaking point when robots go to war. The legal and ethical questions abound. Who is accountable when a Predator’s missile hits the wrong target? Missiles from errant drones have already killed as many as 1,000 civilians in Iraq, Afghanistan and Pakistan. Does responsibility reside with the field commander in the Middle East whose spotters identified the “target of interest”? Or should blame be apportioned to the “remote pilot” stationed at a military base near Las Vegas who launched the strike from 7,000 miles away? And what about the software engineer who might have committed a programming error that caused a misfire?
Considering rules of engagement for war-at-a-distance raises a surreal set of questions. Does the remote operator in Nevada remain a legal combatant—in other words, a legitimate enemy target—on the trip after work to Walmart or to a daughter’s soccer match? Would an increasingly blurred line between warrior and civilian invite attacks on U.S. soil against homes and schools?
Remote-controlled robots are here to stay, and rules can be worked out to regulate their use. But the more serious threat comes from semiautonomous machines over which humans retain nothing more than last-ditch veto power. These systems are only a software upgrade away from fully self-sufficient operation. The prospect of androids that hunt down and kill of their own accord (shades of Terminator) should give us all pause. An automatic pilot that makes its own calls about whom to shoot violates the “human” part of international humanitarian law, which recognizes that some weapons are so abhorrent that they should simply be eliminated.
Some might call a ban on autonomous robots naive or complain that it would tie the hands of soldiers faced with irregular warfare. But although robots have clear tactical advantages, they carry a heavy strategic price. The laws of war are an act not of charity but of self-interest; the U.S. would be weakened, not strengthened, if chemical and biological weapons were widespread, and the same is true of robots. They are a cheap way to offset conventional military strength, and other nations and groups such as Hezbollah in Lebanon are already deploying them. The U.S. may not always be the leader in this technology and would be well advised to negotiate restrictions on their use from a position of strength. We can never put the genie back into the bottle, but putting a hold on further development of this technology could limit the damage.