The Patriot Post® · An Emphatic 'Yes' to Killer Robots

By Rich Lowry
https://patriotpost.us/opinion/93412-an-emphatic-yes-to-killer-robots-2022-12-09

In the mid-1990s, Cyberdyne Systems Corporation created an artificial intelligence-based defense system called Skynet. When the system achieved self-awareness on August 29, 1997, it decided that humanity was the enemy and precipitated a devastating nuclear war.

And that’s pretty much why the San Francisco board of supervisors reversed an initial decision to allow its police force to deploy killer robots in extreme situations.

Of course, Skynet, the archvillain of the “Terminator” franchise, isn’t real. Yet when the topic is robots, very few people seem to care.

There is indeed a large and entertaining body of movies about creepy and dangerous robots, from “Metropolis” to “Ex Machina,” from “The Day the Earth Stood Still” to “I, Robot,” but the key word in science fiction is “fiction.”

Taking cues from these films about how we should use robots is a little like trying to learn how to handle criminal gangs from “Minions: The Rise of Gru.”

The initial vote in San Francisco and its rapid reversal — plus, the rollout of an AI bot that can write reasonably well — have brought more handwringing about the potential threats of our technological future.

The risk is that we’ll take outlandish dystopian scenarios seriously and allow a poorly informed Luddism, combined with the special pleading of potentially threatened incumbent industries, to crimp technological advance.

Robots have had terrible PR for going on a century now, with little or no justification. What have they ever done to anyone, besides vacuum the corners of our houses and maybe deliver a pizza? On the basis of the historical record, it is robots who should fear humans. We are guilty of every imaginable crime, sometimes on an unspeakable scale; the Roomba might occasionally startle the dog.

The phrase “killer robots” is irresistible to people and, of course, has, shall we say, negative connotations. Still, robots are only a tool like any other.

The police already avail themselves of all sorts of mechanical implements that assist their efforts to track down suspects and, if necessary, kill them, from radios to cars to battering rams to helicopters to, of course, firearms. If we trust a police officer with, say, a Glock 19 — a lethal weapon — there’s no good reason to deny him or her a killer robot during a mass shooting or hostage-taking.

It’s always easy to say someone else should put themselves in harm’s way. There will come a day when insisting that police not deploy robots will seem like insisting that every mission to neutralize a terrorist be flown by a manned aircraft instead of a drone.

By the same token, we don’t ask members of the bomb squad to poke and prod potential bombs themselves when they can have robots do it for them.

In Dallas in 2016, police used a robot mounted with explosives to take out a sniper who had shot and killed five officers. What would have been more dystopian — more officers getting shot, or a killer robot getting the job done without exposing anyone else to harm?

The deepest fear about robots and AI is that they will become so sophisticated and advanced they will spiral out of our control.

Even if this were theoretically possible, we are extremely far away from the time when robots achieve human-like autonomy, or when AI matches our intelligence. Human intelligence is still such a mystery — and the variety of human interactions that we take for granted so subtle and vast — that truly replicating anything approaching it is like trying to send a manned mission to Proxima Centauri b.

It’s true that robots, like every other technological advance, destroy jobs. They also create new ones.

With the U.S. experiencing lackluster productivity growth since 2010, we need the best robots and AI that we can muster. We shouldn’t fear them just because — decades-old spoiler alert — HAL turns out to be a dastardly villain in “2001: A Space Odyssey.”

© 2022 by King Features Syndicate