Google’s Driverless Car Threatens to Unleash New Waves of Terrorism

To science fiction fans, the idea of one day owning a self-driving car is pretty fascinating. And if you haven’t seen all the hoohah in the news yet, Google has developed a driverless car that several states and the District of Columbia have approved for experimental development. That means it is now legal to license and operate an automated vehicle on the roads of states like Nevada and Florida.

I personally find these legislative actions quite alarming. You have to ask if Google’s driverless car can be weaponized — and I suspect very strongly that the answer, in the context of today’s legal requirements for these vehicles, is a resounding “YES”.

Google’s driverless car — an early prototype.

Using autonomous vehicles in warfare is nothing new. The Israelis developed aerial drone technology in cooperation with the United States decades ago to keep an eye on their neighbors. The US introduced flying drones as part of its intelligence arsenal in the War on Terror’s multiple theaters, and the drones have since been armed with missiles and bombs. In fact, the weaponized drones have spurred much debate over the collateral deaths that continue to mount from drone strikes as militants around the world surround themselves with human shields (often their own wives and children) to discourage the strikes.

More benign drones (robots) are used to disarm terrorists’ roadside bombs and booby-traps by the military and police forces, and to carry/deliver critical supplies in dangerous terrain that is exposed to enemy fire. Police agencies are also starting to implement the use of drone technology for civil surveillance (probably mostly to track fleeing criminals and drunk drivers).

My great fear is that the American public will have become so desensitized to the use of robots by the military and police that they will lose sight of the fact that a commercial, civilian market in robot technology can easily be compromised by hostile groups if we fail to put safeguards in place to prevent the robots from being weaponized.

Scene from ‘I, Robot’ showing autonomous robots walking through a crowd of humans.

It’s probably too soon to be concerned about your Roomba robot vacuum rolling out of your home, down the street, and blowing up the local high school — but as we begin to introduce “personal assistant” humanoid robots into our lives we’ll inevitably give them some autonomy in their movements, just as in the Will Smith movie “I, Robot” (based on a series of short stories by Isaac Asimov). What would it take to weaponize one of these robots? In fact, all you would have to do is strap a bomb to its chassis and allow it to go about its normal business. Use a cell phone to set off the explosion and no one will ever know you were behind the crime.

Passive weaponization would be much easier to devise than active weaponization, in which the robots’ on-board processing is compromised. Still, if a criminal organization or terrorist group were to hack into the control systems of driverless cars and personal assistant robots, it could send them across borders, into public buildings, and through all sorts of complex maneuvers that would attract little to no attention, ultimately lining them up to execute all sorts of potentially devious plots.

Weaponized robotic systems don’t even have to destroy or damage infrastructure. They can just tie up our communications and transportation systems, distract emergency responders, send innocent people trekking off into the wrong areas, and do other things that might come across as simple mistakes. In order to see what is happening you would have to assemble a vast surveillance system that puts all the pieces of the puzzle together.

And as we recently found out with the Edward Snowden scandal, many people quickly become alarmed at the thought of governments watching their emails and phone calls. Never mind the fact that so much data is being collected that no individual can actually use it to personal advantage — people would rather risk being blown up by car bombs than allow their government to look for connections between known terrorists and other people.

Just over two years ago the New York Times published a map showing the countries where Al Qaeda operates; at the time there were more than 20 such nations. The list has grown as Al Qaeda continues to attract new allies, take advantage of civil unrest, and appeal to “self-radicalized” militants. Al Qaeda’s ideology calls for launching a global war by 2020 to impose a worldwide Islamic caliphate. Although many researchers treat this goal with skepticism, they nonetheless caution that, however unrealistic it may be, Al Qaeda continues to kill people and recruit new allies.

And it’s no secret that Al Qaeda’s attempts to take control of Somalia, Yemen, and Mali spurred reluctant European and American interventions to drive back the enemy forces — all without waiting for the gridlocked UN Security Council to pass new resolutions authorizing such interventions. The reality on the ground now is that as long as Russian and Chinese leaders continue to turn a blind eye to Al Qaeda’s political and strategic advances western nations will continue to act “unilaterally” — perhaps fulfilling a desire by the eastern powers to remain preoccupied with Al Qaeda so as to have no resources left to compete with Russia and China in their chosen spheres of influence.

As our political leaders play dangerous games on the chessboard of global politics our commercial leaders are playing an even more dangerous game on the chessboard of consumer sentiment. The demand for robots is there and growing year by year as American Baby Boomers and their counterparts in other wealthy nations grow older and resist the transition to assisted living and retirement homes. We are sacrificing productive jobs to automation in the name of making more money for corporate investors, but as we build more automation into our culture we do nothing to regulate its production, maintenance, and use.

We don’t have to worry about waking up one day to find that SkyNet has awakened — what is more likely to happen is that numerous fringe groups will begin concentrating their efforts on compromising automated systems, especially autonomous ones that can move freely through the civilian population, with the ultimate goal of building up robotic armies that can be activated through remote command-and-control centers.

And if you think that is an unrealistic projection, remember that just this past April a massive swarm of WordPress installations was turned into a botnet by hackers. The infection continues to spread even as compromised sites are cleansed. Robotic armies already exist on the Internet, and their services are sold relatively cheaply to spammers and criminals; the self-styled “Anonymous” activists have been assisted by, or have utilized, botnets in their “protest” attacks on government and corporate websites.

It’s only a matter of time before these tactics are turned loose against autonomous robots, because we are committing one of the greatest follies in human history: we are failing to learn from recent history and therefore dooming ourselves to repeat it. And the next wave of repetition may be more nightmarish than any Cassandra can possibly imagine.
