This is pretty fascinating stuff. Bloomberg reports that the Navy’s experimental drone, the X-47B, has managed to land on a carrier without the help of humans. Yes, you read that right: the drone uses its own computers, communicating with computers on the ship, to take off and land.
The X-47B is what’s known as an “autonomous” system, about halfway between model airplane and fully intelligent robotic killer. It takes off and lands with the aid of computers aboard the carrier (if needed, it can get some fine-tuning from a human on the deck wearing what looks like a bionic forearm). On a mission, the craft flies itself based on directions and commands sent by human controllers to its onboard computer.
Calm down, technophobes. While the drone has some capacity to deviate from its pre-programmed actions if it detects condition changes, it can’t “think” for itself and isn’t going to turn on its human master like HAL 9000 from “2001: A Space Odyssey.”
The X-47B does, however, move us closer to the day when an unmanned combat system may have to make its own call on whether to take lethal action, a scenario that brings with it enough ethical questions to keep a major university philosophy department busy for years.
A few thoughts.
- The Bloomberg article suggests that this gives new life to those arguing in favor of building additional aircraft carriers. That’s probably true, but they needn’t be the same massive systems we now use to launch and retrieve manned aircraft. Remember that these behemoths are large targets, and some consider them indefensible in the event of an all-out attack. If smaller sea-based launching pads are feasible, drones might point the way to a less costly alternative. That could also allow for more ships, so that the loss of one or two is not catastrophic.
- Given the public acknowledgement of advances in drone technology, it’s probably rational to assume that the still-secret advances are substantially greater. I suspect that the ability of these weapons to make genuinely independent decisions is not far away at all and may already be a reality. The ethical questions to which Bloomberg alludes will probably be raised and grappled with only after we learn that our drones are, dare I use the word, “self-aware.”
- Is this the death knell for manned combat aircraft? I think the answer is probably yes, for three reasons. One, you can build cheaper, higher-performance aircraft if you don’t have to provide for the human body. Two, training human pilots is a significant expense, and with less training the wear and tear on equipment is vastly reduced. Three, fighter drones that can defend against enemy air power will inevitably be developed.
- Drones and other robotic armaments represent the only way out of the cost squeeze the defense establishment faces.
Don’t fall into the trap of assuming that the obstacles to fully autonomous weapons are the stuff of science fiction, or that their introduction is decades away. Kevin Drum had a revealing piece in Mother Jones several weeks ago in which he made a strong case for robots being a part of everyday life sometime around 2040. Look at Watson trouncing the Jeopardy champions, or Google’s driverless cars, and realize how quickly artificial intelligence is advancing. Then think about the military and things like GPS and the Internet. The military had those long before they became part of our daily lives, and you can count on robotics following the same path.