Terminator-style killer robots could soon lay the groundwork for life-and-death decisions to be made by machines on the battlefield.
A sub-division of the US Marine Corps, known as Sea Mob, is developing artificially intelligent robots capable of collective, autonomous decision-making.
The first prototype test phase is reported to have been carried out in October 2018, with 35 ft inflatable ‘ghost fleet’ boats successfully piloted by AI-enabled hardware off the coast of Virginia.
The machines successfully sensed their surroundings and operated in tandem to position themselves strategically.
It is the first step in developing fully autonomous sea-borne weaponry capable of making life-or-death decisions. The machines would be given free rein to target and fire on the enemy without human input.
On a larger scale, the US Navy is testing a ship dubbed Sea Hunter, which it is hoped will be able to seek out and attack enemy submarines without any input from command and control. It has so far travelled thousands of miles unaided.
Writing in a subcommittee paper on defence, former director of the Strategic Capabilities Office Dr William B. Roper Jr said the Sea Mob/ghost fleet programme would convert “existing vessels into autonomous, collaborative ‘ghost fleets’ and ‘sea mobs’ capable of dangerous missions without putting critical ships at risk.”
“Teams of systems can survive—and even thrive—in contested environments where individuals, alone, would fail,” he told the Senate in 2017.
Although the details are on lockdown, it’s well known that every other branch of the US war machine is researching how to harness the power of autonomous weaponry.
Roper confirmed this includes Avatar, a new Air Force drone technology, and a broader kill-chain communications interface called Third Eye.
The secrecy of the research has not stopped fierce ethical opposition to the project.
Established in 2012, the Campaign to Stop Killer Robots is an organisation of activists working proactively to ban autonomous weapons and “retain meaningful human control over the use of force”.
They are urging people to put pressure on governments to “ban fully autonomous weapons that would select and attack targets without human control”.
They argue: “Fully autonomous weapons would decide who lives and dies, without further human intervention, which crosses a moral threshold. As machines, they would lack the inherently human characteristics such as compassion that are necessary to make complex ethical choices.”
They add that “fully autonomous weapons would make tragic mistakes with unanticipated consequences that could inflame tensions.”
It is not known when the first killer robots will be ready to be let loose on the battlefield.
What is clear, however, is that the future of AI-enabled warfare is not just the stuff of movies; it’s a reality that cannot be ignored.