Ken Larson

Pentagon Clarifies ‘Confusion’ Around Autonomous Weapons And Artificial Intelligence (AI)


“POLITICO NATIONAL SECURITY DAILY” by MATT BERG and ALEXANDER WARD


With help from Daniel Lippman


“The Defense Department’s original autonomous weapons policy was so unclear that even people inside the Pentagon had a hard time understanding it. Enacted in 2012, Directive 3000.09 was intended to set the record straight on how the department fields and develops autonomous and semi-autonomous weapons systems. It had the opposite effect.


The policy was updated to clear up the confusion. So far, the new version seems to be a welcome change.”

___________________________________________________________________________

“On top of standard department-wide approvals, both the old policy and the revision require review by senior officials before any autonomous weapons that don’t meet specific exemptions, such as being supervised by a person, are developed. The original exemptions list, though, left both officials and experts unsure what was allowed.


“We identified the need for clarification about the initial directive, both inside and outside the Pentagon,” MICHAEL HOROWITZ, director of the DoD’s emerging capabilities policy office, told NatSec Daily. “There was a lot of confusion.”


In a nutshell: Can the United States develop autonomous weapons? Yes. Are stacks of them hidden in the Pentagon’s basement? Probably not.


Because of technology advancements in recent years, the revised policy requires autonomous weapons systems that use artificial intelligence to follow DoD’s AI Ethical Principles policy. Those guidelines outline the design, development, deployment and use of AI.


There’s much overlap between autonomous weapons and AI, so the department had to clarify “how these two policy approaches interacted with each other, and that I think has been helpfully done,” GREGORY ALLEN, director of the AI Governance Project at the Center for Strategic and International Studies, told NatSec Daily.


Also, exemptions from senior review include autonomous weapons that involve a human operator; human-supervised autonomous weapons used for local defense; and autonomous weapons used to apply non-lethal force against targets.


The update also adds an exemption for human-supervised autonomous weapons that defend drones.


“That’s interesting and definitely makes autonomous weapon use much easier in general,” ZAK KALLENBORN, a policy fellow at George Mason University, told NatSec Daily.


For instance, if a drone is operating in enemy territory, almost any weapon could be viewed as defending the platform, Kallenborn said. A robotic dog carrying supplies, he added, could carry a weapon to defend itself without approval.


“If Spot happens to wander near an enemy tank formation, Spot could fight. So long as Spot doesn’t target humans,” Kallenborn said. “Of course, clear offensive uses like turning Spot into a robo-suicide bomber would require approval, but there’s a lot of vagueness there.”


The United States isn’t currently developing autonomous weapons systems, at least not publicly. But the directive lays the groundwork in case such weapons are deemed necessary down the road.


“The mission needs of today and those of tomorrow will be different. So, the policies pertaining to the military capabilities of tomorrow need to keep pace,” Horowitz said.”


