AIR FORCE MAGAZINE
By Amanda Miller
DARPA’s GARD program developed a set of tools to teach developers of artificial intelligence common techniques to defend against attacks on their systems.
For the military to trust commercially sourced or even internally developed artificial intelligence, the technology will have to be defended. Now developers have a set of open-source tools to learn new defensive techniques and to test their products against simulated attacks.
____________________________________________________________________
Techniques to defend already-trained AI algorithms, or models, are as new as the attacks themselves—in other words, “brand new,” said Bruce Draper of the Defense Advanced Research Projects Agency.
In an attack, “the goal is to fool an AI system—make it behave incorrectly,” said Draper, DARPA’s program manager for the newly available set of tools called GARD (short for Guaranteeing AI Robustness against Deception). In an interview, Draper said an attack might trick an AI system into misidentifying faces, for example, or even interfere with AI systems that detect conventional attacks.
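To make that concrete, the sketch below shows the kind of evasion attack Draper describes, built with IBM's open-source Adversarial Robustness Toolbox (discussed later in the article). The toy PyTorch model and the random images are placeholders for illustration only; nothing here comes from GARD itself.

```python
# A hedged sketch of an evasion attack: small, crafted perturbations that make a
# classifier "behave incorrectly." The toy model and random images are stand-ins;
# only the library calls (IBM's Adversarial Robustness Toolbox) are real.
import numpy as np
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Placeholder network standing in for a trained face- or object-recognition model.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 10),
)

# Wrap the model so the attack can query its gradients.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Fast Gradient Method: one of the earliest and simplest evasion attacks.
attack = FastGradientMethod(estimator=classifier, eps=0.1)

x = np.random.rand(16, 1, 28, 28).astype(np.float32)  # placeholder images
x_adv = attack.generate(x=x)

# On a genuinely trained model, many labels flip even though x and x_adv look
# nearly identical to a human observer.
clean_labels = classifier.predict(x).argmax(axis=1)
adv_labels = classifier.predict(x_adv).argmax(axis=1)
print("predictions changed:", int((clean_labels != adv_labels).sum()), "of", len(x))
```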
Just as AI itself is broadly applicable, the team behind GARD hopes a common set of defensive techniques will apply broadly across AI models.
“We’re trying to get the knowledge out so developers can build systems that are defended,” Draper said. That includes the military itself, other parts of the government, and the private sector. A related program will address the military more specifically. When utility companies upgrade their networks to protect against cyberattacks, for example, AI will likely factor in.
DARPA brought together researchers from IBM, Two Six Technologies, MITRE Corp., the University of Chicago, and Google Research to assemble the elements of GARD:
Building on the Adversarial Robustness Toolbox, a pre-existing open-source library of tools and techniques from IBM, GARD “made it better,” Draper said. (A defensive technique built with that library is sketched after this list.)
Google Research contributed a self-study repository with so-called “test dummies” to teach developers common approaches to defensive AI.
Data sets in GARD’s APRICOT collection (Adversarial Patches Rearranged In COnText) help developers practice attacking and defending their own systems, for example by altering what an AI system observes in its environment. (A simplified version of that kind of in-context check is also sketched below.)
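As a rough illustration of how a developer might use that toolbox to build “systems that are defended,” the sketch below applies one widely used defense, adversarial training, in which attack-generated examples are mixed back into training. The model, data, and hyperparameters are assumed placeholders, not GARD recommendations.

```python
# A hedged sketch of one common defense available through IBM's Adversarial
# Robustness Toolbox: adversarial training, i.e. retraining on attack-generated
# examples. Model, data, and hyperparameters are illustrative placeholders.
import numpy as np
import torch
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import ProjectedGradientDescent
from art.defences.trainer import AdversarialTrainer

# Placeholder classifier; a real system would wrap its production model instead.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Placeholder training data (random); substitute real, labeled images.
x_train = np.random.rand(256, 1, 28, 28).astype(np.float32)
y_train = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, size=256)]

# The attack used to manufacture hard examples during training.
pgd = ProjectedGradientDescent(classifier, eps=0.1, eps_step=0.02, max_iter=10)

# Train on a 50/50 mix of clean and adversarial examples.
trainer = AdversarialTrainer(classifier, attacks=pgd, ratio=0.5)
trainer.fit(x_train, y_train, nb_epochs=2, batch_size=64)

# The hardened classifier can then be re-evaluated against the same attack.
print(classifier.predict(pgd.generate(x=x_train[:8])).argmax(axis=1))
```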
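And as a much-simplified stand-in for the APRICOT idea of altering what a model observes in its environment, the following sketch pastes a visible patch into each image and checks whether the model's predictions change. A real evaluation would use optimized adversarial patches and the GARD data sets; the patch, model, and images here are placeholders.

```python
# A hedged sketch of a patch-in-context check: place a visible patch in the scene
# and see whether the model's output changes. The patch here is a crude stand-in
# (a fixed white square), not an optimized adversarial patch, and the classifier
# and images are placeholders.
import numpy as np
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(3, 32, 32),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

def apply_patch(images: np.ndarray, patch: np.ndarray, top: int, left: int) -> np.ndarray:
    """Overlay `patch` onto every image at a fixed location (channels-first layout)."""
    patched = images.copy()
    height, width = patch.shape[1], patch.shape[2]
    patched[:, :, top:top + height, left:left + width] = patch
    return patched

x = np.random.rand(8, 3, 32, 32).astype(np.float32)  # placeholder scenes
patch = np.ones((3, 8, 8), dtype=np.float32)          # stand-in 8x8 white patch

clean = classifier.predict(x).argmax(axis=1)
patched = classifier.predict(apply_patch(x, patch, top=2, left=2)).argmax(axis=1)
print("predictions changed by the patch:", int((clean != patched).sum()), "of", len(x))
```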
DARPA isn’t alone in questioning the security of AI.
The Air Force’s cyber policy chief Lt. Gen. Mary F. O’Brien has said that to be effective, AI has to be reliable—troops have to trust it. Speaking of using AI to augment human decision-making, for example, she said: “If our adversary is able to inject any uncertainty into any part of that process, we’re kind of dead in the water.”
Western militaries—already “late to the party” in the creation of AI—risk unforeseen consequences by adopting AI made for the commercial sector, said NATO’s David van Weel.
Meanwhile, the now-concluded National Security Commission on Artificial Intelligence recommended in its 2021 final report that the Defense Department incentivize offices to adopt commercially available AI for business processes. The commission also acknowledged that “commercial firms and researchers have documented attacks that involve evasion, data poisoning, model replication, and exploiting traditional software flaws to deceive, manipulate, compromise, and render AI systems ineffective.”
For now, the defenses of commercially developed AI remain questionable.
“How do you vet that—how do you know if it’s safe?” Draper said. “Our goal is to try to develop these tools so that all systems are safe.”