Image: iStock
NATIONAL DEFENSE MAGAZINE
While the emergence of new technologies often introduces new legal and ethical questions, analysts say artificial intelligence poses unique challenges.
_____________________________________________________________________________
Artificial intelligence is a top modernization priority for the U.S. military, with officials envisioning a wide range of applications, from back-office functions to tactical warfighting scenarios. But the Pentagon faces the daunting challenge of working through a plethora of ethical issues surrounding the technology while staying ahead of advanced adversaries who are pursuing their own capabilities.
Developers are making strides in AI, adding urgency to the department’s efforts to craft new policies for the ethical deployment of the capabilities. In August, heads were turned when an AI agent defeated a seasoned F-16 fighter pilot in a series of simulated combat engagements during the final round of the Defense Advanced Research Projects Agency’s AlphaDogfight Trials. The agent, developed by Heron Systems, went undefeated with a record of 5-0 against the airman, whose call sign was “Banger.”
“It’s a significant moment,” said Peter W. Singer, a strategist and senior fellow at the New America think tank, comparing it to chess master Garry Kasparov losing to IBM’s Deep Blue computer at the complex game.
During the simulated dogfight “the AI shifted [its tactics] and it kept grinding away in different ways at him” until it won, noted Singer, co-author of Ghost Fleet and Burn-In, which examine the military and societal implications of autonomy and artificial intelligence.
Although keen to exploit the benefits of emerging AI capabilities, senior defense officials have repeatedly emphasized the need to adhere to laws and values while mitigating risks.
The challenge is “as much about proving the safety of being able to do it than the capability of being able to do it,” Assistant Secretary of Defense for Acquisition and Sustainment Kevin Fahey said at the National Defense Industrial Association’s Special Operations/Low-Intensity Conflict conference. “We struggled with it policy-wise as much as anything.”
“This is a technology that is increasingly intelligent, ever-changing and increasingly autonomous, doing more and more on its own,” Singer said. “That means that we have two kinds of legal and ethical questions that we’ve really never wrestled with before. The first is machine permissibility. What is the tool allowed to do on its own? The second is machine accountability. Who takes responsibility … for what the tool does on its own?”
Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security and the author of Army of None: Autonomous Weapons and the Future of War, said the laws of armed conflict have long been baked into how the Pentagon incorporates new technology. But artificial intelligence isn’t like standard weapon systems, and it requires more oversight.
“What I think you’ve seen DoD do, which I think is the right step, is say, ‘AI seems to have something different about it,’” Scharre said. “Because of how it changes the relationship with humans and human responsibility for activity, because of some of the features of the technology today and concerns about … reliability and robustness, we need to pay more attention to AI than we normally would to, say, a more advanced missile or some other kinds of technology.”
In February, the Defense Department rolled out a list of five AI ethical principles based on recommendations from the Defense Innovation Board and other experts inside and outside of the government.
Military personnel must exercise appropriate levels of judgment and care while remaining responsible for the development, deployment and use of AI capabilities, according to the list.
The technology should be “equitable,” with steps taken to minimize unintended bias.
It must be traceable: “The department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation,” according to the list.
Systems must also be reliable: “The department’s AI capabilities will have explicit, well-defined uses, and the safety, security and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.”
And finally, they must be governable: “The department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”
The Pentagon’s Joint Artificial Intelligence Center has been tasked with developing the policies to turn the new AI principles into practice, and leading implementation across the department. Its policy recommendations will be delivered to leadership by the end of this year, according to Alka Patel, head of AI ethics policy at JAIC.
Patel and her colleagues are wrestling with a number of problems, including the sheer size of the department’s bureaucracy and the diversity of its force components.
“The DoD is such a large and diverse organization,” she said in an interview. AI ethics need to be integrated across the entire enterprise, “which adds to the complexity and the challenges of how we think about our policy recommendations and the need for making sure that we do it right.”
Policies must provide flexibility for the various agencies implementing them, and keep up with the rapidly evolving and maturing technology, she noted.
JAIC is focused on people, processes and partnerships as it tackles these wide-ranging challenges, she said.
On the personnel front, there is a knowledge gap when it comes to AI and its implications, officials say.
Special Operations Command Acquisition Executive Jim Smith said many officials don’t understand the terminology.
“There’s still a bit of a language barrier for artificial intelligence, machine learning, automation, neural networks,” he said at the SO/LIC conference. “You’ve got to understand all those.”
AI ethics concepts may take some getting used to, Patel noted.
“We need to make sure that … we’re creating or establishing a responsible AI culture,” she said. “That’s not something that we’re all born with. We don’t have that AI ethics bug in our brain. That is something that we need to learn and start creating muscle memory around.”
JAIC launched a Responsible AI Champions pilot program earlier this year to get after the problem, taking a cross-functional cohort of individuals from acquisition, strategy, policy, human resources and other communities across JAIC and putting them through a nine-week “experiential” learning process.
“There is going to be a large learning curve,” Patel said. “There are ethics principles for the DoD, but then the next question is, ‘Well, what does that mean? What does that mean in my role? How do I ensure that I’m actually satisfying these principles?’”
The principles need to be kept in mind throughout the acquisition process and product lifecycle when officials are thinking about how to design, develop, deploy and use AI, she said.
Through the pilot initiative officials were able to gain insights that will inform their policy recommendations, she said. JAIC plans to scale the Responsible AI Champions program across the department over the next year.
The center is also engaging with industry and academia, the organizations that will help design and build the systems, as it develops policies, including through requests for information.
“We have … created an expectation of having our respondents provide what their responsible AI strategies might look like and how they would address the DoD AI ethics principles,” Patel said. Responses will help identify gaps and inform how JAIC approaches implementation strategies, requirements and contracting.
JAIC is also engaging other nations as it works through policy issues. The center in September hosted the first-ever AI Partnership for Defense with military officials from 13 countries. The two-day meeting focused on ethical principles and best practices for implementation.
Singer said artificial intelligence tech shouldn’t be acquired in the same way that the Pentagon acquires uniforms or traditional weapons platforms. Very different processes are needed.
JAIC ethics personnel are looking at the research, development, test and evaluation enterprise to figure out the best way to approach that.
“We’re currently working with RDT&E folks in terms of thinking through how we can integrate the ethics aspects in their test harness” for software and other technology, Patel said. “We’re looking at the testing aspects, the algorithmic aspects, the system integration, and then the human-machine teaming aspects. … All of those pieces are critical aspects or potential areas for us to embed and engage in from a responsible AI perspective.”
Artificial intelligence must work as intended, or it could cause harm and users won’t trust it. One issue that could undermine trust is known as algorithmic bias.
“Algorithmic bias is basically when either [the system] was trained in the wrong way for a scenario that it was applied to, or it was provided biased data of some kind,” Singer explained.
For example, in the civilian world there was a case where an artificial intelligence tool was used to aid in the treatment of heart disease, but it was providing bad medical advice for African Americans. “No one told that AI, ‘You be racist,’” Singer said. “But it was, because of the way it was trained in the data.”
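To make the mechanism Singer describes concrete, a minimal sketch follows. It is not drawn from the article or any real medical system; the data is synthetic and the scenario is hypothetical. It shows how a model trained on data that under-represents one group, and in which the underlying pattern differs for that group, can perform markedly worse for that group even though nothing explicitly instructed it to discriminate.

```python
# Minimal illustration of algorithmic bias from skewed training data.
# All data here is synthetic and the scenario is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, group):
    """Generate synthetic 'patients': one feature predicts outcome,
    but the feature-to-outcome relationship differs by group."""
    x = rng.normal(size=(n, 1))
    slope = 1.0 if group == 0 else -1.0  # the pattern is reversed for group 1
    y = (slope * x[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return x, y

# Training data is dominated by group 0; group 1 is barely represented.
x0, y0 = make_patients(950, group=0)
x1, y1 = make_patients(50, group=1)
model = LogisticRegression().fit(np.vstack([x0, x1]), np.concatenate([y0, y1]))

# Evaluate on balanced held-out sets: accuracy collapses for the
# under-represented group, even though no one "told" the model to be biased.
for group in (0, 1):
    xt, yt = make_patients(1000, group)
    print(f"group {group} accuracy: {model.score(xt, yt):.2f}")
```

Run as written, the model scores well on the majority group and far worse on the minority group, which is the kind of silent failure Singer points to: the bias comes from how the system was trained and what data it saw, not from any explicit instruction.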
Scharre noted that AI is also vulnerable to hacking or spoofing attacks that could corrupt data or cause other problems.
Patel said trust in the technology is a key concern for JAIC.
“When you think about it perhaps from a technical perspective of trust, whether it’s trustworthiness, whether it’s explainability or interpretability, what does that really mean? What does it mean for each system?” she said. JAIC officials are looking at how to define and demonstrate those elements.
Scharre said AI is a “very alien” form of intelligence, which creates challenges in mitigating risk.
“The problem is that you can have machines that are much better than humans and safer in some settings, and then because their intelligence profile is different … there could be other environmental conditions in which their intelligence is quite poor, and they’re going to make mistakes that humans would never make,” he said. “They go from super smart to super dumb in an instant and quickly turn dangerously lethal.”
An example would be the accidents that have occurred with self-driving cars, which in some cases have failed to recognize critical features in their operating environment, with fatal results.
“The car doesn’t understand that it is driving into a concrete barrier at 70 miles per hour. It just has no awareness of what it’s doing,” Scharre said. “Those kinds of risks are potentially very dangerous in the military context.”
The Pentagon will have to continually make decisions about which tasks to delegate to machines, he noted. A key question will be where humans will be in the decision-making loop.
While drone strikes during the post-9/11 wars have been conducted by human trigger-pullers, servicemembers may have a more hands-off management role in the employment of lethal systems in the future, officials have indicated. For example, the Air Force is developing autonomous, robotic wingmen to accompany manned fighter jets into battle.
But giving commanders oversight of robotic systems doesn’t guarantee there won’t be problems.
“Even if humans are in the loop, you could still get really faulty outcomes,” Scharre said. “You can even imagine intelligent decision aids that keep the human in the loop being problematic in a variety of ways” if they are biased or have faulty data.
Delegating the most consequential tasks to machines, such as nuclear command-and-control, would be “crazy,” Scharre said.
What is the likelihood that an AI-enabled robot will make a mistake and attack unintended targets?
It’s impossible to say at this point, Singer said, but it will happen eventually.
“Just because you have Moore’s Law doesn’t mean that Murphy’s Law is over,” he said. “You now have these new questions of accountability that you didn’t have before.”
Patel noted that implementation policies will need to ensure that the technology has disengagement mechanisms in place in case something goes wrong.
“In many cases when we think about implementation of those principles themselves, they really speak to good engineering practices in terms of capability, in terms of reliability, in terms of governability,” she said.
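In engineering terms, the “governability” principle and the disengagement mechanisms Patel mentions can be read as a watchdog wrapped around an AI component. The sketch below is purely illustrative; the class name, the confidence-based trigger and the thresholds are hypothetical choices, not drawn from any DoD system or JAIC policy.

```python
# Hypothetical sketch of a "governability" safeguard: a wrapper that
# disengages an AI component when its behavior drifts outside defined bounds.
# Names and thresholds are illustrative only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class GovernedModel:
    model: Callable[[list], float]        # the underlying AI component
    confidence_floor: float = 0.6         # below this, an output is not trusted
    max_low_confidence_streak: int = 3    # consecutive bad outputs before disengaging
    engaged: bool = True
    _streak: int = field(default=0, repr=False)

    def predict(self, features: list):
        if not self.engaged:
            return None  # control has already been handed back to a human operator
        confidence = self.model(features)
        if confidence < self.confidence_floor:
            self._streak += 1
            if self._streak >= self.max_low_confidence_streak:
                self.disengage()
            return None  # defer this individual decision to a human
        self._streak = 0
        return confidence

    def disengage(self):
        """Deactivate the component when it demonstrates unintended behavior."""
        self.engaged = False
        print("Model disengaged: unintended behavior detected; human operator takes over.")
```

The design choice here mirrors the principle’s wording: the system keeps fulfilling its intended function while it behaves as expected, defers individual low-confidence decisions to a person, and shuts itself off entirely once a pattern of unintended behavior appears.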
Science fiction has often portrayed robots and other AI systems in a negative light, such as the Terminator turning on humans and taking over. Those fears unfortunately overshadow many conversations about AI, Singer said, and the technology is nowhere near that level of capability.
Experts say the world won’t be overrun by godless killing machines anytime soon. There are more pressing concerns, such as working through shortcomings in artificial intelligence and machine learning, and figuring out how best to conduct human-machine teaming.
“Maybe one day we’ll have to figure out whether to salute or fight our metal masters, but in your and my lifetime, that is not the question” that people need to be addressing, Singer said.
Patel noted that the nation is a long way off from having “general AI” that can think and learn as well as humans can.
“We’re still at the basic stages,” she said. “We’re focused more on thinking about narrow AI in applications” such as back-office functions and logistics operations.
JAIC’s AI ethics policy recommendations to be released later this year are expected to cover a wide range of issues, from development of systems to tactical use cases. But policies will change over time.
“We will at least provide the first layer of infrastructure or the first layer of framework, recognizing that … we’re going to go back and reevaluate, readjust, pivot as necessary, iterate on it and add to it,” Patel said.
Singer said the Pentagon might someday loosen the restrictions it has placed on using artificial intelligence. An historical analogy is the U.S. Navy’s embrace of unrestricted submarine warfare in World War II. Moral opposition to the German use of the tactic against civilian vessels was a catalyst for the U.S. entry into World War I, he noted. But just a few hours after the Pearl Harbor attack in 1941, the order went out to wage unrestricted submarine warfare against Japan.
“We changed our mind,” Singer said. “Why? Because we were losing and we were pissed off. And so there might be certain limitations that we’ve placed upon ourselves that if you change the context, we might remove those limitations” on using AI-enabled systems.