
DOD AI Ethical Principles Offer Strength, Opportunity


NATIONAL DEFENSE MAGAZINE, By Elliot Seckler


The U.S. Department of Defense has adopted five AI ethical principles: responsibility, equitability, traceability, reliability and governability. The adoption follows the recommendation of the Defense Innovation Board's October 2019 report on the ethical use of artificial intelligence.


Now, the Joint Artificial Intelligence Center is tasked with leading the implementation of these ethical principles. It has since issued its first request for proposals, the Joint Warfighting National Mission Initiative, which seeks partners “to design, develop and deploy [AI] technologies for the DoD.”

______________________________________________________________________________

“Artificial intelligence will be the future, and the department … must readily and effectively adopt its best practices if the [U.S.] wants to maintain its superpower status,” says Brian Schimpf, CEO and cofounder of Anduril Industries — a defense technology company.

But how the Defense Department merges ethics and artificial intelligence with American values will have a profound impact on future military operations, industry and the acquisitions process.


Engagement with industry will be essential to operationalizing the principles and establishing the evaluative criteria needed to meet them.


It has been only months since the department adopted these ethical principles, and uncertainty remains: much can change in how they will alter the department's acquisition, procurement and contract processes, especially with new leadership taking up residence in the Pentagon. Specific programs the JAIC may create moving forward could dramatically affect how and what kinds of defense technologies or systems companies can and will provide to the department.


Transitioning ethical principles from policy to practice is key. In some ways, defense companies have already internalized the AI ethical principles and begun to incorporate them into how they develop and think about capabilities.


Elbit Systems of America, a maker of unmanned aerial systems, has taken AI ethics to heart. According to Scott Baum, the company’s vice president of strategy and growth and former principal director of the Pentagon’s office of industrial policy, integrating AI ethics “is what you do as being a good custodian and participant in the industrial base.”

He notes that “the most significant consideration industry must now take into account at the start of any design approach to developing AI capabilities is the attention paid to testing, simulation and training.”


Ensuring the algorithms and data sets backed by machine learning “remain reliable and traceable from the beginning of the acquisition process is fundamental to the DoD’s ethical principles,” he states. And since industry will be required to train defense personnel on their AI capabilities, companies must now view training their AI systems as part of their development cycle. This could fundamentally alter the acquisition cycle.

The adoption of AI ethical principles marks an important inflection point for the U.S. military and industry. But the fact that competitor governments in China and Russia have not incorporated similar ethics into the way their militaries operate may present challenges.


Megan Lamberth, a member of the technology and national security program at the Center for a New American Security, believes integrating ethical principles into the acquisitions process will “require tremendous speed and scale in fielding capabilities commensurate with U.S. competitors.” But she also acknowledges that the “implementation process will take time and will certainly involve changes to the DoD’s acquisition and procurement processes.”


According to Dr. Michael Horowitz, the director of Perry World House at the University of Pennsylvania, “incorporating AI principles into the defense acquisitions process should not be viewed as a constraint either for industry or U.S. military readiness.”

In fact, “AI ethics principles help reaffirm our values in a way that could strengthen our ability to develop and field capabilities with machine learning built in.” For Horowitz, “keeping human judgment at the core of decisions regarding the use of force is a strength. It allows the U.S. to maximize what people and machines each do best.”


Anduril Industries also believes strongly in AI ethical principles. According to Schimpf, “the United States needs to have the strongest seat at the table when the world is establishing ethical norms around AI and machine learning. We believe it is important that an American company helps the U.S. compete for the technological edge that will allow the U.S. to dictate ethical norms and standards instead of China or Russia.”

He also believes that creating the right kinds of incentives during the procurement process would allow the Defense Department to field and quickly integrate the best capabilities and solutions required to meet national security challenges.


Currently, “what is holding the U.S. back is the flawed procurement process that does not incentivize important innovation in AI and machine learning.” One way to address this issue might be to award longer-term contracts to successful pilot programs that also meet the ethical principles. This may also increase the number of new entrants into defense programs.


Some uncertainty remains about how industry will need to respond to unforeseen changes and new requirements as it integrates the principles.

One thing does remain clear: The next few years will be critical for the Pentagon, industry and the defense acquisitions process. U.S. national security may depend on the outcome.



ABOUT THE AUTHOR: Elliot Seckler is a junior fellow at the National Defense Industrial Association.



