As artificial intelligence continues to revolutionize warfare, the US Department of Defense released a plan in July called the Data, Analytics, and Artificial Intelligence Adoption Strategy, which aims to enable DOD and military decision-makers to use data, analytics and AI to achieve their objectives.
The strategy envisages leveraging high-quality data, analytics and AI to make rapid, well-informed decisions to address operational problems. It stresses that agile, user-focused development is essential for achieving these outcomes. It states that the DOD will adopt a continuous cycle of iteration, innovation, and improvement to ensure stability, security, and ethical use of technology.
In terms of goals, the strategy aims to strengthen governance and remove policy barriers; deliver capabilities for joint warfighting impact; improve foundational data management; invest in interoperable, federated infrastructure; advance the data, analytics, and AI ecosystem; and expand digital talent management.
The strategy states that the DOD prioritizes data as a strategic asset, adopting open architectures and a decentralized approach for better data management. It says the DOD is using an agile approach to improve data quality and employ analytics and AI to identify constraints and fill capability gaps proactively. The strategy says that to strengthen governance, the DOD will adopt open standards and robust cybersecurity.
It also says the DOD is committed to expanding and enhancing secure, interoperable infrastructure that supports data, analytics, and AI capabilities. It adds that the DOD will foster collaboration with various stakeholders and adopt a strategic “adopt-buy-create” approach to ensure rapid and responsible deployment of advanced technologies.
The strategy says that to enhance its workforce capabilities, the DOD will upskill and reskill while reforming talent acquisition and retention strategies to attract private-sector expertise and build a culture of innovation.
For implementation, the strategy tasks the Chief Digital and Artificial Intelligence Office (CDAO) with spearheading execution, coordinating with DOD components via the CDAO Council, reporting to senior leadership, and facilitating an annual review and the sharing of insights across the DOD.
Further, it states that DOD components will customize their implementation of the strategy to their unique data maturity levels, missions and legal authorities, designating responsible teams. At the same time, it says the CDAO will provide additional guidance and collaborate on performance measures.
The strategy says the DOD will integrate data, analytics, and AI technologies across its functions, adopting flexible resourcing and assessment tools for swift, iterative delivery and user-driven improvements while coordinating on strategy and managing security and ethical risks.
Ethics questions
However, there may be tension between the strategy and the United States' stated principles for AI, and between those principles and how they are implemented in practice.
In an article this month for Breaking Defense, Deputy Secretary of Defense Kathleen Hicks stressed that the US does not employ AI for censorship or repression but adheres to a value-driven, responsible AI policy that leverages the talent of its people. She added that the US aims to lead AI innovation while carefully weighing the national implications of the technology.
However, Craig Martell, the DOD's chief digital and artificial intelligence officer, says the focus on developing a centralized artificial intelligence/machine learning (AI/ML) pipeline was sensible in 2018 but had become unnecessary by 2022, as major vendors began offering robust machine learning operations (MLOps) pipelines. That shift, he says, led to a policy change allowing individual components to select their own pipelines, subject to compliance with monitoring, evaluation and data-management standards.
In line with that, Hicks pointed out that while most commercially available systems powered by large language models do not yet meet the ethical AI standards required for responsible operational deployment, the DOD has identified more than 180 potential applications where AI can be beneficial under supervision, such as software debugging and accelerating battle damage evaluations. Many of these applications, she noted, are practical rather than hypothetical.
In military terms, Martell noted that the Pentagon will implement shareability, accessibility and discoverability standards across the different branches of the US military to improve command and control capabilities through the AI and Data Acceleration initiative and Combined Joint All-Domain Command and Control (CJADC2), with an unconventional implementation plan to come.
The increasing use of AI for military applications has profound implications for warfare. In an article last month for Foreign Affairs, Michele Flournoy notes some of the military applications wherein AI has played a critical role.
She mentions examples such as predicting program and budget changes for sophisticated weapons systems such as the F-35 and identifying behavior patterns that removed Russia’s element of surprise in its February 2022 invasion of Ukraine.
Flournoy says AI could help the intelligence community predict Chinese policies and aid military operations by enabling efficient information flow and control of unmanned systems in conflicts. She notes that combining manned and unmanned systems could give the US an advantage over China in a conflict over Taiwan.
Flournoy acknowledges that AI can offer advantages such as quicker decision-making and improved information. Still, she warns that if AI implementation for military use is not carefully regulated, it could cause harm, and that oversight is needed to ensure its responsible use.
As for the ethical implications of AI in warfare, Jeremy Davis, in a September 2021 lecture at the Naval Postgraduate School, asks what can justify using AI to make decisions for killing. Davis stresses that the ethical question is whether AI provides sufficient evidence to justify its use for such purposes rather than just providing new information.
Davis notes that algorithmic systems are opaque, difficult to explain and hard to audit, which can leave decisions resting on inaccurate data, and that their iterative processes can corrupt data and compound errors. He argues that predictive algorithms generate insufficient evidence for an evidence-relative justification to kill, even in cases where killing might be fact-relatively justified.
Amid the race for a lead in military AI technology and concern over its ethical implications, the Bletchley Declaration was signed this month by 29 parties, including the US, the UK and China. The declaration acknowledges the dangers that advanced AI models pose and stresses the importance of international cooperation in mitigating those risks, making it the first global statement on regulating the development of AI.