Tech giants have been wrestling with the tension between privacy rights and the U.S. Department of Defense’s use of artificial intelligence, particularly in regard to the department's notorious Project Maven imaging initiative. The issue has spilled over into areas of employee relations and brand reputation, too.
The department finally issued a set of ethical principles for AI last month, but it remains to be seen if the new guidelines provide reassurance for tech companies, their employees and their customers.
The Department of Defense and artificial intelligence
DoD released its new ethical guidelines last month, without much notice from the media outside of defense specialty publications.
The quiet reception was a bit surprising, considering the attention surrounding tech giants, AI, DoD and privacy rights over the past two years.
One reason for the media silence could be that ethical principles for AI are, in some ways, the least of DoD’s worries right now. Last year, the RAND Corp. issued a scathing analysis of the current state of artificial intelligence use in the Defense Department and found a series of critical shortcomings.
The authors of the report did not mince words. In a summary, they wrote that DoD’s use of data and AI training was “fragile, artisanal and … optimized for commercial rather than DoD uses.” They also found that validation and testing procedures were “nowhere close to ensuring the performance and safety of AI applications" and that “DoD's posture in AI is significantly challenged across all dimensions.”
Among the items in a long list of deep, systematic faults, the authors noted that the agency’s newly established Joint Artificial Intelligence Center (JAIC) “lacks visibility and authorities to carry out its present role.”
“It also lacks a five-year strategic road map, and a precise objective allowing it to formulate one,” the authors concluded.
An ethical solution to systematic problems
In this context, the Defense Department’s newly articulated ethical principles simply paper over a much deeper problem.
Nevertheless, they do establish at least a minimal layer of order and predictability that is fairly consistent with corporate social responsibility principles. The new principles also rely on longstanding guideposts including the Constitution, the Law of War and various international treaties.
DoD organizes its 12 new guidelines into five areas. Three involve the personal responsibility of DoD employees over artificial intelligence systems, in terms of exercising appropriate judgment, minimizing unintended bias, and focusing on accountability and transparency regarding data collection, methods and sources.
A fourth focus area commits DoD to deploy only AI systems with specifically defined functions that are tested and assured throughout their lifecycle. The department also committed to fail-safe systems aimed at preventing machines from running amok.
“The Department will design and engineer AI capabilities to fulfill their intended functions,” the summary states, “while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”
Who is steering the AI ship?
That’s a good start, but bare-bones guidelines may not be enough to reassure tech hardware companies like Apple, which need to convince consumers that their personal devices protect their privacy.
A deeper dive into the issue reveals additional cause for caution. The guidelines were developed through the Defense Innovation Board (DIB) following a series of public meetings and listening sessions last year.
The 16-member DIB launched during the Barack Obama administration as a means of connecting the Defense Department with academia and entrepreneurs. It is chaired by Eric Schmidt, the former chair of Google and Alphabet and currently a technical advisor to Alphabet, whose company has found itself clashing with the Donald Trump administration over human rights issues. Google's vice president for wireless services also represents the company on the board.
That tension suggests the DIB maintains at least some political independence. However, there is a potential conflict of interest involving at least one other corporate member, Microsoft.
Last year, Microsoft beat out Amazon for the Defense Department’s massive JEDI cloud computing contract. Amazon does not have a DIB seat, which is not surprising given the ongoing feud between President Trump and Amazon CEO Jeff Bezos.
Another factor to consider is the newly formed American AI Initiative, which launched last year under an executive order. The new initiative is a “concerted effort to promote and protect national AI technology and innovation,” with action items like "remove barriers to AI innovation" and "train an AI-ready workforce."
Compared to this enthusiastic, aggressive pursuit of AI, the ethical principles recommended by the Defense Innovation Board seem like pretty weak tea.
In that regard, it is worth noting that one person instrumental in promoting the American AI Initiative is Michael Kratsios, the Trump administration’s chief technology officer.
Kratsios is among several administration figures with ties to the leading government contractor Palantir, a company that has become notorious for the use of its machine learning and AI technology in enforcing U.S. immigration policy.
The takeaway: Tech companies may be wise not to take ethical assurances from DIB as a green light, and instead carefully consider employee and consumer concerns before moving forward on artificial intelligence contracts with the Department of Defense.
Image credits: Department of Defense
Tina writes frequently for TriplePundit and other websites, with a focus on military, government and corporate sustainability, clean tech research and emerging energy technologies. She is a former Deputy Director of Public Affairs of the New York City Department of Environmental Protection, and author of books and articles on recycling and other conservation themes.