The Army wants its leaders to tout the use of generative artificial intelligence to the rank and file as a means to make work easier for soldiers, according to a new memo, even as other services have been hesitant to approve those tools for regular use.
The service, not typically known for embracing the bleeding edge of new technology, appears to be the first military branch to encourage the use of commercial AI such as ChatGPT, though troops may already be leaning on it to write memos, award recommendations and, most notably, complete evaluations, among other time-consuming administrative tasks.
But services such as the Space Force and Navy have urged caution or outright barred the tools, citing security concerns, even as AI has swept through the internet and consumer technology in the U.S. and around the world, promising to automate many tasks that until now have been performed only by people.
"Commanders and senior leaders should encourage the use of Gen AI tools for their appropriate use cases," Leonel Garciga, the Army's chief information officer, wrote in a memo to the force June 27.
Garciga wrote that the tools offer "unique and exciting opportunities" for the service, but he also stressed that commanders need to be cognizant of how their troops are using the tools and ensure that their use remains limited to unclassified information.
Artificial intelligence, once the realm of science fiction, became widely available to the public in 2021 with a program that could generate pictures from text prompts. Generative AI has continued to advance, with new programs such as ChatGPT springing up, and can now produce not only images but also text and video from simple commands or requests.
The Defense Department is heavily invested in AI technologies that some believe may be critical in future conflicts. But the military has wrestled with the question of how much troops should use commercial AI tools such as Google Gemini, DALL-E and ChatGPT.
"The Army seems ahead in adopting this technology," said Jacquelyn Schneider, a fellow at the Hoover Institution, whose research has focused on technology and national security.
Generative AI can potentially be used for wargaming and planning complex missions. In Ukraine, AI is already being used on the battlefield, where the war has set off a kind of Silicon Valley tech rush toward autonomous weapons.
AI use also carries cybersecurity risks for the military: the data troops enter into commercial tools can be used to train them, effectively becoming part of the AI's lexicon.
But for the rank and file, its use would mostly be mundane and practical -- writing emails, memos and evaluations. Much of the unclassified information used for administrative purposes, particularly evaluations, likely wouldn't pose a security threat.
"For something like performance evaluations, they probably don't have a lot of strategic use for an adversary; we may actually seem more capable than we are," Schneider added, referring to how the evaluations can bolster a service member's record with inflated metrics.
However, the Space Force in September paused the use of AI tools, effectively saying that security risks still needed to be evaluated. Before that, Jane Rathbun, the Navy's chief information officer, said in a memo to the sea service that generative AI has "inherent security vulnerabilities," adding that they "warrant a cautious approach."
The Pentagon and the services now appear to be divided on the use of generative AI, with two ideas being true at once: Those tools come with cybersecurity risks, and the quick and widespread adoption among the public means they're here to stay.
Last year, the Pentagon stood up Task Force Lima to employ generative AI in the services and assess its risks.
"As we navigate the transformative power of generative AI, our focus remains steadfast on ensuring national security, minimizing risks, and responsibly integrating these technologies," Deputy Secretary of Defense Kathleen Hicks said last year when announcing the establishment of the task force.
The Army has yet to develop clear policy and guardrails for AI use, a process that could still be years away and would likely follow the guidance of the Pentagon's AI task force. Developing those guardrails is further complicated as the technology continues to evolve.
"It would be interesting to see what the limits are," Schneider said. "What are the missions that it's still too risky to use generative AI? Where do they think the line is?"
The service has already used AI to write press releases meant to communicate its operations to the public, typically through journalists -- a practice that raises ethical questions for news outlets about whether AI-generated communications are acceptable.
"With governmental sources, the potential to dodge accountability also worries me," Sarah Scire, deputy editor for the Nieman Lab, which covers the journalism industry, told Military.com. "If the AI-produced press releases or posts contain lies or falsehoods -- also sometimes known as hallucinations -- who is responsible?"