The latest updates to U.S. federal agencies' AI use case inventories reveal significant progress alongside ongoing challenges. These updates, released at the end of 2024, show a sharp rise in reported AI use: about 1,400 more use cases than in 2023, a roughly 200% increase.
Federal agencies' commitment to transparency can be traced back to President Trump's Executive Order 13960 of December 2020 and the bipartisan Advancing American AI Act of 2022. Both directives require agencies to report and publish their AI use cases annually, a requirement reinforced by guidance from the Office of Management and Budget (OMB).
An important development is the creation of a centralized, accessible repository for all agency inventories, which addresses earlier accessibility problems. Reported use cases rose from 710 in 2023 to 2,133 in 2024. The surge reflects improved reporting guidelines under President Biden and possibly greater AI adoption, but the sheer volume also risks obscuring critical insights about impacts on rights and safety.
The inventories now offer more detailed information on risks and governance, such as data use and risk management practices. However, this information is inconsistent across agencies, which limits the inventories' utility.
The 2024 updates reflect both existing and emerging trends in federal AI use. National security, veterans' healthcare, and chatbots remain prevalent. Notably, several agencies, including the Department of Commerce and the Department of Health and Human Services, reported using large language models and generative AI to analyze and process data and to manage public input and information requests.
The AI systems categorized as high-risk include those used for law enforcement and national security, public benefits administration, and health and human services. The Department of Justice and the Department of Homeland Security reported numerous high-risk AI applications, but many entries lacked critical details on risk mitigation and governance. That absence is particularly concerning given these technologies' potential to affect rights and safety.
Notably, the Department of Homeland Security's inventory omitted facial recognition's connection to DMV databases. The Social Security Administration and the Department of Veterans Affairs reported AI tools for managing public benefits programs, which are also classified as high-risk.
Despite this progress, agencies still struggle with inconsistent documentation and provide little detail on compliance with risk management requirements. For instance, only a few agencies explained their risk management waivers and extensions.
OMB has an opportunity to resolve these inconsistencies by strengthening its guidance for the 2025 inventory updates; it plans to issue detailed instructions clarifying agencies' obligations.
These inventories are vital transparency tools, fostering accountability. State and local governments are also adopting similar measures, with 12 states already implementing AI inventory requirements.
In conclusion, AI use case inventories should not just document AI use and governance; they should build public trust. With detailed, robust, and timely updates, these inventories can fulfill their transparency goals.