Federal AI Use Jumps to 3,600 Cases, Brookings Finds

Brookings: federal AI use rose to more than 3,600 cases across 41 agencies in 2025; talent shortages, procurement rules and public skepticism, with just 17% of Americans expecting a positive impact from AI, limit wider deployment.

A Brookings Institution report found federal use of artificial intelligence rose to more than 3,600 documented cases across 41 agencies in 2025. The total was 69% higher than in 2024 and about five times the number reported in 2023. The analysis draws on agency AI inventories from 2023 to 2025, federal job postings, Office of Management and Budget guidance and interviews with current and former federal technologists.

The growth is concentrated in large agencies. For three consecutive years, five large agencies accounted for more than half of reported AI use cases. Large agencies made up 76% of the total inventory in 2025. Eleven small agencies that reported in 2025 together listed 60 use cases, about 2% of the total.

Agency applications span a range of functions. More than half of the Social Security Administration’s reported cases support service delivery and benefits processing. Over half of the Department of Justice’s reported cases are tied to law enforcement work. Nearly 60% of all federal AI projects in 2025 were in pilot or pre-deployment stages.

Workforce capacity is a recurring constraint. Of more than 56,000 federal technical job postings since 2016, just over 1,600 (fewer than 3%) explicitly mentioned AI capabilities. The report notes a Biden-era hiring push to build AI skills and also documents workforce reductions in early 2025. At least 25% of AI-specific job postings were created in 2024 or later.

The report identifies cultural and policy barriers. It describes a risk-averse culture inside many agencies and cites administration signals linking AI deployment to workforce reductions via the Department of Government Efficiency. Procurement rules written for slower-moving software are described as ill-suited to acquiring, updating and auditing AI systems that vendors iterate quickly.

Accountability and transparency gaps persist. More than 85% of high-impact deployed AI use cases in 2025 lacked some required information about risk mitigation, despite OMB requirements intended to ensure safety and oversight.

Public skepticism is pronounced. The report cites survey data showing roughly half of Americans are more concerned than excited about AI’s growing role, and just 17% believe AI will have a positive impact on the United States over the next two decades. Separate polling in the report shows about 16% of Americans trust Washington to do the right thing most or nearly all the time.

The Brookings authors recommend expanding AI literacy and training across agencies, reforming procurement rules to account for rapidly evolving models, strengthening transparency practices around high-risk systems, and prioritizing projects that deliver clear, tangible benefits to citizens. The report calls for dedicated time and resources for experimentation and training, along with efforts to strengthen workforce pipelines and update acquisition rules to support broader deployment of safe, accountable AI across government.
