The Trump Administration, alongside the Department of Government Efficiency (DOGE), has been utilizing artificial intelligence (AI) to tackle fraud, waste, and abuse within the federal government. While AI's ability to identify patterns on a large scale seems efficient and promising, concerns have arisen about its effectiveness when not applied with precision.
The administration's approach has been compared to using a sledgehammer instead of a flyswatter: collecting vast amounts of data without fully accounting for context or for errors in historical records. That bluntness produces inaccuracies, such as Elon Musk's debunked claim that people over 150 years old were receiving Social Security benefits, and missteps like DOGE's cut to funding for health research on 9/11 emergency responders, which the CDC later restored.
Understanding fraud, waste, and abuse is crucial for effectively addressing these issues with AI. Fraud involves dishonesty for personal gain; waste refers to unnecessary use of resources; and abuse is the misuse of power or authority. Each requires tailored solutions.
Government agencies have developed more targeted tools akin to flyswatters rather than sledgehammers. These include context-specific data analytics, natural language processing (NLP), and targeted anomaly detection.
Context-specific data analytics applies AI to large datasets within a well-defined domain to identify patterns and trends. The USDA, for instance, combines AI with geographic information system (GIS) data and satellite imagery to predict crop yields and prevent crop insurance fraud.
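The technique can be illustrated with a minimal sketch, not USDA's actual pipeline: train a simple yield model on remote-sensing features, then flag claims whose reported yields fall far below what the imagery predicts. The data, features, and tolerance threshold below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: per-field vegetation index (NDVI) and seasonal
# rainfall versus historically observed yield (bushels/acre).
ndvi      = np.array([0.62, 0.71, 0.55, 0.80, 0.47, 0.68])
rain_in   = np.array([18.0, 22.5, 15.2, 25.1, 12.3, 20.0])
yield_obs = np.array([145.0, 168.0, 120.0, 190.0, 95.0, 160.0])

model = LinearRegression().fit(np.column_stack([ndvi, rain_in]), yield_obs)

def flag_claim(field_ndvi, field_rain, claimed_yield, tolerance=0.25):
    """Flag a claim when the claimed yield falls far below the imagery-based prediction."""
    predicted = model.predict(np.array([[field_ndvi, field_rain]]))[0]
    shortfall = (predicted - claimed_yield) / predicted
    return shortfall > tolerance, predicted

# A field with healthy vegetation and ample rainfall but a very low claimed
# yield gets routed to a human adjuster for review.
suspicious, predicted = flag_claim(0.74, 23.0, claimed_yield=60.0)
print(f"Predicted ~{predicted:.0f} bu/acre; flag for review: {suspicious}")
```

The point of the narrow scope is visible even in this toy version: the model answers one question (does the claim match what the satellite sees?) rather than trawling the entire dataset for anything unusual.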
NLP is employed in grant application processes to detect duplicate applications or match them with suitable reviewers. This helps prevent both fraud and waste by ensuring resources are allocated efficiently.
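A hedged sketch of how duplicate screening might work with off-the-shelf NLP tooling: vectorize application abstracts with TF-IDF and flag pairs whose cosine similarity exceeds a cutoff. The application IDs, abstract text, and threshold are invented for illustration; the agencies' actual models are not public.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical grant-application abstracts.
abstracts = {
    "APP-001": "We propose a longitudinal study of respiratory illness in first responders.",
    "APP-002": "A longitudinal study of respiratory illness among first responders is proposed.",
    "APP-003": "This project develops drought-tolerant wheat varieties for dryland farms.",
}

ids = list(abstracts)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts.values())
similarity = cosine_similarity(tfidf)

THRESHOLD = 0.6  # hypothetical cutoff for "possible duplicate"
for i in range(len(ids)):
    for j in range(i + 1, len(ids)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Possible duplicate: {ids[i]} / {ids[j]} "
                  f"(cosine similarity {similarity[i, j]:.2f})")
```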
Targeted anomaly detection identifies deviations from expected patterns within large datasets. Financial institutions, for example, use AI models trained on past illegal activity to flag Suspicious Activity Reports (SARs) and prioritize investigations more effectively.
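A rough sketch of the prioritization step follows, using an unsupervised isolation forest rather than a model trained on labeled past cases, and entirely hypothetical account features: score records by how anomalous they look, then push the most unusual ones to the top of the review queue.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per account: daily transaction count, average transfer
# amount, and share of transfers routed through high-risk jurisdictions.
normal = np.column_stack([
    rng.poisson(20, 500),            # typical activity volume
    rng.normal(250, 60, 500),        # typical transfer size
    rng.uniform(0.0, 0.05, 500),     # little high-risk routing
])
unusual = np.array([[180, 9800.0, 0.60],   # high volume, large amounts, risky routing
                    [ 75, 4900.0, 0.35]])
accounts = np.vstack([normal, unusual])

# Fit an unsupervised detector; "contamination" is the assumed anomaly share.
detector = IsolationForest(contamination=0.01, random_state=0).fit(accounts)
scores = detector.score_samples(accounts)   # lower score = more anomalous

# Rank accounts so investigators review the most anomalous ones first.
priority = np.argsort(scores)[:5]
print("Review queue (account indices):", priority)
```

The design choice that matters here is triage, not automated judgment: the model orders the queue, and humans still decide whether any given account is actually suspicious.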
However, these systems are not without risks. Effective governance, oversight, continuous monitoring, and remediation procedures are needed to manage the harm AI applications in this field can cause. Michigan's automated unemployment insurance fraud detection system is a cautionary tale: numerous false fraud determinations left many individuals in financial distress.
For AI's role in mitigating fraud, waste, and abuse to be worth its inherent risks, it has to work. The more focused "flyswatter" approach tends to be both more effective and less damaging: its narrow, well-defined scope gives agencies better control over measuring performance and updating systems when needed.
Ultimately,"flyswatters don’t just do less damage; they also just plain work better."