Public agencies are increasingly considering the use of artificial intelligence (AI) to enhance service delivery, driven by recent advancements in generative AI. A new report titled "To AI or Not To AI: A Practice Guide for Public Agencies to Decide Whether to Proceed with Artificial Intelligence," authored by Sahana Srinivasan, explores this trend and provides a framework for decision-making.
The report highlights that while AI has already been used in applications such as chatbots and fraud detection, its adoption should not be automatic. Agencies are encouraged to work through a deliberate decision-making process before committing to an AI system, because inappropriate use of AI can undermine individual rights and safety and waste public resources.
Despite existing frameworks for responsible AI use, there is little guidance on the prior question of whether AI should be adopted in the first place. The report addresses this gap by proposing a four-step framework for public administrators:
1. **Identify priority problems**: Agencies should analyze specific issues they face to ensure innovations target pressing needs.
2. **Brainstorm potential solutions**: Consider both AI-based and non-AI alternatives through research and consultation.
3. **Evaluate viability of AI solutions**: Use an "AI Fit Assessment" with criteria like evidence base, data quality, organizational readiness, and risk assessments.
4. **Document and communicate decisions**: Transparency in documenting the decision-making process builds public trust.
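The report's AI Fit Assessment is a qualitative exercise, but its gating logic can be sketched as a simple checklist that blocks adoption until every criterion is satisfied. The criterion names below come from the report; the data structure, function, and example notes are purely illustrative assumptions, not anything the report prescribes.

```python
from dataclasses import dataclass

@dataclass
class FitCriterion:
    """One criterion from the AI Fit Assessment, with a pass/fail judgment."""
    name: str
    met: bool
    notes: str = ""

def ai_fit_assessment(criteria):
    """Return (proceed, unmet): proceed only if every criterion is met."""
    unmet = [c.name for c in criteria if not c.met]
    return (len(unmet) == 0, unmet)

# Hypothetical assessment of a candidate AI project.
criteria = [
    FitCriterion("evidence base", True, "comparable deployments documented"),
    FitCriterion("data quality", False, "training data incomplete"),
    FitCriterion("organizational readiness", True),
    FitCriterion("risk assessment", True),
]

proceed, unmet = ai_fit_assessment(criteria)
# proceed is False; unmet lists "data quality", signaling a non-AI
# alternative (step 2) may be the better path for now.
```

A real assessment would of course involve weighted judgments and stakeholder input rather than binary flags; the sketch only captures the "all criteria must hold before proceeding" shape of the step.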
These steps are intended to help agencies make informed adoption decisions while protecting privacy, safety, and civil rights. The guidance applies broadly across different types of AI systems, from predictive models to generative tools.
"Most importantly," the report states, "these action steps should assist public administrators in making informed decisions about whether the promises of AI can be realized in improving agencies’ delivery of services and benefits while still protecting individuals."
The full report is available for those interested in exploring these recommendations further.