Cities and counties advance AI governance with new policies and guidelines

Alexandra Reeve Givens, President & CEO at the Center for Democracy & Technology | Official website


City and county governments across the United States are increasingly integrating artificial intelligence (AI) into public services such as transportation, healthcare, and law enforcement. Adopting these technologies, however, requires local governments to establish appropriate safeguards to mitigate potential risks to constituents.

Various organizations, such as the GovAI Coalition and the National Association of Counties, are assisting local governments in formulating and enacting AI policies. The GovAI Coalition, for example, has developed a set of template AI policies that several local agencies have adopted for their governance strategies.

An analysis of AI policy documents from 21 cities and counties reveals that these local governments often integrate guidance from federal, state, and other local sources. Many local AI policies emphasize alignment with existing legal obligations, risk mitigation strategies focusing on bias and privacy, the importance of public transparency in AI use, and the necessity of accountability and human oversight in AI-assisted decision-making.

Some cities, including New York and San Francisco, have enacted ordinances obligating agencies to maintain public inventories of their AI use cases. Miami-Dade County draws from various government resources, including those from Boston, San Jose, and Seattle, to inform its AI policies. Birmingham, Alabama, also acknowledges inspiration from Boston's AI guidelines.

Local governments also prioritize mitigating AI risks such as perpetuating bias and generating unreliable outputs. For example, guidelines in Lebanon, NH, and Alameda County, CA, acknowledge various types of bias and stress the need for corrective actions. Similarly, Baltimore, MD, requires city employees to verify outputs from AI tools before incorporating them into their work.

Public transparency is highlighted in AI guidelines from several cities and counties. Boise, Idaho, holds that "disclosure builds trust through transparency," while Seattle goes further by committing to make AI-related documents publicly available. In Santa Cruz County, CA, employees are required to disclose when AI substantially contributes to a work product.

The need for human oversight and accountability in AI use is stressed in the guidance documents. For instance, Alameda County, CA, requires employees to "thoroughly review and fact check all AI-generated content," indicating that responsibility for AI outputs rests with city and county employees. Non-compliance with AI guidelines could lead to disciplinary actions, as stated by the City of Lebanon, NH.

Ultimately, AI adoption at the local level should embody principles of transparency, accountability, and equity to enhance public service delivery. Local governments are encouraged to establish public-facing AI use case inventories, implement risk management practices for high-risk AI uses, and maintain proper human oversight. Community engagement is also critical, as demonstrated by the initiatives in Long Beach, CA, to involve community members in technology use discussions.
