Newsom working group emphasizes transparency in AI development report

Technology
Alexandra Reeve Givens, President & CEO, Center for Democracy & Technology | Official website

In September 2024, California Governor Gavin Newsom convened a working group of academics and policy experts to produce a report on frontier AI models. The report is intended to guide regulators and shape a framework for how California approaches the use, assessment, and governance of advanced AI technology. The working group released its draft report for public input in March, and the Center for Democracy & Technology (CDT) recently submitted feedback.

The CDT welcomed the working group's emphasis on the need for transparency in AI governance, noting that transparency is essential both to managing AI risks and to unlocking AI's benefits for a range of stakeholders. It enables rigorous research and informs the public about how AI systems affect their lives. Transparency requirements could also foster an environment in which AI systems are developed responsibly and companies are held accountable for any harm they cause.

Today, transparency in the AI sector rests largely on voluntary commitments from AI companies, which the CDT considers insufficient for managing AI risks. Companies have already walked back transparency commitments when business interests intervened; Google, for example, released its Gemini 2.5 Pro model without the safety report it had promised. This underscores the need for regulators to ensure that developers provide adequate transparency for safe and responsible AI development.

The CDT also urged the working group to elaborate on its transparency recommendations, arguing that transparency measures should be precisely defined and supported by clear theories of change. It recommended supplementing the report's proposed disclosures with additional essential information, including the technical safeguards a developer uses and its internal governance practices for managing risk. The CDT further suggested clarifying how developers assess the effectiveness of their safeguards and how they decide on risk mitigations before deploying a model. Finally, it recommended raising awareness of how developers plan to address significant risks and incentivizing developers to grant pre-deployment access to qualified third-party evaluators when necessary.

Readers are encouraged to access the full comments from the CDT.
