The potential risks and impacts of AI systems, throughout their development and deployment, are receiving increasing attention. Ensuring these systems are safe, fair, and effective requires a range of assurance methods.
Interest in AI assurance is growing among policymakers, civil society, and industry. AI assurance refers to a set of practices that measure, evaluate, and communicate the trustworthiness of AI systems through methods such as audits, red teaming, conformity assessments, and impact assessments.
Despite ongoing AI assurance activity, the field remains fragmented and its efforts largely ad hoc. Advocates argue that professionalizing the industry could make it more effective at promoting sound practices in AI development and adoption. Professionalization typically involves training or certification, but it may also include establishing codes of conduct, membership bodies, and standardized practices.
As AI use expands, it is crucial that policymakers, industry players, and AI deployers use the mechanisms available to them to uphold high standards while minimizing undue risks. Actively professionalizing AI assurance could support the development of safe, reliable technologies that benefit society. Achieving this will require collaboration among policymakers and regulators, industry stakeholders, and organizations such as standards development bodies. This report outlines key considerations to guide efforts toward that objective.