Ruchika Joshi, in an op-ed published April 17, 2025, in Tech Policy Press, examines the current state and future implications of AI agents. Companies are investing heavily in these technologies, with ambitious plans for AI agents to operate with increasing autonomy and become integral to the workforce and various industries. The term "AI agents" typically refers to systems designed to perform tasks autonomously on behalf of users, in contrast to more familiar AI systems such as chatbots.
These agents differ in that they not only assist with decision-making but execute decisions themselves, whether booking flights, managing tasks, or automating approvals. Despite their promise, AI agents are currently plagued by challenges common to AI systems, including bias, hallucination, and poor real-world performance, which have already led to significant mistakes and biased outcomes.
Joshi asks what measures are in place to prevent these agents from causing harm as they are built to operate with less oversight. The unsettling reality is that there is little clarity about what safety mechanisms are being implemented. Documentation from developers such as OpenAI and Anthropic has been vague, offering assurances of safety without concrete evidence, and some developers have not released documentation at all, or have delayed doing so.
Joshi emphasizes the need for transparency and proper documentation to ensure these AI agents can be deployed safely.
Read the full op-ed for more details.