Earlier in June, Isabel Linzer from the Center for Democracy and Technology (CDT) took part in a panel discussion at the Council of Europe’s 2025 Octopus Conference on cybercrime. The session focused on "Cyber Interference with Democracy," highlighting evolving disinformation tactics such as deepfakes, AI-generated content, manipulated engagement, and hyper-specific targeted disinformation campaigns.
Linzer pointed out that artificial intelligence has significantly altered political dynamics by enhancing digital influence campaigns. This shift has led governments, companies, campaigns, and civil society to reassess their priorities. Although measuring AI's direct impact on voters is challenging, Linzer stated that “we can say that it changed the electoral landscape and therefore, we need to think about the threat landscape and democracy in elections more broadly.”
She also addressed gaps in current regulatory frameworks such as the Telephone Consumer Protection Act (TCPA). According to Linzer, the legislation “has a lot of loopholes” and contains no AI-specific provisions. Still, she warned against reactionary measures, since much AI-generated political speech is satire or art created by ordinary people. Rather than treating AI as an entirely new threat, Linzer described it as “an accelerant of existing risks.”
The discussion then turned to state-level responses, which have drawn mixed reactions. More than a dozen US states have proposed laws on elections and AI; Connecticut, for example, has proposed criminal penalties for distributing unlabeled election deepfakes. Such measures could create a “chilling effect on political discourse,” potentially suppressing nonthreatening messages from ordinary people. Linzer called for balanced scrutiny of political influencers, social media transparency efforts, and company policies in areas such as advertising.
As rapporteur for the session on cyber interference in democracy, Linzer summarized the key concerns: the commoditization of “for-hire” cyber interference tactics is making them more accessible; the difficulty of establishing cause-and-effect relationships for interference complicates policy responses; and any response must balance countering interference with protecting free expression and other human rights.
Media literacy and fact-checking initiatives remain crucial despite recent backlash, fueled in part by inconsistent transparency from social media platforms and AI developers. The debate over regulation versus self-regulation underscores the urgent need to update laws governing digital spaces, such as rules for political advertising. Effective prevention requires collaboration among stakeholders, strengthened cybersecurity measures, and improved international cooperation.