Last week, the United States Citizenship and Immigration Services (USCIS) announced its intention to implement routine screening of applicants’ social media activity for signs of alleged antisemitism. This surveillance will become part of the decision-making process for millions of immigration cases. The agency will also monitor the social media accounts of foreign students, identifying any speech that could lead to the revocation of their legal status. The Department of State is simultaneously applying artificial intelligence to enforce a “Catch and Revoke” policy, targeting “pro-Hamas” sentiment among visa holders, particularly students involved in protests against Israel's actions in Gaza.
This marks the first time the U.S. government has invoked an obscure provision of immigration law, which allows the Secretary of State to deem a noncitizen deportable over potential “serious adverse foreign policy consequences,” to target noncitizens for their political expression. Although USCIS has engaged in social media monitoring since 2014, the current administration's broad definitions of the speech that warrants visa denial or revocation have raised constitutional concerns, concerns exacerbated by its reliance on imperfect social media monitoring technologies.
USCIS's actions are part of the aggressive immigration enforcement efforts seen under the current administration, which extend beyond previous limits. These efforts include deportation proceedings based on legally protected expression, as the Immigration and Customs Enforcement agency detailed in a now-deleted social media post. Noncitizens, aware of government surveillance, are reportedly deterred from freely discussing issues such as their immigration experiences, labor conditions, and domestic violence.
The use of automation to monitor social media is a calculated move by the U.S. government to target those who oppose official policies. The State Department has already revoked over 1,000 student visas, some for participation in First Amendment-protected activities. For instance, a Georgetown post-doctoral student lost his visa over social media posts supporting Palestine, which a Department of Homeland Security (DHS) spokesperson labeled “spreading Hamas propaganda.” Earlier this year, the former President of Costa Rica had his U.S. visa revoked shortly after he criticized the U.S. government online.
This ideological targeting takes place amid an immigration system plagued by application backlogs and inconsistent enforcement oversight, conditions that have worsened in the current political environment. What minimal oversight existed through entities like the Department of Homeland Security’s Office for Civil Rights and Civil Liberties has been weakened or dismantled, making it harder to detect errors and guarantee due process.
Applicants who encounter errors find themselves with few options: stay silent, risk retaliation by speaking up, or self-deport. The intensified social media surveillance threatens to deepen the system's existing inequities, further stifling noncitizens' free speech and participation in society.
The U.S. government's reliance on automated tools for social media monitoring compounds these risks. Such tools, likely based on keyword matching and machine learning models, including large language models like those underlying ChatGPT, have notable limitations and biases. The errors inherent in these technologies are likely to deepen existing deprivations of rights. Notably, during the first Trump administration, DHS itself declined to deploy such systems because of the potential for enforcement errors.
Bias in training data, which is sourced primarily from the open web, can lead models to disproportionately flag speech from certain communities. Combined with the administration's expansive definition of “antisemitism,” this bias risks subjecting individuals who express dissent on topics like Israel and Palestine to excessive scrutiny.
Additionally, these tools struggle with context. While capable of detecting specific words, they often miss sarcasm, irony, and satire, flagging posts that pose no threat. This problem plagued Facebook’s Dangerous Organizations policy, which repeatedly flagged benign content. Such inadequacies undercut claims of predictive accuracy, which rest on untested assumptions while encroaching on fundamental rights.
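To see why context-blindness produces false positives, consider a minimal sketch of keyword-based flagging. This is purely illustrative, not the government's actual system; the watchlist and function names here are hypothetical. A filter that matches terms without regard for stance treats condemnation and reporting exactly like endorsement:

```python
import re

# Hypothetical watchlist; purely illustrative, not any agency's real list.
WATCHLIST = {"hamas"}

def flag_post(text: str) -> bool:
    """Flag a post if any watchlist term appears, with no regard for context."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return bool(tokens & WATCHLIST)

# Condemnation and journalism are flagged exactly like endorsement:
print(flag_post("Hamas's attacks were indefensible."))       # True: condemnation
print(flag_post("Covering the Hamas ceasefire talks."))      # True: reporting
print(flag_post("Protesting my university's investments."))  # False: no keyword
```

Keyword matching of this kind cannot distinguish a post's stance from its subject matter, which is why speech quoting, condemning, or reporting on a flagged term gets swept up alongside whatever speech is ostensibly being targeted.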
Language diversity presents another challenge. Multilingual models, though improving, still rely heavily on English training data and miss informal speech patterns, dialects, and slang. This limitation can produce significant errors when analyzing non-English content.
U.S. immigration agencies' past reliance on flawed machine translation tools has already produced serious errors: mistranslations have led to asylum denials and wrongful detentions, illustrating the risks of these technologies. Deploying similar systems to detect disfavored viewpoints would only worsen the biased treatment of non-English speakers.
Overall, the use of automated social media monitoring risks penalizing lawful expression while deepening systemic problems. It could suppress the free speech of millions and chill global online discourse. Noncitizens, and even citizens who interact with them, may hesitate to speak freely or engage in constitutionally protected activities like protest, fearing the repercussions of social media surveillance.