USC Researcher Develops AI to Combat State-Sponsored Influence Campaigns on Social Media

USC researcher Luca Luceri has developed an AI-powered method to detect and characterize state-sponsored influence campaigns on social media. The method uses machine learning models to identify coordinated activities, such as hijacking hashtags and amplifying misleading content.

Trim Correspondents
In a groundbreaking effort to combat state-sponsored influence campaigns on social media, USC researcher Luca Luceri has developed an AI-powered method to detect and characterize these activities. The initiative, funded by the Defense Advanced Research Projects Agency (DARPA), focuses on identifying coordinated misinformation spread by bots and trolls from countries such as China, Cuba, and Russia.

Luceri's method employs machine learning models to identify orchestrated influence campaigns on X (formerly Twitter). The AI system uses both unsupervised and supervised models to detect coordinated activities, such as hijacking hashtags, amplifying misleading content, and mass-sharing propaganda. These sophisticated techniques aim to sway public opinion during significant geopolitical events.

Why this matters: The development of AI-powered methods to combat state-sponsored influence campaigns is crucial in safeguarding democratic processes and preventing the spread of misinformation. As foreign adversaries increasingly use AI to spread disinformation, it is essential to have robust tools to detect and prevent these activities.

The research, presented at the Web Conference on May 13, 2024, in a paper titled 'Unmasking the Web of Deceit: Uncovering Coordinated Activity to Expose Information Operations on Twitter,' highlights the precision of Luceri's models. 'My team and I have worked on modeling and identifying IO drivers such as bots and trolls for the past five to ten years,' said Luceri. 'In this paper, we've advanced our methodologies to propose a suite of unsupervised and supervised machine learning models that can detect orchestrated influence campaigns from different countries within the platform X.'

Luceri's team analyzed a dataset of 49 million tweets from verified campaigns originating in six countries: China, Cuba, Egypt, Iran, Russia, and Venezuela. They identified five key sharing behaviors used by IO drivers: co-retweeting, co-URL sharing, hashtag sequences, fast retweeting, and text similarity. The team constructed a unified similarity network called a Fused Network, which captures a broader range of coordinated sharing behaviors.
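The core idea can be sketched in a few lines: each sharing behavior yields a pairwise similarity network over accounts, and the per-behavior networks are combined into one fused graph. The sketch below is illustrative only, not the paper's exact construction; the cosine similarity over sparse count vectors and the simple averaging used to fuse edge weights are assumptions for the example, and the account names and traces are toy data.

```python
import math
from collections import Counter
from itertools import combinations

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def behavior_network(traces: dict) -> dict:
    """Build pairwise similarity edges from per-account behavioral
    traces (e.g. which tweet IDs each account retweeted)."""
    vecs = {user: Counter(items) for user, items in traces.items()}
    return {
        (u, v): cosine(vecs[u], vecs[v])
        for u, v in combinations(sorted(vecs), 2)
    }

def fuse(networks: list) -> dict:
    """Fuse several behavior networks by averaging edge weights
    (one simple choice among many possible fusion schemes)."""
    edges = set().union(*(n.keys() for n in networks))
    return {e: sum(n.get(e, 0.0) for n in networks) / len(networks)
            for e in edges}

# Toy traces: accounts A and B retweet the same tweets and share the
# same URL (coordinated); C behaves independently.
co_retweets = {"A": ["t1", "t2", "t3"], "B": ["t1", "t2", "t3"], "C": ["t9"]}
co_urls = {"A": ["u1", "u1"], "B": ["u1"], "C": ["u2"]}

fused = fuse([behavior_network(co_retweets), behavior_network(co_urls)])
# The A-B edge dominates the fused network, flagging the pair as
# a candidate for coordinated behavior.
```

In a real pipeline, each of the five behaviors would contribute its own network, and dense regions of the fused graph would be handed to the unsupervised and supervised models for classification.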

The importance of this research cannot be overstated, especially in the context of the upcoming 2024 U.S. elections. Top security officials have warned that foreign adversaries, including Russia, China, and Iran, will attempt to influence the elections using AI to spread disinformation and undermine trust in democracy. Advances in AI have made it easier to create lifelike images, videos, and audio that can deceive voters, posing a significant threat to election security.

Luceri's AI-powered method aims to empower social media platforms to detect and prevent the spread of misinformation and inauthentic content. By identifying influence campaigns with high precision, the technology reduces the risk of misclassifying legitimate users as IO drivers, ensuring that social media providers or regulators do not mistakenly suspend accounts.
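High precision matters here because every false positive is a legitimate account at risk of wrongful suspension. A quick illustration of the stakes, using made-up numbers rather than figures from the paper:

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Fraction of flagged accounts that are genuinely IO drivers."""
    return true_positives / (true_positives + false_positives)

# Hypothetical: of 1,000 accounts flagged as IO drivers, 980 really
# are. Precision is 0.98, meaning 20 legitimate users would still be
# wrongly flagged, so even small precision gains matter at scale.
p = precision(980, 20)
```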

As the 2024 U.S. elections approach, the development of such AI-powered methods is crucial in safeguarding democratic processes. Luceri's work represents a significant step forward in the fight against state-sponsored information operations, providing a robust tool to combat the sophisticated disinformation tactics employed by foreign adversaries.

Key Takeaways

  • USC researcher Luca Luceri develops AI-powered method to detect state-sponsored influence campaigns on social media.
  • The method uses machine learning models to identify coordinated activities, such as hijacking hashtags and amplifying misleading content.
  • The AI system can detect influence campaigns from countries like China, Cuba, and Russia with high precision.
  • The technology aims to empower social media platforms to prevent the spread of misinformation and inauthentic content.
  • The development of such AI-powered methods is crucial in safeguarding democratic processes, especially during the 2024 U.S. elections.