
Detection of Crowd Manipulation in Social Media



OBJECTIVE: Develop information stream analysis models and analytic tools to detect, characterize, and visualize computational propaganda, including influence campaigns that target the emotions of anger, hate, fear, and disgust. Ensure that the proposed capability can indicate and analyze influence campaigns in progress and evaluate their potential impacts on target audiences.

DESCRIPTION: Operating in the information environment today is highly challenging for Navy, Marine Corps, and other military warfighters. The information environment includes multiple platforms, social communities, and topic areas that are polluted with disinformation and attempts to manipulate crowds, spread rumor, and instigate social hysteria. Polarization of crowds is a significant problem, with nation-state actors conducting malicious campaigns to spread and amplify civil discontent and chaotic social dynamics, usually by manipulating the emotional mood of crowds. Hate, anger, disgust, fear, and social anxiety are heightened using computational propaganda. Current "sentiment" models are poorly suited to measuring emotional content in online media, and these measures are not currently well synchronized with measurements of manipulation by information actors intent on subverting civil discourse and discrediting the messages of civil authorities. The current state of the art in botnet detection merely identifies automated features such as identical content, identical targets, coordination of message dispersal, and similar measurable indicators. Hybrid or "cyborg" content distributors, and distributors who use various artificial-intelligence-enhanced capabilities ("smart" botnets), make detecting manipulated discourses more difficult. Inflamed crowds can result from a relatively small "signal" of inflammatory texts, pictures, messages, and videos, amplified just enough to "catch hold" in an already unstable environment. This occurred in Sudan, when messages about Benghazi in Libya caused mobs to attack embassies, with the British embassy set on fire only hours after the first messaging began.
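To make the botnet indicators above concrete, the following is a minimal sketch, in Python, of the kind of measurable coordination features described (identical content, identical targets, and tightly clustered timing). The message fields (`account`, `text`, `target`, `ts`) and the 30-second window are illustrative assumptions, not part of this topic.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical message records; field names are illustrative assumptions.
messages = [
    {"account": "a1", "text": "Share this now!", "target": "@mayor",
     "ts": datetime(2023, 1, 1, 12, 0, 0)},
    {"account": "a2", "text": "Share this now!", "target": "@mayor",
     "ts": datetime(2023, 1, 1, 12, 0, 3)},
    {"account": "a3", "text": "Different message", "target": "@news",
     "ts": datetime(2023, 1, 1, 15, 0, 0)},
]

def coordination_indicators(messages, window=timedelta(seconds=30)):
    """Group messages by identical (text, target) and flag groups that
    multiple accounts posted within a short time window -- the simple
    automated features that current botnet detection relies on."""
    groups = defaultdict(list)
    for m in messages:
        groups[(m["text"], m["target"])].append(m)
    flagged = []
    for key, msgs in groups.items():
        accounts = {m["account"] for m in msgs}
        times = sorted(m["ts"] for m in msgs)
        if len(accounts) > 1 and times[-1] - times[0] <= window:
            flagged.append({"content": key, "accounts": sorted(accounts)})
    return flagged

print(coordination_indicators(messages))
```

As the description notes, "smart" botnets and cyborg accounts defeat exactly this kind of surface matching, which is why the topic calls for models that go beyond such indicators.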
Current state-of-the-art approaches rely on older algorithms (such as Linguistic Inquiry and Word Count, LIWC) to evaluate messaging; more sophisticated models, such as Ekman's model of emotions or Russell's circumplex model and Scherer's update of it [Refs 1, 2], have been used to measure and evaluate emotions in blog posts. Natural Language Processing (NLP) models have also been used meaningfully in research settings [Ref 3]. "Feeling offended," a complex emotional state, has also been studied by D'Errico and Poggi [Ref 4]. Typically, bot-detection methods today indicate the presence or absence of bots, with some degree of accuracy [Ref 5]. Sentiment analysis capabilities are limited to, at best, a three-state scale (positive, negative, neutral). In crisis situations and emergencies, sentiment analysis is of little value: it cannot tell the operator what kinds of negativity are in play or what types of emotional issues are being expressed. Anger, fear, hate, disgust, and propaganda-fueled discourses require further unpacking. Communicators need to identify the gists, the stories, and ultimately the narratives in play, and need effective models of complex emotions, in order to develop an appropriate understanding of an influence campaign. The scope of this topic is to develop tools that can detect an attempt to influence or mislead people through social networks or other information technology means, assess the emotions being triggered and the level of the reaction in real time, and provide this information to multiple network users in a cloud environment. Technologies proposed under this topic might include models and analytic tools for measuring emotional content and logical fallacies, such as ad hominem arguments, ground shifts, and other rhetorical devices [Ref 6] commonly used in online propaganda techniques.
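To illustrate the difference between a three-state sentiment scale and a finer-grained model, the sketch below maps a (valence, arousal) score pair onto the four coarse quadrants of Russell's circumplex. The function name, the score range of [-1, 1], and the quadrant labels are illustrative assumptions; Scherer's update places many more emotion terms around the circumplex than this coarse quadrant split captures.

```python
def circumplex_label(valence, arousal):
    """Map a (valence, arousal) pair, each in [-1, 1], to a coarse
    quadrant of Russell's circumplex. Unlike a positive/negative/neutral
    scale, this distinguishes e.g. anger/fear (negative, high arousal)
    from sadness (negative, low arousal)."""
    if valence >= 0 and arousal >= 0:
        return "excited/elated"      # positive valence, high arousal
    if valence < 0 and arousal >= 0:
        return "angry/afraid"        # negative valence, high arousal
    if valence < 0 and arousal < 0:
        return "sad/depressed"       # negative valence, low arousal
    return "calm/content"            # positive valence, low arousal
```

A plain sentiment model would score both an enraged mob and a grieving crowd as simply "negative"; even this crude quadrant split separates the high-arousal states that propaganda campaigns exploit from low-arousal negativity.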
Capabilities to identify propaganda may be separate or synchronized with capabilities to identify, evaluate, and describe emotional content. Methods and techniques for creating baselines of emotional responses of audiences to civil authority messaging would be helpful. Initial algorithms, models, and tools are expected to use simulated data, though real-world cases can be used in development. Capabilities that include cultural aspects of crowd manipulation in non-English speaking contexts would be considered particularly responsive to this topic.

PHASE I: Develop prototype algorithms, models, and tools that use Government-supplied synthetic data, supplemented by case studies, to demonstrate a proof of concept for identifying computational propaganda content and the emotional valences of messages on Twitter, including indicators of manipulation and the capability to segment actor communities (i.e., botnet, bot-enhanced, human). Integrate simple models of emotions (such as Ekman's model) and consider using more sophisticated, finer-grained models (such as Russell's model with Scherer's updates). Note: these models are illustrative; developers are free to use other models of emotions. Ensure that the prototype successfully identifies sets of messages, gists, and stories; determines their emotional content in a general sense; estimates whether these sets are likely to represent manipulated discourse; and visualizes the discourse by gist (topic) and story (such as URL). Develop a Phase II plan.

PHASE II: Develop the models of emotion and propaganda to identify computational propaganda and its emotional valences and arousal states. Estimate the degree of artificial manipulation present in gists and stories in live information streams from Twitter, websites, and blogs. Ensure that model results are exportable to other tools (such as social network tools, visualization tools, databases, and dashboards). Make available to the Navy a user-friendly working prototype with built-in help capabilities for testing and evaluation in a cloud-based environment by multiple users, in the context of an online military virtual tabletop, as the final technical demonstration of this project. Conduct and complete model development and validation prior to Phase III.

PHASE III: Apply the knowledge gained in Phase II to further develop the interface, capabilities, and training components needed to transition the technologies to military customers. Make the technologies available on an existing cloud platform of the customer's choosing (e.g., SUNNET, Navy Tactical Cloud, Amazon Cloud), working with cloud owners to deliver a subscription-based tool interoperable with other tools in enclave settings. Expand and develop the model to cope with real-time information flows and evolving information tactics. The capability to detect influence campaigns designed to disrupt the credibility of organizations is highly needed worldwide. Western humanitarian organizations, international brands, and civil society organizations are continually under assault in the information environment by "trolls" and other malign actors, for political and apolitical purposes. Currently there is little available on the market for this capability; scientific models of emotion applied to social media are relatively new.


1. Langroudi, George; Jordanous, Anna; and Li, Ling. "Music Emotion Capture: Sonifying Emotions in EEG Data." Symposium on Emotion Modeling and Detection in Social Media and Online Interaction, 5 April 2018, University of Liverpool.

2. Harvey, Robert; Muncey, Andrew; and Vaughan, Neil. "Associating Colors with Emotions Detected in Social Media Tweets." Symposium on Emotion Modeling and Detection in Social Media and Online Interaction, 5 April 2018, University of Liverpool.

3. D'Errico, Francesca and Poggi, Isabella. "The Lexicon of Being Offended." Symposium on Emotion Modeling and Detection in Social Media and Online Interaction, 5 April 2018, University of Liverpool.

4. Badugu, Srinivasu and Suhasini, Matla. "Emotion Detection on Twitter Data Using Knowledge Base Approach." International Journal of Computer Applications, Volume 162, No. 10, March 2017.

5. Agarwal, Nitin; Al-Khateeb, Samer; et al. "Examining the Use of Botnets and Their Evolution in Propaganda Dissemination." Defense Strategic Communications, Vol. 2, Spring 2017.

6. van Dijck, José and Poell, Thomas. "Understanding Social Media Logic." Media and Communication, August 2013, Vol. 1, Issue 1, pp. 2-4.

KEYWORDS: Social Media, Computational Propaganda, Crowd Manipulation, Social Hysteria, Rumor

CONTACT(S): Rebecca Goolsby, 703-588-0558, rebecca.goolsby@navy.mil; Martin Kruger, 703-696-5349
