Journal abstract
The advent of the Internet has inadvertently augmented the functioning and success of violent extremist organizations. Terrorist organizations like the Islamic State in Iraq and Syria (ISIS) use the Internet to project their message to a global audience. The majority of research and practice on web-based terrorist propaganda relies on human coders to classify content, raising serious concerns about coder burnout, mental stress, and the reliability of the coded data. More recently, technology platforms and researchers have begun to examine online content using automated classification procedures. However, questions remain about the robustness of automated procedures, given insufficient research comparing and contextualizing the differences between human and machine coding. This article compares the output of three text analytics packages with that of human coders on a sample of one hundred nonindexed web pages associated with ISIS. We find that prevalent topics (e.g., holy war) are accurately detected by the three packages, whereas nuanced concepts (e.g., lone wolf attacks) are generally missed. Our findings suggest that naïve application of standard packages does not approximate human understanding, and therefore consumption, of radicalizing content. Before radicalizing content can be automatically detected, a closer approximation to human understanding is needed.
Do Machines Replicate Humans? Toward a Unified Understanding of Radicalizing Content on the Open Social Web
14 October 2019
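The comparison the abstract describes (machine-assigned topic labels versus human coders' labels on the same pages) can be illustrated with a minimal sketch. This is not the authors' pipeline; the data, topic names, and package output below are invented for illustration, assuming both machine and human codes are recorded as per-page binary indicators for each topic, and assuming scikit-learn is available for the agreement statistic.

```python
# Minimal sketch (hypothetical data, not the study's actual codes):
# compare one automated package's topic detections against human coding
# using raw agreement and Cohen's kappa per topic.
from sklearn.metrics import cohen_kappa_score

# 1 = topic detected on the page, 0 = not detected (10 pages shown here)
human_codes = {
    "holy_war":  [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],   # prevalent topic
    "lone_wolf": [0, 1, 0, 0, 1, 0, 0, 0, 1, 0],   # nuanced concept
}
machine_codes = {
    "holy_war":  [1, 1, 0, 1, 0, 1, 0, 0, 1, 1],
    "lone_wolf": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],   # nuance missed entirely
}

for topic, human in human_codes.items():
    machine = machine_codes[topic]
    raw = sum(h == m for h, m in zip(human, machine)) / len(human)
    kappa = cohen_kappa_score(human, machine)
    print(f"{topic}: raw agreement={raw:.2f}, Cohen's kappa={kappa:.2f}")
```

In this toy example the "lone_wolf" row shows why chance-corrected agreement matters: a package that never flags the concept still scores high raw agreement with the human coders, but its kappa is zero, mirroring the abstract's point that nuanced concepts are generally missed.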