Analyzing Conspiracies through Folklore, Epidemiology, and Artificial Intelligence


Digital disinformation is becoming a widely recognized threat—especially to public health—with unprecedented amounts of misinformation available online. In his first advisory, Surgeon General Vivek Murthy (2021) issued a stark warning that “Health misinformation is a serious threat to public health … we can and must confront it together.” World Health Organization Director-General Tedros Ghebreyesus (2020) concurred: “We’re not just fighting an epidemic; we’re fighting an infodemic.” Several epidemiological models have been proposed to fight this infodemic, including one in the New England Journal of Medicine (“The Covid-19 infodemic” 2021), which notes that “medical professionals and patients are facing both a pandemic and an infodemic—the first caused by SARS-CoV-2 and the second by misinformation and disinformation.” To address this problem, the authors “believe the intertwining spreads of the virus and of misinformation and disinformation require an approach to counteracting deceptions and misconceptions that parallels epidemiologic models by focusing on three elements: real-time surveillance, accurate diagnosis, and rapid response.” Indeed, the Center for Inquiry (publisher of Skeptical Inquirer) has been active in addressing medical misinformation on many fronts, from its Coronavirus Resources webpage to ongoing lawsuits over homeopathic products.

Conspiracy theories—a perpetual and pernicious bane of skepticism and critical thinking—have traditionally been examined through several prisms, including folklore, psychology, and social psychology. More recently, the field of computational analysis has emerged to help identify, address, and mitigate rumors and misinformation. Among the field’s pioneers is Timothy Tangherlini, a professor of folklore at the University of California at Los Angeles and a fellow of the American Folklore Society. “I study narrative, how stories emerge and circulate on and across social networks; rumors, legends, conspiracy theories, and so on,” he explained in an interview (Tangherlini 2022). Tangherlini has researched storytelling in a medical context, looking, for example, at the emergence of vaccine hesitancy and exemption-seeking behavior among parents on parenting websites and blogs.

I first became aware of Tangherlini’s work as a graduate student in public health at Dartmouth College, studying interdisciplinary approaches to mitigating medical misinformation. His work, adopting the infodemiology approach, also examined actual conspiracies versus conspiracy theories, for example comparing “Bridgegate” (in which then–New Jersey governor and former presidential candidate Chris Christie allegedly conspired to close key bridge lanes in 2013 as a form of retaliation against his political enemies) and “Pizzagate” (in which Hillary Clinton allegedly participated in child trafficking at Comet Ping Pong, a Washington, D.C.–area pizza restaurant; see “Pizzagate and Beyond,” SI, November/December 2017).

Computational Folkloristics

Tangherlini researches narrative structures of conspiracy theories. He and colleagues note that “Despite the attention that conspiracy theories have drawn, little attention has been paid to their narrative structure, although numerous studies recognize that conspiracy theories rest on a strong narrative foundation or that there may be methods useful for classifying them according to certain narrative features such as topics or motifs” (Bandari et al. 2017). The team “developed a pipeline of interlocking computational methods to determine the generative narrative framework undergirding a knowledge domain or connecting several knowledge domains.”

Drawing on public health research (specifically, anti-vaccination blog posts) combined with research on folklore legends, Tangherlini and colleagues built a model with three primary components that populate the narrative structure. Just as every story has certain consistent elements (such as a setting, protagonist, conflict, and resolution), the model tracks actants (people, places, and things), relationships between those actants, and a sequencing of those relationships.

Any storytelling event, such as a blog post or a news report, activates a subgraph comprising a selection of actants (nodes) and relationships (edges) from the narrative framework. The more often an actant-relationship is activated, the more likely it is to be activated in future tellings, with additions and deletions becoming less and less common. … As more people contribute stories or parts of stories, the narrative framework is likely to stabilize since the nodes and edges become more heavily weighted each time they are activated. (Tangherlini et al. 2020)

The result is a graphical model that allows researchers to visualize the underlying narrative patterns in a given dataset (see, for example, Figure 1).

Figure 1. Graphical model of rumor and narrative analysis from Tangherlini et al. (2020)
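To make this representation concrete, here is a minimal sketch in Python (using the networkx library) of how a narrative framework could be stored as a weighted graph, with actants as nodes and relationships as edges whose weights grow each time a retelling activates them. The actants and “storytelling events” are invented for illustration; this is not Tangherlini’s actual pipeline.

```python
# A minimal sketch (not Tangherlini's actual pipeline): a narrative framework
# as a weighted directed graph. Actants are nodes, relationships are edges,
# and every retelling that activates an edge increases its weight.
# All actants, relationships, and "storytelling events" here are invented.
import networkx as nx

G = nx.DiGraph()

# Each storytelling event (e.g., a blog post) activates a subgraph of
# actant-relationship-actant triples.
storytelling_events = [
    [("parent", "distrusts", "vaccine"), ("vaccine", "allegedly causes", "harm")],
    [("parent", "distrusts", "vaccine"), ("parent", "seeks", "exemption")],
    [("vaccine", "allegedly causes", "harm"), ("parent", "seeks", "exemption")],
]

for event in storytelling_events:
    for source, relation, target in event:
        if G.has_edge(source, target):
            G[source][target]["weight"] += 1
        else:
            G.add_edge(source, target, relation=relation, weight=1)

# The most heavily weighted edges approximate the stable core of the narrative.
for source, target, data in sorted(G.edges(data=True), key=lambda e: -e[2]["weight"]):
    print(f"{source} --[{data['relation']}]--> {target} (weight {data['weight']})")
```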

Tangherlini explains that “Much of the internet, on message boards and internet forums, for example, are ‘noisy’—they have a significant amount of irrelevant posts, spam, junk characters, and so on. What we’re interested in is the bias, because that bias is an outward representation of belief and worldview” (Tangherlini 2022). He and his colleagues didn’t come up with the network idea, but “we operationalized it and came up with a formalization of it so that if you can discover the actants in a space, and the relationships between those actants, you can aggregate them into context-aware domains.”

In this way, for example, researchers can trace and map the spread of misinformation and memes in the same way that epidemiologists can trace and map the spread of pathogens. And like public health surveillance that can alert officials to potential areas of concern (where, for example, pandemics emerge or vulnerable populations are exposed to high-risk environments), Tangherlini’s model has predictive power. Real-time data surveillance can help researchers identify those at elevated risk along pathways of misinformation. Tangherlini notes that the research isn’t limited to conspiracy theories and applies broadly to any misinformation:

If there’s a narrative framework in any of the data, then that will come out … the system we’ve devised is agnostic to input [i.e., topic]. …  Social media is really nothing other than the folkloric process. If we think of folklore as informal cultural expression circulating on and across social networks, then, well, that’s literally what Facebook and Twitter and all other social media sites are.

By viewing social media as networks of both transmission and inoculation, Tangherlini highlights the strong parallels between skepticism, folklore, and epidemiology, studying how and why things circulate in a given population. “Disease is in some ways very similar to ideas; you can get vaccinated against a disease, and you can also get inoculated with an idea. Once you get an idea, you can’t unsee it or unhear it. You might forget it after a while because it’s no longer creating meaning for you, but it’s still there” (Tangherlini 2022).
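The contagion parallel can be made literal in code. The following sketch, assuming a simple SIR-style analogy (my illustration, not a model from Tangherlini’s research), simulates a rumor spreading across a synthetic social network, with “recovery” standing in for losing interest in retelling it; every parameter is invented.

```python
# A minimal sketch of the contagion analogy (my illustration, not a model from
# the article): rumor spread simulated as a simple SIR-style process on a
# synthetic social network. All parameters and the network are invented.
import random

import networkx as nx

random.seed(42)
network = nx.barabasi_albert_graph(n=500, m=3)  # a scale-free "social network"

p_transmit = 0.05  # chance an active teller passes the rumor to a contact per step
p_forget = 0.02    # chance an active teller loses interest per step
active = {0}       # "patient zero" for the rumor
inactive = set()   # former tellers (the analogue of "recovered")

for step in range(50):
    newly_active, newly_inactive = set(), set()
    for node in active:
        for neighbor in network.neighbors(node):
            if neighbor not in active and neighbor not in inactive:
                if random.random() < p_transmit:
                    newly_active.add(neighbor)
        if random.random() < p_forget:
            newly_inactive.add(node)
    active = (active | newly_active) - newly_inactive
    inactive |= newly_inactive

print(f"After 50 steps: {len(active)} active tellers, {len(inactive)} former tellers")
```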

The models Tangherlini and his colleagues have developed can be applied in both top-down and bottom-up initiatives. For example, just as skeptics and folklorists can offer expertise in messaging and countering misinformation, they can also make use of existing surveillance networks. One example he gives is that “frontline workers can provide a huge service being part-time ethnographers themselves and start to collect the attitudes and expressions and the stories that they encounter during their clinical practice.” Nurses, field workers, social workers, and others could, with little additional effort, be trained in basic folklore gathering techniques. This could be as simple as paying attention to and tracking informal patient comments that contain medical misinformation and inquiring about the specific sources of that information—say, a friend, family member, or social media website.

That would be incredibly helpful because then you can start to map the narrative domains that are being activated by the storytelling and the beliefs the patients have. Once you understand the narrative domain, you can see where there may be things of low likelihood, beliefs more easily shown to be [false]. By doing so you’d also understand the signal senders, people who act as mediators between lots of other people. Those are the people to identify and then work with them to come up with communication that’s helpful. … Understanding what the beliefs are in complex communities and developing messaging that aligns with the cultural needs of the groups are in the long-term going to be more successful than “Just trust us, we’re scientists.”
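The “signal senders” Tangherlini describes, people who act as mediators between many others, map naturally onto standard network measures such as betweenness centrality. The sketch below, built on a wholly hypothetical sharing network, shows one way such mediators could be surfaced; it is my own illustration rather than the method used in the research discussed here.

```python
# A minimal sketch (my own illustration, not the article's method): treating
# "signal senders" as high-betweenness nodes, i.e., people who sit on many of
# the shortest paths between others in a hypothetical sharing network.
import networkx as nx

# Invented "who shared the rumor with whom" edges.
sharing = nx.Graph([
    ("ana", "bo"), ("bo", "cy"), ("bo", "dee"), ("dee", "ed"),
    ("ed", "flo"), ("dee", "gus"), ("gus", "hal"), ("gus", "ida"),
])

centrality = nx.betweenness_centrality(sharing)
mediators = sorted(centrality, key=centrality.get, reverse=True)[:3]
print("Candidate mediators to engage first:", mediators)
```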

Despite his successes, Tangherlini has encountered some resistance. “Using these network models is practically Epidemiology 101. With epidemiologists and applied mathematicians we get more traction, [but] with public health we often get, ‘Oh, we’d love to have you on the team but, really, we’ve already figured this out, thanks anyway.’” Tangherlini has found that “most public health groups are very protective of their stomping ground; they’re like, ‘No, we do this, we do the healthcare messaging, we have the surveys, we know what to do.’ [That’s their choice, but] I unfortunately wind up getting lots and lots of data about people not listening to your messaging, or responding to your messaging in exactly the opposite way than you wanted them to respond.”

Tangherlini’s work has attracted attention outside academia as well. The Guardian (U.K.) reported that he and other researchers used artificial intelligence tools to extract key actants from thousands of anti-vaccination social media posts—using a folkloric approach:

Tangherlini, whose specialism is Danish folklore, is interested in how conspiratorial witchcraft folklore took hold in the 16th and 17th centuries and what lessons it has for today. Whereas in the past, witches were accused of using herbs to create potions that caused miscarriages, today we see stories that [Bill] Gates is using coronavirus vaccinations to sterilize people. A version of this story that omits Gates but claims the vaccines have caused men’s testicles to swell, making them infertile, was repeated by the American rapper Nicki Minaj. The research also hints at a way of breaking through conspiracy theory logic, offering a glimmer of hope as increasing numbers of people get drawn in. (Leach and Probyn 2021)

Mapping rumors allows the broader stories to emerge, providing the opportunity to challenge parts of the story and introduce skepticism. As Tangherlini notes, “If people are looking at it and thinking ‘Wait a minute, I don’t trust at least this part of the narrative,’ you might be able to fracture those low-probability links between domains. And if you can fracture or question them, you get the potential for community level change” (Leach and Probyn 2021).

Emerging artificial intelligence research can help with these analyses; because the narrative patterns are content-neutral, the same analysis can be brought to bear on other conspiracies and rumors:

Discerning fact from fiction is difficult given the speed and intensity with which both factual and fictional accounts can spread through both recognized news channels and far more informal social media channels. Accordingly, there is a pressing need, particularly in light of events such as the COVID-19 pandemic, for methods to understand not only how stories circulate on and across these media, but also the generative narrative frameworks on which these stories rest. Recognizing that a series of stories or story fragments align with a narrative framework that has the hallmarks of a fictional conspiracy theory might help counteract the degree to which people come to believe in—and subsequently act on—conspiracy theories. … Knowledge derived from our methods can have clear and significant public safety impacts, as well as impacts on protecting democratic institutions. (Bandari et al. 2017)

This intersection between folklore, data analytics, and public health has the potential to be a powerful tool in identifying and mitigating misinformation. This could be especially useful given the increasing prevalence of “automated bots flooding parenting conversations with posts engineered to fit the antivaccination narrative framework”; algorithms are especially well suited to identifying content created by other algorithms (Bandari et al. 2017).

Identifying Misinformation

Other research has explored the use of logistic regression models to help public health officials distinguish between true and false health-related rumors circulating online. For example, Zili Zhang, Ziqiong Zhang, and Hengyun Li (2015) examined “the associations between the authenticity of health rumors and some indicators of the rumors themselves, including the lengths of rumor headlines and statements, the presence of certain features in rumor statements and the type of the rumor: wish or dread.” Citing research from the field of information manipulation theory (e.g., McCornack 1992), the authors concluded that false messages and rumors tend to contain linguistic and syntactical red flags that suggest deception. For example, false messages tend to contain a higher word count than do truthful messages; they also tend to be less specific. Focusing on health rumors on a Chinese-language website, the researchers developed eight hypotheses about internet health rumors, including: those with longer headlines/statements are more likely to be false; those referencing the names of specific people, places, and numbers are more likely to be true; rumors that contain information about their source are more likely to be true; rumors that contain cues suggesting they originated overseas are more likely to be false; messages that contain relevant images and hyperlinks are more likely to be true than those without; and dread-inducing or fearmongering internet health rumors are both more common and more likely to be true than those referencing hoped-for and positive results.
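As an illustration of the general approach rather than the Zhang et al. model or data, a logistic regression over hand-crafted rumor features might look something like this sketch, in which every feature value and label is invented:

```python
# A minimal sketch of the general approach (not the Zhang et al. model or data):
# a logistic regression over hand-crafted rumor features such as statement
# length, specificity, source attribution, media, and dread framing.
# The feature values and labels below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [statement_length, mentions_specifics, cites_source,
#           has_image_or_link, dread_framing]
X = np.array([
    [420, 0, 0, 0, 1],  # long, vague, unsourced, fear-based
    [150, 1, 1, 1, 0],  # short, specific, sourced, illustrated
    [510, 0, 0, 0, 1],
    [130, 1, 1, 0, 0],
    [300, 0, 1, 1, 1],
    [200, 1, 0, 1, 0],
])
y = np.array([0, 1, 0, 1, 1, 1])  # 1 = rumor turned out true, 0 = false

model = LogisticRegression(max_iter=1000).fit(X, y)
new_rumor = np.array([[480, 0, 0, 0, 1]])
print("Estimated probability the rumor is true:", model.predict_proba(new_rumor)[0, 1])
```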

The study concludes:

The indicators of an Internet rumor’s veracity can be gleaned from the rumors themselves. Consistent with our findings, previous studies report significant linguistic differences in communication between when people are telling the truth and lying. … The effects of false rumors thus can be reduced if users are explicitly warned how to tell what information may be false. To be effective, such warnings must specifically explain the indicators of a false rumor rather than generally mention that the rumor is false. (Zhang et al. 2015)

The researchers also reference the interdisciplinary nature of analyzing and refuting medical misinformation, for example highlighting the roles that folklorists and fact-checking outlets (such as Skeptical Inquirer and Snopes) play.

Details of how complex systems, artificial intelligence, and machine learning can be used to detect misinformation are beyond this article’s scope, but the basics are simple. Most misinformation appears in written form and on networks. Because of that, statistical analyses can help identify relevant features and behaviors on those networks that relate to the spread of misinformation, including users and pathways (and, less often, origins; see Torabi Asr and Taboada 2019). Adopting an approach similar to Tangherlini’s, Yuxi Wang and colleagues (2019) note that “three major components are involved in the creation, production, distribution and re-production of misinformation—agent, message and interpreter.” The research on medical misinformation “generally employ[s] sophisticated modelling and simulation techniques to identify the rumor propagation dynamics. However, this is still in its infancy and one recent systematic review of behavioral change models found that most papers investigating spread of health-related information and behavioral changes are theoretical, failing to use real-life social media data.”

Wang’s group found that “while there have been studies of the spread of misinformation on a wide range of topics, the literature is dominated by those of infectious disease, including vaccines. Overall, existing research finds that misinformation is abundant on the internet and is often more popular than accurate information.” Furthermore, those propagating medical misinformation are, unsurprisingly, typically people without formal institutional affiliations, along the lines of “citizen scientists” or “expert patients” whose “narratives of misinformation are dominated by personal, negative and opinionated tones, which often induce fear, anxiety and mistrust in institutions” (Wang et al. 2019).

A 2019 editorial in the British Medical Journal also noted that a study of vaccine-related posts on Twitter found that the three categories of accounts most likely to spread misinformation were Russian trolls, bots, and “‘content polluters’ devised to spread malware or unsolicited commercial content and to direct readers to sites that generate income” (McKee and Middleton 2019). A 2020 issue brief from Syracuse University’s Lerner Center for Public Health Promotion, titled “Digital Disinformation Is a Threat to Public Health,” also identified bots as a recent entrant into the misinformation field. Citing a Carnegie Mellon University study, the brief notes: “Researchers have determined that nearly half of all the Twitter accounts promoting the reopening of America [during the pandemic] were likely bots. The bots are one element of coordinated and partially automated disinformation campaigns, which may be responsible for achieving political agendas and sowing divide. Bots can be deployed in staggering numbers to promote conspiracy theories with disturbing real-world consequences” (McNeill 2020). The study found that of 200 million tweets discussing COVID-19, 82 percent of the top fifty most influential Twitter accounts were bots, and 62 percent of the top 1,000 retweets were shared by bots (McNeill 2020).

Although the literature on rumor research in the public health field is not extensive, significant progress has been made. For example, a 2016 study examined misinformation about Ebola on Chinese microblog platforms during the 2014–2015 outbreak (Fung et al. 2016). More recently, during the COVID-19 pandemic, a team of Indian researchers developed improved machine learning methods for identifying medical misinformation, achieving a 97 percent accuracy rate (Biradar et al. 2023).
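For readers curious what such a system looks like at its simplest, here is a minimal sketch of a generic text-classification pipeline of the kind used in studies such as Biradar et al. (2023). It is not their model, and the toy posts and labels are invented, so its output is illustrative only.

```python
# A minimal sketch of a generic text-classification pipeline of the kind used
# in studies such as Biradar et al. (2023). This is not their model; the toy
# posts and labels are invented, so the output is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Health ministry releases updated guidance on booster eligibility",
    "SHOCKING secret cure THEY don't want you to know, share before it's deleted",
    "Peer-reviewed trial results published by university researchers",
    "Miracle herb destroys the virus overnight, big pharma is hiding the truth",
]
labels = [0, 1, 0, 1]  # 1 = misinformation, 0 = not misinformation

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(posts, labels)
print(classifier.predict(["Secret overnight cure they are hiding from you"]))
```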

Finally, although social media is a primary source of misinformation, it can also be used to fight the problem. In India, for example, community health workers called accredited social health activists (ASHAs) work in rural areas responding to COVID-19 concerns. In addition to duties such as directing the public to vaccination locations and providing counseling, they are charged with monitoring social media posts containing misinformation. For example,

Several leaders of the elected far-right Bharatiya Janata Party have been vocal about drinking cow urine to prevent COVID, with a few even making videos of it. Last year, its leaders organized a gaumutra (cow urine) drinking event. [ASHA worker Bharti] Kamble came across several such messages. “What do you even say to something like that? A lot of people in fact did try it.” She started sourcing scientific messages from doctors and began messaging about COVID treatments that actually worked. “If you directly counter misinformation saying it’s wrong, then people don’t listen and start provoking you,” she explains. Instead, her antidote is disseminating scientific information in the easiest possible way. Eventually, she says, people realized cow urine is no cure for COVID … “Even if people disagree with our messages, they do read and discuss. We spend at least three hours every day countering such misinformation.” (Jain 2021)

Responding to Rumors

The measures and methods discussed to identify and analyze medical misinformation are of course only half the battle. In these pages, Mick West (2022) offered advice gleaned from years of personal experience trying to refute misinformation online. He urges a quick response (“A viral bunk video is like a runaway nuclear reactor. You need to get the cooling rods in there as soon as possible”); fighting fire with fire (that is, using the same medium and techniques where the misinformation is first encountered; “People who just watched an entertaining video on a topic of interest to them are unlikely to read an article explaining why that video was wrong. But they might watch an interesting looking video response”); and brevity (keep it short and punchy, addressing main points upfront and offering links to additional information). West notes that “We cannot always stop a falsehood from going around the world. But if we get our boots on quickly enough, we can travel along and minimize its effect.”

A quick response requires skeptics and public health agencies to be proactive, ideally monitoring rumors in real time to stay current on what’s circulating, where, and among which people. Understanding the nature of the misinformation is also vital. To take a recent example, reasons for COVID-19 vaccine hesitancy were often misunderstood and misrepresented in the news and on social media. It’s a form of misinformation about misinformation.

It may be of little consequence whether the average person understands the nuances of why some people refused the vaccine, but it’s paramount for public health officials. Much of the reporting about the low vaccination rates among African Americans, for example, centered on historical distrust of doctors and the medical establishment. But this is not the whole picture, and focusing on that too heavily risks not only victim-blaming an already marginalized community but also missing opportunities to mitigate misinformation.

Health education campaigns tend to be written for a general audience and provide accurate (if generic) information about, for example, the benefits of weight loss, recognizing signs of a stroke, and so on. Campaigns that address misinformation, on the other hand, benefit from a more targeted approach, because misinformation claims themselves tend to be specific. Corrective information that’s too specific may be impractical (unless the claim it addresses is widespread, such as the false claim that COVID-19 vaccinations induce infertility), and information that’s too broad will be dismissed as irrelevant.

Because of this, the corrective information should, when possible, match the level of generalization of the misinformation. Take, for example, the following concerns that have been expressed regarding COVID-19 vaccine hesitancy: “Vaccines are dangerous”; “Vaccines are untested”; “Vaccines were rushed”; “Vaccines contain aborted fetal material”; “Vaccines cause miscarriages”; and “Vaccines cause menstrual cycle disruption.” Each of these carries a larger message tailored to a specific audience and thus gains traction with different people. The claim that vaccines are linked to abortions, for example, would give pause to many pro-life religious conservatives, and the rumor that vaccines cause miscarriages may be relevant to women of childbearing age, whereas the claim that vaccines were “rushed” (that is, of unproven safety) would have traction among a much broader audience.

The response should therefore be roughly proportional to the specificity and prevalence of the original claim. It’s impossible and impractical to launch a full-scale campaign addressing all forms of misinformation on all levels. Folklore, like wildfire, is notoriously difficult to contain. But a proactive approach can pre-identify likely variants of these rumors based on past experience and predictive patterns, allowing public health officials to quickly adapt preexisting templates for a rapid response once a rumor is detected via surveillance.

Though I have focused on the public health–related consequences of medical misinformation, it’s important to keep in mind that rumors and conspiracies have broader consequences, including economic, political, and social ones. In the end, Tangherlini acknowledges, “You’re never going to reach the ‘true believers,’ but you would be able to have a much more nuanced understanding of storytelling in these communities.” Referencing the then-prominent news stories of low vaccination rates among police officers, he asked, “For example: Why are cops so against vaccines? … I suspect in their storytelling you’d find the answer pretty quickly” (Tangherlini 2022). Skeptics and epidemiologists could of course conduct surveys and focus groups to determine the answer, but applying folklore-inspired information technology to existing narratives could be much faster, cheaper, and more effective at both diagnosing the key objections and mitigating them. Harnessing cutting-edge technology, including AI, to stem age-old myths and misinformation may be a task for the next generation of skeptics.

 

References

Bandari, R., Z. Zhou, T. Qian, et al. 2017. A resistant strain: Revealing the online grassroots rise of the antivaccination movement. Computer 50(11): 60–67. Online at http://dx.doi.org/10.1109/MC.2017.4041354.

Biradar, S., S. Saumya, and A. Chauhan. 2023. Combating the infodemic: COVID-19 induced fake news recognition in social media networks. Complex & Intelligent Systems 9: 2879–2891. Online at https://doi.org/10.1007/s40747-022-00672-2.

The Covid-19 infodemic: Applying the epidemiologic model to counter misinformation. 2021. The New England Journal of Medicine (August 19).

Fung, I.C., K.W. Fu, C.H. Chan, et al. 2016. Social media’s initial reaction to information and misinformation on Ebola, August 2014: Facts and rumors. Public Health Reports 131(3): 461–473. Online at https://doi.org/10.1177/003335491613100312.

Ghebreyesus, T. 2020. Speech at Munich Security Conference. World Health Organization (February 12). Online at https://www.who.int/director-general/speeches/detail/munich-security-conference.

Jain, S. 2021. India’s healthcare workers are busting misinformation on WhatsApp. The Verge (June 17). Online at https://www.theverge.com/22535642/covid-misinformation-india-asha-whatsapp.

Leach, A., and M. Probyn. 2021. Why people believe Covid conspiracy theories: Could folklore hold the answer? The Guardian (October 26). Online at https://www.theguardian.com/world/ng-interactive/2021/oct/26/why-people-believe-covid-conspiracy-theories-could-folklore-hold-the-answer.

McCornack, Steven. 1992. Information manipulation theory. Communication Monographs 59: 1–16. Online at https://doi.org/10.1080/03637759209376245.

McNeill, A. 2020. Digital disinformation is a threat to public health. Lerner Center for Public Health Promotion: Population Health Research Brief Series 31. Online at https://surface.syr.edu/lerner/31.

McKee, M., and J. Middleton. 2019. Information wars: Tackling the threat from disinformation on vaccines. British Medical Journal 365: l2144. Online at https://doi.org/10.1136/bmj.l2144.

Murthy, Vivek. 2021. Confronting Health Misinformation: The U.S. Surgeon General’s Advisory on Building a Healthy Information Environment. Office of the Surgeon General. Washington (DC): US Department of Health and Human Services. Online at https://www.ncbi.nlm.nih.gov/books/NBK572168/.

Tangherlini, Timothy. 2022. Interview with Benjamin Radford.

Tangherlini, Timothy, Shadi Shahsavari, Behnam Shahbazi, et al. 2020. An automated pipeline for the discovery of conspiracy and conspiracy theory narrative frameworks: Bridgegate, Pizzagate and storytelling on the web. PLoS ONE 15(6): e0233879. Online at https://doi.org/10.1371/journal.pone.0233879.

Torabi Asr, F., and M. Taboada. 2019. Big Data and quality data for fake news and misinformation detection. Big Data & Society (January). Online at https://doi.org/10.1177/2053951719843310.

Wang, Y., M. McKee, A. Torbica, et al. 2019. Systematic literature review on the spread of health-related misinformation on social media. Social Science & Medicine 240: 112552. Online at https://doi.org/10.1016/j.socscimed.112552.

West, M. 2022. Truth gets its boots on. Skeptical Inquirer 46(2) (March/April): 38.

Zhang, Z., Z. Zhang, and H. Li. 2015. Predictors of the authenticity of internet health rumours. Health Information & Libraries Journal 32: 195–205. Online at https://doi.org/10.1111/hir.12115.

Benjamin Radford

Benjamin Radford, M.Ed., is a scientific paranormal investigator, a research fellow at the Committee for Skeptical Inquiry, deputy editor of the Skeptical Inquirer, and author, co-author, contributor, or editor of twenty books and over a thousand articles on skepticism, critical thinking, and science literacy. His newest book is America the Fearful.




