“In this election, the troll farm is in Phoenix.”
Over the past few months, tweets and Facebook posts echoing Trump campaign talking points have been flooding social media platforms.
But these aren’t your usual random bots mass-releasing messages like “Don’t trust Dr. Fauci,” or falsely claiming mail-in ballots are bound to lead to election fraud.
A thorough investigation by the Washington Post has uncovered that these tweets and posts are being churned out by teens and young adults, some under the age of 18, hired by Turning Point Action (TPA), an affiliate of Turning Point USA, to flood their social media feeds with these types of messages.
Based in Phoenix, Turning Point USA is a conservative and outwardly pro-Trump youth organization, known for its outspoken young leader Charlie Kirk.
The posts in question include blatantly false information, such as claims that 28 million ballots have gone missing in the past four elections, or that the CDC is inflating the number of COVID deaths. (In reality, the CDC count is, if anything, likely an underestimate.)
Many parties have confirmed the legitimacy of the operation, from participating teens and their parents to friends of teens who treated Turning Point's gig as a summer job.
Turning Point Action Pays Teens to Spread Lies on Social Media
So why pay young people to spread campaign propaganda? And how?
Most social media companies have guardrails in place to curtail the spread of misinformation on their platforms (like the kind Russia used during the 2016 campaign). For instance, Facebook prevents users from operating multiple accounts.
Before being rebuked by Twitter and Facebook, TPA's covert social media campaign skirted these platforms' rules by employing humans, not bots, who posted from their own personal accounts.
The campaign’s structure was simple: Participants pulled the post’s basic language from a shared online document. Before posting, TPA instructed the teens to tweak the language to make them seem more authentic and “posted the same lines a limited number of times to avoid automated detection by the technology companies.”
In response to the Post investigation, Twitter suspended at least 20 accounts involved with the campaign, citing “platform manipulation and spam.”
The company's rules state: “You may not use Twitter’s services in a manner intended to artificially amplify or suppress information or engage in behavior that manipulates or disrupts people’s experience on Twitter.”
Facebook, too, removed a number of accounts; the company says its investigation is ongoing.
Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab, classified this type of behavior as a troll farming initiative.
“The scale and scope of domestic disinformation is far greater than anything a foreign adversary could do to us,” said Brookie. “In this election, the troll farm is in Phoenix.”
Despite the acknowledgement from those directly involved in the spread of disinformation, as well as experts and the social media platforms themselves, TPA maintains the efforts are simply a result of youth activism. Charlie Kirk has defended the group’s behavior, stating any comparison to a troll farm is a serious “mischaracterization.”
What Can Be Done to Counteract this Misinformation Effort?
Ms. spoke with Pik-Mai Hui, an Indiana University Ph.D. student working with the Observatory on Social Media, a research group aimed at curbing the spread of misinformation and the manipulation of social media. His research played a large role in identifying TPA’s coordinated, inauthentic network behavior.
We asked him the best ways to combat trolling and misinformation on social media.
“For general users on social media, develop news literacy and critical thinking skills for evaluating the reliability of any information on social media, so that they do not participate (by resharing) in any part of these malicious activities unconsciously,” Hui told Ms. “If they see any such malicious activities, they should report them to the platforms, rather than just skipping over them.”
Hui also said parents can play a large role by teaching their children responsible social media behavior.
“Parents need to educate their children to use social media properly and morally,” he said. “Paid trolling is among the things that one should not do on social media; it can have a huge implication on democracy. There are other problematic behaviors from the young social media users as well, such as cyber-bullying.”
And while social media platforms say they’re hard at work to combat trolling in the run-up to the November elections, “it’s always been a cat-and-mouse game between researchers—both at the social media companies and in academia—and the malicious actors,” Hui said.
“We find patterns and build detectors, and then they develop new patterns to escape our detection. Despite the long history of this game, I am optimistic that the recent rise of awareness of society will eventually bring us closer to its end.”
But in the meantime, one thing is for sure: The truth speaks for itself. And regardless of the intention behind the posts, this social media blitz has given us all a timely reminder for this election cycle: Don’t believe everything you see on the internet—and remember to check the facts.