I turn my attention to Dublin, where I find Fabian (not his real name), a former moderator who, until earlier this year, worked for Twitter via the outsourcing company CPL. Unlike another moderator I've spoken to who worked for Twitter, he remembers seeing accounts the company had internally labeled as "suspicious." He doesn't know who, or what, applied those labels. "I guess the system used to do it," he told me. "Or maybe there is a team dedicated to it." Talking to him, I become sure that someone, somewhere, is moderating Twitter bots, and that this effort is new, dating back only a few years.
Another clue arrives in late August, while I'm on summer vacation. Peiter Zatko (known as Mudge), Twitter's former head of security, has decided to turn whistleblower and submit to the US Congress a report he commissioned from an independent company. The document names the internal Twitter teams that were in charge of spam and other attempts to "manipulate" the platform at the time: one called Site Integrity, the other Health and Twitter Services (two teams Twitter has since merged). Another line in the report jumps out. It reads: "Content moderation is outsourced to vendors, most of whom are located in Manila."
Twitter has long used outsourcing firms to hire people in the Philippines to remove violence and sexual abuse material from the site. But could they be moderating spam too? In the industry, there are the big, recognizable names, such as Accenture and Cognizant. But there are lesser-known companies too, such as Texas-based TaskUs. Eventually I come across a company I haven't heard of: a New Jersey-based business called Innodata. And for the first time, I start hearing the job description "spam moderator."
I speak to one Innodata employee who confirms the company is moderating spam for Twitter, though he himself works on another team. Another says he has been involved in "categorizing" fake accounts, some of them masquerading as famous sports teams. Both ask that their names and locations not be published for fear of losing their jobs. According to a recent job posting, Innodata has around 4,000 employees in Canada, Germany, India, Israel, the Philippines, Sri Lanka, the United States, and the United Kingdom.
By searching specifically for moderators at Innodata, I finally find John, the employee who shares the picture of the woman in the swimsuit. He explains there are 33 full-time staff moderating spam for Twitter and more than 50 freelancers. He believes Innodata didn’t start moderating spam until March 2021.
Every day, John says, he looks at up to 600 Twitter posts and accounts in a third-party app called Appen, flagging each one as either "spam" or "safe." (Appen is an Australian company that uses a global workforce to train artificial intelligence for major technology firms.) The majority of John's team are based in India or the Philippines, he says. He believes the tweets he reviews are first selected by artificial intelligence trained to spot Twitter spam, then passed on to the team of human moderators.
For each tweet he is sent, Appen asks John two questions: "Would you consider the above tweet to be content spam?" and "Would you consider the user account to be violating content spam policy?" He marks a post as spam if it falls into one of nine categories: Is it advertising counterfeit products or unauthorized pharmaceuticals? Is it trying to buy or sell user profiles for services such as Netflix? Is it trying to phish or scam others, sharing suspicious links, or making unrelated replies to a conversation thread?