Bot or not?

Erin Gallagher
7 min read · Feb 18, 2020


The word “bot” has become a catch-all to describe anything on social media that people perceive as suspicious or contrived, but overuse of the term has rendered it meaningless and current methods for “bot identification” are… lacking.

A funny Mexican meme about sycophantic accounts that supported former Mexican senator Omar Fayad

One thing I’ve noticed in nearly 5 years of tracking Twitter bots is that in every country where they are active, people not only recognize them but also give them nicknames. Some examples:

Mexico: peñabots
Honduras: JOH-bots
Peru: Fujitrolls
Turkey: AK-trolls or Army of 6000
Egypt: legan electronya or لجان الكترونية (electronic committees)
Guatemala: Net centers
Argentina: Call centers
Puerto Rico: Fotutos51

In Mexico, the most common term used has been “peñabots,” named for the former president Enrique Peña Nieto. Sometimes bots are named after other politicians too. When former Mexican senator Omar Fayad tried to pass a terrible internet bill in 2015, fake accounts filled his mentions with sycophantic praise of the bill. Those accounts were called “FayadBots” and people made hilarious memes (see image above).

This has also happened in the US. In 2018 some obvious sock puppets supporting the campaign of Doug Ducey in Arizona were dubbed “Duceybots.”

I don’t believe anyone thought that former Mexican president Peña Nieto, or former Mexican senator Fayad, or current Arizona governor Doug Ducey were running actual botnets or that the accounts labeled as “bots” were even automated. The term “bot” in these instances was just a general term that was used to call out what observers perceived as astroturf.

An unfortunate trend has developed in the US over the past 4 years where any suspicious accounts are called “Russian bots” or “Russian trolls,” usually with zero evidence that the accounts are automated or linked to the Russian government. “Russian bots” is just the term that’s been used (and abused) in US media to explain social media manipulation during the 2016 election. It’s the general label people in the US first learned to describe this activity, and it stuck.

Many of us who research disinformation know that attribution is difficult, if not impossible, and that even identifying automated activity is not so simple.

Various splainers about how average people can tell if an account is a bot or not have been published over the past 4 years, and several tools have been deployed for bot detection. All of these guides and tools have shortcomings, but those guides and tools are what have been provided to the public to use, so the current discourse about bots, well… it is what it is.

Updating definitions and detection methods

One of the most common ways to identify bots on Twitter is by calculating an account’s tweets per day: dividing the account’s total tweet count by the number of days since it was created. Researchers have varying thresholds for how many tweets per day indicate an account might be automated. In 2016, the Oxford Internet Institute considered 50+ tweets per day suspicious but has since revised its terminology. DFRLab views 72 tweets per day as suspicious. In 2018, University of Washington researcher Ahmer Arif told Mother Jones, “if an account has more than 50 to 60 tweets a day, that suggests automation.”
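As a rough illustration of how that calculation works in practice, here is a minimal sketch in Python. The function names, parameters, and the 72-tweets-per-day default are assumptions made for this example, not the method of any particular detection tool.

```python
from datetime import datetime, timezone

def tweets_per_day(total_tweets: int, created_at: datetime) -> float:
    # Days since the account was created; floor at 1 so very new accounts
    # don't cause a division by zero.
    days_active = max((datetime.now(timezone.utc) - created_at).days, 1)
    return total_tweets / days_active

def is_high_volume(total_tweets: int, created_at: datetime,
                   threshold: float = 72.0) -> bool:
    # 72 tweets/day mirrors the DFRLab threshold mentioned above; other
    # researchers use 50-60. High volume suggests, but does not prove, automation.
    return tweets_per_day(total_tweets, created_at) >= threshold

# Example: an account created Jan 1, 2015 that has posted 130,000 tweets
# averages roughly 69 tweets per day as of early 2020.
print(tweets_per_day(130_000, datetime(2015, 1, 1, tzinfo=timezone.utc)))
```

As the rest of this post argues, the number this calculation spits out is only a starting point, not a verdict.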

The 72 tweets per day threshold was recently cited by Nir Hauser, CTO of VineSight, a tech company that, according to its website, fights misinformation on social media, in a December 2019 CNBC article about an account called @Surfermom77 that was retweeted by Trump.

Source: CNBC

I used to think 100 tweets per day was a reasonable threshold, but I changed my mind.

I’ve personally made an effort to stop using the term “bot” in my research unless I can prove that an account is using a script or a custom app to tweet. If a Twitter account is tweeting a lot, I usually call it a “high-volume user” or sometimes a “political spammer,” but I try to add clear caveats that high volume does not equal automation/bot.

In fact, I’ve encountered several high-volume accounts that I know are operated by flesh and blood humans.

The Associated Press tracked down an account that was averaging 72 tweets per day. Turned out it was a 70-year-old grandmother who spent up to 14 hours per day tweeting. Not a bot.

An activist in Portland allegedly operates multiple Twitter accounts, and at least one of those accounts, before it was suspended, was averaging 138 tweets per day. The suspended account had an anonymous profile and amplified political tweets and hashtags, which checks off several of the bot boxes in DFRLab’s #BotSpot guide. But other activists in Portland told me there’s a real person behind the anonymous avi, so I know that account was not a bot.

Last month I stumbled across an account that had been tweeting 300–400 times per day in recent weeks. Mostly retweets, but a lot of self-authored tweets and replies too. The account also had a link to Amazon in their bio to buy a book they wrote. A lot of their replies were one-word answers or emojis: strings of simple tweets in full conversations they were having with other users. Probably not a bot.

A while back I chatted with a Resistance activist who didn’t realize he was averaging 85 tweets per day. The activist uses a pseudonymous profile name and avi but has posted livestreams from protests in the town where he lives. Not a bot.

A Mexican researcher once told me that an activist there can average 50–70 tweets per day if they’re engaged in a digital protest or tweeting about a breaking news event. I’ve encountered all of these examples, and many more, of high-volume power users whose activity should not be labeled as automated, yet according to current guidelines they would be labeled as bots.

Experts say there are other factors we need to consider, like alphanumeric usernames or anonymous profiles, and that all of these elements need to be taken into consideration in bot identification. I don’t disagree with that, but I think we need to revise our “bot spot” guides and detection methods, and we need to be clear about that so average social media users understand that current standards need to be updated.
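To make that concrete, here is a minimal sketch of what combining several of those signals might look like. The specific signals, the regex, and the threshold are illustrative assumptions, not drawn from any published guide, and as the examples above show, real human power users can trip several of these checks at once.

```python
import re

def bot_like_signal_count(username: str, has_default_avatar: bool,
                          bio: str, avg_tweets_per_day: float) -> int:
    """Count how many weak 'bot-like' signals an account shows.

    Each signal on its own is a poor indicator; a high count is a prompt
    for human review, not proof of automation.
    """
    score = 0
    if re.search(r"\d{6,}$", username):   # long trailing digit string, common in auto-generated handles
        score += 1
    if has_default_avatar:                # anonymous / default profile image
        score += 1
    if not bio.strip():                   # empty profile bio
        score += 1
    if avg_tweets_per_day >= 72:          # volume threshold discussed above
        score += 1
    return score
```

A 14-hour-a-day grandmother or a pseudonymous protest livestreamer can easily score 2 or 3 here, which is exactly why these checklists need caveats.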

So if these high-volume accounts are not automated bots what else could be going on? I have a theory.

Addictive design & emotional marketing tactics

Tristan Harris wrote a blog in 2016 titled “How Technology is Hijacking Your Mind” and it could not be more relevant. Disinformation campaigns are bad, but the reason they are effective is because bad actors know entire populations of people are glued to their screens, and that’s by design. Social media websites are engineered to keep us clicking and scrolling and to never put down the slot machines in our pockets. These addictive features are baked into the design of social media websites and apps.

One simple example: we immediately check new notifications because Facebook designed its notification icons to grab our attention. Our eyes are instantly drawn to the little red flags and we’re compelled to click them.

How does the addictive design of social media platforms affect political activists? Does it compel them to be more active? Does it create political power users? What’s normal activity for a political activist/power user as opposed to a journalist, or a K-Pop stan, or your Uncle Bob the super-Trump fanatic? I’m not sure if that’s even been studied, certainly not as much as we’ve studied the looming bot menace.

Now imagine a political activist power user whose addictively designed newsfeeds are filled with emotional marketing tactics and fear-bait headlines.

What are the effects of recommendation engines that are designed for endless scrolling of content that’s been engineered to trigger emotional responses, posted on platforms that are engineered to keep us extremely online?

I don’t know but I feel like this is all very unhealthy.

Sample of Breitbart ALL CAPS fear-bait

We already know there are limits to what content social media platforms are realistically going to moderate during election cycles. But they’re not going to fix the design of their products; that would impact their entire business model.

So I don’t really know how to end this blog except to say I’m very frustrated. Frustrated with the current discourse about disinformation, with the flawed guides and tools the public has been told to use, with the limited levels of digital literacy that have been achieved so far and with the largely mediocre reporting on “bots” over the past 4 years. And I’m not sure anymore what the actual “threats to democracy” even are. But what’s the point of fighting disinformation if the battlefield we’re fighting on is fundamentally broken?


Erin Gallagher

Social media researcher, multimedia artist, currently research assistant with the Technology and Social Change Project