from the something's-happening dept
Over the weekend, the group Guardians.ai released a fascinating report detailing what appears to be a massive influence campaign taking shape on Twitter. By way of disclosure, one of the report's three key authors, Brett Horvath, is also one of the people behind the election simulation game that we helped create and run, though I have nothing to do with this new report. The report is well worth reading in full, but if you don't feel like doing that, Bloomberg also has a write-up.
The key to the report is that they have identified some truly fascinating patterns among a cluster of users on Twitter who, at the very least, appear to be acting in a manner that suggests some attempt to influence others. I should note that, unlike other such reports that jump to conclusions, the authors of this report are very, very, very clear that they're not saying these are "bots." Nor are they saying these are Russian trolls. Nor are they saying that a single source is controlling them. Nor are they saying that everyone engaged in the activity they spotted is officially part of whatever is happening. They note it is entirely possible that some very real people are a part of what's happening and might not even know it.
However, what they uncovered does appear strange and notable. It certainly looks like coordinated behavior, at least in part, and it appears to be designed to boost certain messages. The report specifically looks at statements on Twitter about voter fraud using the hashtag #voterfraud, but it appears that this "network" is targeting much more than that. What made the report's authors take notice is that, in analyzing uses of the tag #voterfraud, they noticed that it appeared to have a "heartbeat." That is, it would spike up and down on a semi-regular basis, tied to nothing in particular. There wasn't a specific news hook to explain why this entire network would suddenly talk about #voterfraud, and they wouldn't talk about it all the time. But... every month or so there would suddenly be a spike.
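The report doesn't share its analysis code, but the "heartbeat" idea is easy to make concrete. Here's a minimal sketch, in Python with entirely made-up data, of how one might test a daily hashtag count for that kind of regular pulse. The autocorrelation approach and the 0.5 threshold are my illustrative assumptions, not Guardians.ai's methodology:

```python
# Toy "heartbeat" check: if a hashtag spikes on a regular cycle, the daily
# count correlates strongly with itself shifted by roughly one period.
# Everything here (data, lags, threshold) is illustrative, not from the report.
import numpy as np

def autocorr(series: np.ndarray, lag: int) -> float:
    """Pearson correlation of the series with itself, shifted by `lag` days."""
    return float(np.corrcoef(series[:-lag], series[lag:])[0, 1])

# Synthetic stand-in for daily #voterfraud tweet counts: background noise
# plus a big spike roughly every 30 days.
rng = np.random.default_rng(0)
daily_counts = rng.poisson(20, 180).astype(float)
daily_counts[::30] += 500

# Scan plausible periods; a strong peak suggests a regular pulse.
for lag in range(7, 60):
    r = autocorr(daily_counts, lag)
    if r > 0.5:
        print(f"possible heartbeat: every ~{lag} days (autocorrelation {r:.2f})")
```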
From there, they started digging into the accounts involved in this particular activity. And they found a very noticeable pattern:
We wanted to know how these accounts were coming onto Twitter and gaining mentions at such a high velocity — what was leading accounts to gain influence, so quickly? So we took a sample set of accounts from a group of suspicious Voter Fraud accounts and started looking at their activity day-by-day, starting at day one. What we began to notice is a pattern for how the influence machine might be working, and how coordination could be happening.
Here's the consistent network pattern we saw:
- User signs up for an account.
- User starts replying to multiple accounts—some known verified Twitter users and many other accounts that are also on our list of actors, or that fit a similar profile.
- The replies tend to contain: text, memes, hashtags, and @mentions of other accounts, building on common themes.
- At some point the pattern shifts from being all replies to original tweets. Those original tweets contain the same types of content as their replies do.
- It appears that this pattern cycles and repeats when the next batch of new accounts come online. The next batch starts replying to the existing, newly influential accounts, and carry on with the same sequence of events for gaining influence.
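Guardians.ai hasn't published detection code, but to make that lifecycle concrete, here's a minimal sketch of how one might flag an account whose timeline follows the replies-first, originals-later arc described above. The `Tweet` structure and every threshold here are hypothetical, chosen purely for illustration:

```python
# Illustrative sketch, not the report's methodology. Suppose each account's
# timeline is a chronological list of tweets tagged as replies or originals;
# the lifecycle above would show up as a reply-heavy opening phase followed
# by an original-heavy phase. The cutoffs below are arbitrary guesses.
from dataclasses import dataclass

@dataclass
class Tweet:
    is_reply: bool  # True if the tweet replies to another account

def fits_lifecycle(timeline: list[Tweet],
                   opening: int = 50,
                   early_reply_share: float = 0.8,
                   late_original_share: float = 0.6) -> bool:
    """Flag accounts whose first `opening` tweets are mostly replies and whose
    later tweets are mostly originals -- the sign-up -> reply -> original arc."""
    if len(timeline) <= opening:
        return False
    early, late = timeline[:opening], timeline[opening:]
    early_replies = sum(t.is_reply for t in early) / len(early)
    late_originals = sum(not t.is_reply for t in late) / len(late)
    return early_replies >= early_reply_share and late_originals >= late_original_share

# Toy usage: 60 replies followed by 40 original tweets fits the arc.
timeline = [Tweet(is_reply=True)] * 60 + [Tweet(is_reply=False)] * 40
print(fits_lifecycle(timeline))  # True
```

In practice, of course, you'd also want to look at who the replies target and whether the same batch of accounts keeps reappearing, which is where the report's network analysis comes in.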
The report highlights this pattern with a few example accounts, though the full study looked at (and continues to look at) many, many more. What you see over and over again are Twitter feeds of people who seem to do little other than constantly tweet pro-Trump memes and disinformation, and yet magically rack up thousands or even tens of thousands of retweets, often coming out of nowhere. Here's one example:
The gray line at the bottom is the number of tweets. The black line is the number of mentions from others. Notice how it goes from nothing to around 10,000 in no time? Sometimes the accounts are more or less dormant for a while before suddenly becoming massively popular for no clear reason at all:
Again, as the report makes clear, these aren't necessarily bots (though they may be). They aren't necessarily even aware that they're a part of something. But the patterns seen over and over and over and over again are uncanny. And it certainly provides strong circumstantial evidence of some sort of influence operation -- and it's one that appears to continue to grow and grow.
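Those nothing-to-10,000 jumps are, at least, easy to test for mechanically. As a purely illustrative sketch (again with invented numbers and an arbitrary threshold, not anything from the report), one could compare each day's mention count against the account's own recent baseline and flag implausibly large jumps:

```python
# Crude "came out of nowhere" detector: flag days where mentions exceed a
# large multiple of the account's own trailing average. All numbers invented.
import numpy as np

def sudden_surge(mentions: np.ndarray, window: int = 14, factor: float = 50.0) -> list[int]:
    """Return day indices where mentions exceed `factor` times the trailing
    `window`-day average."""
    surges = []
    for day in range(window, len(mentions)):
        baseline = mentions[day - window:day].mean() + 1.0  # +1 avoids /0 for dormant accounts
        if mentions[day] / baseline > factor:
            surges.append(day)
    return surges

# Toy account: dormant for 90 days, then ~10,000 mentions a day.
mentions = np.concatenate([np.zeros(90), np.full(30, 10_000.0)])
print(sudden_surge(mentions))  # [90] -- the first day of the surge
```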
As the report notes:
We don’t know why this activity is occurring, or who is behind it. However, the best we can do is look at the data around what’s actually happening. What we've discovered along the way is that there are overlapping patterns of behavior, demonstrating some form of coordination.
We think it's possible that some of these accounts don't realize that they're coordinating or part of a larger influence network. For example, one of these sample accounts might genuinely care about Voter Fraud. A bad actor, coordinating large numbers of accounts could find this person’s tweets useful, then amplify those tweets through thousands of @mentions and replies.
By focusing on the hard data around coordination, we can better understand how public conversations are being distorted and how it affects society. Whatever your views are on Voter Fraud, these accounts and the accounts that amplify them are rapidly accelerating their activity in the lead-up to Election Day.
Similarly, of course, it's not clear that this is actually having any impact on anyone's views. But it's at least worth looking at what happens when there is what appears to be massively coordinated activity, mostly focused on spreading disinformation regarding the election and more. The full methodology of the report is available on the site, as are the names of 200 of the accounts studied.
What's striking, of course, is the sheer size of what's happening, and the level of coordination necessary to make it happen. Twitter's response to the report (as noted in the Bloomberg article) is pretty much what you'd expect Twitter to say:
“While we prohibit coordinated malicious behavior, and enforce accordingly, we’ve also seen real people who share the same views organize using Twitter,” the company’s statement said. “This report effectively captures what often happens when hot button issues gain attention and traction in active groups.”
Indeed, that's part of what's so tricky here. Could this kind of thing happen organically? Well, certainly much of it can. Lots of people who share the same views on any particular subject will often see surges in conversations around those topics, including lots of retweets, mentions and replies. But the pattern here definitely looks different. When these things happen organically, they tend to have a fairly different rhythm: either a lot more sustained, or with spikes that are much more spread out and explainable (e.g., there was some news event tied to the topic). Similarly, it is hard to see how so many pseudonymous people, whom no one else really knows, all magically jump up to thousands or tens of thousands of mentions with no clear explanation for their sudden and sustained fame.
But this is also why Twitter is put in an impossible position if it's expected to spot all of this. Even with so much evidence, it's still possible that what Guardians.ai spotted formed organically. It may seem unlikely, but how can you tell? And you can bet that there are some with less than virtuous intent who are actively figuring out ways to make all of this activity look increasingly organic. Expecting Twitter, or any company, to always magically determine what is and what is not "authentic" behavior online is asking the impossible. And the very fact that it might sweep up some perfectly innocuous accounts in the process also makes it troubling to expect that the platform should be in charge of sorting out who's who and who's real in these kinds of situations. But, then again, if these kinds of disinformation campaigns truly are having an impact on influencing the public, that too should be a concern. Either way, as the report highlights, there is still much work to be done in analyzing how social networks are being used to influence the public.
Filed Under: bots, campaigns, coordination, influence, politics, social media
Companies: twitter