Can A Community Approach To Disinformation Help Twitter?
from the experiments dept
A few weeks ago Twitter announced Birdwatch as a new experimental approach to dealing with disinformation on its platform. Obviously, disinformation is a huge challenge online, and one that doesn't have any easy answers. Too many people seem to think that you can just "ban disinformation" without recognizing that everyone has a different definition of what is, and what is not, disinformation. It's easy to claim that you would know it when you see it, but it's much harder to put in place rules that can be applied consistently by a large team of people dealing with hundreds of millions of pieces of content every day.
Facebook has tried things like partnering with fact checkers, but most companies just put their own rules in place and try to stick with them. Birdwatch, on the other hand, is an attempt to use the community to help. In some ways it's taking a page from (1) what Twitter does best (enabling lots of people to weigh in on any particular subject), and (2) Wikipedia, which has always had a community-as-moderators setup.
As Twitter's announcement puts it: Birdwatch allows people to identify information in Tweets they believe is misleading and write notes that provide informative context. We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable. Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.
In this first phase of the pilot, notes will only be visible on a separate Birdwatch site. On this site, pilot participants can also rate the helpfulness of notes added by other contributors. These notes are being intentionally kept separate from Twitter for now, while we build Birdwatch and gain confidence that it produces context people find helpful and appropriate. Additionally, notes will not have an effect on the way people see Tweets or our system recommendations.
Will this work? There are many, many reasons why it might not. Wikipedia itself has spent years dealing with these kinds of questions, and has had to build a shared culture along with informal and formal rules about what kind of content belongs on the site. It's a lot harder to retrofit that kind of thinking onto a platform like Twitter, where pretty much anything goes. There is also, of course, the risk of brigading and mobs -- whereby a crew of people might attack a certain tweet or type of information with the goal of getting accurate information declared "fake news" or something along those lines.
Twitter, I'm sure, recognizes these challenges. The details of how Birdwatch is set up certainly suggest that the company is going to watch and iterate as it goes, and it recognizes that if it can get this right, it could be quite useful. That's why, even if there's a high risk of failure, I still think it's an interesting and worthwhile experiment.
Some of the initial results, however... don't look great. A bunch of clueless Trumpists have been trying to minimize the traumatic experience that Alexandria Ocasio-Cortez recently described from the insurrection at the Capitol on January 6th. Because these foolish people don't understand that the Capitol complex is a set of interconnected buildings, they are arguing that AOC was "lying" when she talked about the fear she felt while initially hiding in her office during the raid -- since her office is in the connected Cannon Building, and not in the domed part of the Capitol complex. It turned out that some of the fear came from a Capitol police officer yelling "where is she?" and barging into the office. AOC, not realizing at the time that it was a Capitol police officer, spoke movingly about how afraid she was that it was an insurrectionist.
After they started making this argument on social media, AOC responded, pointing out that the entire Capitol complex was under attack (and even if it wasn't, being in a building across the street from a riotous mob that clearly wouldn't mind killing you is a perfectly good reason to be afraid). She also mentioned the two pipe bombs that were found near the Capitol, not far from the Congressional office buildings.
However, if you go to Birdwatch, it shows a bunch of disingenuous people trying to present AOC's statements as disinformation.
Of course, this shows exactly the problem with trying to deal with "disinformation." The label is often used as a weapon against people you disagree with, by nitpicking or arguing technicalities rather than engaging with the actual point.
I am hopeful that this experiment gets better at handling these situations, but I recognize the huge difficulty in doing this with any sort of consistency at scale, when you're always going to be dealing with disingenuous and dishonest actors trying to game the system to their own advantage.
Filed Under: birdwatch, content moderation, crowdsourcing, disinformation, misinformation
Companies: twitter