On social media, stories are currency and attention is the prize. Companies that want to get noticed on these platforms are tasked with crafting engaging content that slows down the viewer’s scroll and gets them to seriously consider taking action.
One problem with this dynamic is that it leaves a wide opening for misinformation to spread quickly. Some studies show that disinformation actually circulates faster than truth on certain platforms: “The top 1 percent of news ‘cascades’ – the researchers’ word for widely spread tweets – reached between 1,000 and 100,000 people, while true information rarely reached more than 1,000 people.”
While this trend is alarming and we’ve seen many negative consequences stem from this problem, it’s important to give ourselves some context for this phenomenon.
I have often seen misinformation spread in my own feed. Sometimes the poster shares it believing it to be true, but more often I see people sharing false information along with commentary about what is wrong in the story. When people in my feed do share something false, or when I have inadvertently shared something false myself, I see friends and acquaintances correcting the wrong impression and posting links in the comments to give readers better information.

Elon Musk regularly shares alarmingly counterfactual information without apology or correction. These interactions end up in the news because the Twitter/X community corrects him and argues with him about his views.
This puts news organizations in a tight spot – trying to determine whether it’s more prosocial to report on the story and make the public aware of the bad, odd, or ignorant behavior, or whether reporting on these interactions actually increases the legitimacy of the original source of bad information. The human ability to parse lies from truth is patchy and inconsistent at best.
A small DC paper ran a story about a conspiracy theory called Pizzagate in late 2016. At the time, the false story was everywhere on social media: people claimed that Democratic politicians were abusing children in the basement of a pizza place called Comet Ping Pong in Washington, DC. People were taking photos of children at the restaurant, posting them on Twitter, Instagram, Facebook, and Reddit, and spinning more and more fantastic stories about what was happening behind the scenes at this family restaurant.
The article in the Washington City Paper seemed only to give the story momentum, and it got to the point where the owner of Comet Ping Pong hired an attorney to help his customers and staff get photos of themselves and their children taken down from social media platforms.

Sadly, the outcome in this case was that a gunman entered the restaurant in early December 2016 and fired his AR-15 inside. He claimed he was investigating crimes he believed had occurred there.
The truth is that these situations are really impossible to handle perfectly. When mob action can be spurred in individual homes all across the country and even the world, we need new tools and possibly new patience for each other as we figure out, communally, how we deal with an evolving pattern of radicalization and acting out that can’t be blamed on any one tool or issue.
In heartening news, and hopefully in part as a reaction to events like the violence at Comet Ping Pong, one study shows that Reddit users are quite good at downvoting misinformation and giving more attention to factual accounts:
“Regardless of veracity, fact-checked posts had larger and longer lasting conversations than claims that were not fact-checked. Among those that were fact-checked, posts rated as false were discussed less and for shorter periods of time than claims that were rated as true. We also observe that fact-checks of posts rated as false tend to happen more quickly than fact-checks of posts rated as true.”
The user pushback on Twitter and the mass exodus from that platform as it fills up with racist, misogynist, and otherwise bigoted content are both good signs that our societal ability to deal with bad actors on our platforms may already be growing stronger.

