In the wake of pandemic conspiracy theories and false information about coronavirus cures, Twitter announced in a May 11 blog post that it will begin warning users when Tweets contain misinformation about the coronavirus. The company plans to carefully review user posts and label those it deems to contain misleading information, disputed claims, or unverified claims, even when they come from world leaders.
Along with Twitter, Google, Facebook, and YouTube have all been instituting policies to help them better respond to the spread of coronavirus misinformation on their platforms. They've focused on posts that counter the instructions of public health officials, promote fabricated miracle cures for the virus, or incite "mass violence or civil unrest." And this isn't the first time Twitter has introduced rules around the spread of misinformation on its platform: the company has been removing posts it considers harmful since mid-March.
But Twitter will not simply remove every Tweet its team deems inaccurate. Doing so would make the platform a gatekeeper of what counts as "accurate" or "inaccurate," a role the company told The New York Times it has no interest in taking on. Instead, Twitter will first determine the category of misinformation and its propensity for harm, then decide whether a Tweet gets a warning label, is removed, or is left alone.
Twitter defines misleading information as statements that experts in the field have confirmed to be false. According to Twitter's blog post, Tweets with misleading information could result in either a label or removal.
When a misleading post gets labeled, it will also carry a link that redirects users to more information about the virus. This category would include Tweets suggesting that older or immunocompromised people are the only ones who may have severe cases of COVID-19. The World Health Organization (WHO) has disputed that claim, but it doesn't state anything that would put people in immediate danger.
Misleading posts that would be removed include ones that suggest that wearing a mask doesn't stop the spread of coronavirus — a claim that directly counters the advice of the WHO and could put people at risk of exposure, should they listen to it.
Disputed claims include posts carrying information whose credibility can be contested or is simply unknown. Posts suggesting that rinsing your nose with saline can rid the body of the virus, or that the virus is killed by cold weather or climates, fit this category. These posts don't necessarily carry a high risk of harm, so Twitter would apply a label to them with links to more accurate information.
If the disputed claim is more dangerous, such as claims that taking extremely hot baths can eliminate the virus (carrying possible consequences of people burning themselves), it would receive a warning label that reads: "Some or all of the content shared in this tweet conflict with guidance from public health experts regarding COVID-19."
Things get a little tricky when it comes to unverified claims. According to Twitter's blog post, an unverified claim includes "information (which could be true or false) that is unconfirmed at the time it is shared." Whether these claims have a moderate or severe propensity for harm, Twitter will not take any action for now. But the company does note that it is working to eventually introduce new labels that provide better context around Tweets with unverified claims.
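Taken together, the policy described above amounts to a small decision matrix: the category of the claim plus its propensity for harm determines the action. As a rough sketch of that logic (the function name, category strings, and action strings here are illustrative, not Twitter's actual implementation):

```python
def moderation_action(category: str, severe_harm: bool) -> str:
    """Sketch of the decision matrix described in Twitter's policy:
    category of misinformation + propensity for harm -> action taken.
    Names and return values are hypothetical, for illustration only."""
    if category == "misleading":
        # Confirmed-false claims: label, or remove if dangerous to follow
        return "remove" if severe_harm else "label"
    if category == "disputed":
        # Contested/unknown credibility: label, or warning if dangerous
        return "warning" if severe_harm else "label"
    if category == "unverified":
        # Unconfirmed claims get no action for now, per the blog post
        return "no action"
    raise ValueError(f"unknown category: {category}")
```

For example, a Tweet claiming masks don't stop the spread (misleading, severe harm) would map to removal, while a saline-rinse claim (disputed, lower harm) would map to a label.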
Gauging New Information As It Comes
Since the global outbreak of COVID-19, guidelines and information about the virus, from wearing masks to recognizing symptoms, are constantly being updated as public health experts continue to learn more about the novel coronavirus. While social platforms are adding new features and policies to fight the influx of misinformation, it's more important than ever to stay informed through reliable sources.