Should social media platforms remove content they don't consider factual?
The First Amendment states, 'Congress shall make no law … abridging the freedom of speech or of the press.' As the coronavirus continues to dominate the headlines, social media outlets like Facebook and YouTube have announced that they will ban postings they decide are not factual. This could bring social media censorship to a whole new level.
Terms like 'factual' or 'unsubstantiated' can be a judgment call, and even smart people with the best of intentions can make mistakes. Recent examples from the coronavirus response illustrate the problem: On one hand, the CDC 'recommends that everyone—sick or healthy—wear a cloth face mask in places … like grocery stores.' On the other hand, a senior WHO executive stated, 'There is no specific evidence to suggest that the wearing of masks by the mass population has any potential benefit.' Unfortunately, people can look at the same facts and draw different conclusions.
Publishing the truth (as they saw it) helped America's Founding Fathers win freedom for the country, and the presidential election in 1800 proved that they even valued an utterly out-of-control press. They trusted the American people to separate truth from fiction. They probably never guessed that every single American might someday publish through social media, but as Thomas Jefferson wrote, 'If a nation expects to be ignorant and free … it expects what never was and never will be … where the press is free … all is safe.'
It isn't about truth or error; it's about control. If you never hear opposing views, you can never oppose the views you hear.
False content is a huge issue. It tends to mislead readers, spread harmful or hateful ideas, and sway people's opinions. A 2017 study revealed that 9 out of 10 Americans don't fact-check news that they read on social media. Another study showed that false information spreads faster than the truth.
Some claim that during the 2016 US elections, several sponsored advertisements running against specific individuals and party stances spread disinformation. In an effort to preserve the trust of their users, social media sites have started to abandon potential revenue from these ads in favor of limiting the spread of false news.
Trust is an important theme, as a 2018 survey revealed that 29% of respondents think that social media sites are the main culprit in the spread of fake news. More alarmingly, 69% of individuals in the same survey believe that social media platforms are not doing enough to counter fake news.
It's vital to note that Facebook has used third-party fact-checkers since 2016. Once users flag a post, a fact-checker reviews it and may determine that it is indeed false, in which case the post's reach is 'downgraded.'
However, is downgrading a post's reach enough? The limited number of users a downgraded post still reaches may share it through other mediums such as WhatsApp, rendering the company's efforts vain. For this reason, social media sites should remove content they don't think is factual, restricting the spread of false information completely.
- Lawsuits alleging that social media companies have violated the First Amendment by removing content have failed in the past because courts ruled that the First Amendment applies to state action, not to private companies.
- Disinformation is defined as, “...false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth.”
- A 2018 Media Research Center/McLaughlin & Associates poll revealed that 66% of respondents felt that “...they do not trust Facebook to treat all of its users equally regardless of their political beliefs.”
- During the COVID-19 pandemic, Facebook has relied more heavily on artificial intelligence to detect content that violates its community standards, because human moderators are not legally allowed to access sensitive Facebook data from home computers.