A Blindingly Obvious Solution to the Fake News Dilemma

Facebook and Twitter treat their users like children who need parental control software to protect them from scary content on the Internet. There’s a better solution.

Steve McConnell
6 min read · Jan 22, 2021


Like most Americans, I am appalled by the hate speech, extremist messages, foreign disinformation, and fake news posted on Twitter, Facebook, and other social media sites. The proliferation of false and dangerous content is a huge problem.

But the responses from Twitter, Facebook, and other social media companies have been equally appalling. They have implemented the equivalent of parental control software that you can’t turn off and can’t uninstall. The “parents” controlling the software are 20-somethings whose life experience consists of playing Call of Duty and binge-watching The Office.

Some of us are pretty sure we can judge what’s safe for ourselves at least as well as the parental control czars at Twitter and Facebook.

Fortunately, there’s an easy and — frankly — shockingly obvious solution to the problem of how to prevent wildfire dissemination of questionable content while shifting the filtering to parties that are unarguably neutral. The solution: move the decisions about what to filter and how to filter it to you and me.

Give Users Control Over Content Filtering

The first step is to take some of the power out of the hands of Twitter and Facebook and put decision making into your hands.

The following discussion uses examples from Twitter, but the same user control settings would work on Facebook and other social media platforms.

The figure below shows settings in Twitter that would give users control over content filtering. You get to select what kind of content is filtered, how the filtered content is handled, and whose filter is used — Twitter’s filter or someone else’s.

[Figure: proposed Twitter settings that give users control over what is filtered, how filtered content is handled, and whose filter is used.]

People have different sensitivities to objectionable content, so the “What to filter” settings would let you decide what types of content are filtered. These settings put the power to determine what is seen and what is not into your hands, not Twitter’s.

After content is flagged as objectionable, people also have different ideas about how they want that content to be handled. You should be able to choose any of the following, as sketched in the code below:

- Label it: see the content behind a warning label that says “filtered content.”
- Cover it: hide the content entirely, but uncover it if you want to.
- Block it: remove the content from your feed, as is done today.
- Show it: see the content without any special treatment at all.
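To make this concrete, here is a minimal sketch, in TypeScript, of what such per-user settings might look like as data. Every name in it (ContentCategory, FilterAction, UserFilterSettings) is a hypothetical illustration of the proposal, not an actual Twitter API.

```typescript
// Hypothetical model of per-user content-filter settings.
// None of these names correspond to a real Twitter API.

// "What to filter": the kinds of content the user wants screened.
type ContentCategory = "hate_speech" | "extremism" | "disinformation" | "fake_news";

// "How to handle it": the four options described above.
type FilterAction =
  | "label"   // show the content behind a "filtered content" warning label
  | "cover"   // hide the content, with a click-to-uncover option
  | "block"   // remove the content from the feed entirely (today's behavior)
  | "show";   // display the content with no special treatment

// "Whose filter": one or more filter providers chosen by the user.
interface UserFilterSettings {
  categoriesToFilter: ContentCategory[];
  action: FilterAction;
  filterProviders: string[]; // e.g., ["twitter"], ["snopes", "politifact"]
}

// Example: label, but do not hide, anything flagged as disinformation or fake news.
const mySettings: UserFilterSettings = {
  categoriesToFilter: ["disinformation", "fake_news"],
  action: "label",
  filterProviders: ["twitter"],
};
```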

Twitter CEO Jack Dorsey was quoted as saying, “Our role is to protect the integrity of that conversation and do what we can to make sure that no one is being harmed based off that.” No. Twitter’s role is to provide the town square where people meet and talk; it is not to limit what people can say — or hear — in that town square.

Do you really want 20-somethings deciding what’s safe for you to see? Don’t get me wrong: I love 20-somethings. I even used to be one myself. But I wouldn’t put my 20-something self in charge of deciding what my older self can see. You should be the one who determines how much you want to be protected and who provides that protection.

Advantages of Polyculture Filtering

When megacompanies holding de facto monopolies over their market niches exercise sole control over content filtering, there’s no room for diverse perspectives, competition in filtering technologies, or the improvements that come from a competitive playing field.

To support a diversity of perspectives and more competition, you should be able to choose Twitter’s filter or a filter from another company. Conceptually, you could choose a filter from Snopes, Politifact, WaPo, MSNBC, FoxNews, or all of the above. You could also choose a crowdsourced filter, maybe even one from the Wikimedia Foundation.

For real diversity, you should be able to choose combinations of filters, and further choose what happens when the filters disagree. If you want to screen out anything that might be fake, you could filter news whenever either MSNBC or FoxNews flags it as questionable.

Alternatively, if you want to err on the side of not filtering very much, you could filter news only when MSNBC and FoxNews both identify it as questionable (which could end up filtering nothing at all!).
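Continuing the hypothetical sketch above, resolving disagreement between filters reduces to choosing an “any” rule or an “all” rule over the providers’ verdicts. The names here are again illustrative, not a real API:

```typescript
// Hypothetical verdict from a single filter provider.
interface FilterVerdict {
  provider: string;      // e.g., "msnbc" or "foxnews"
  questionable: boolean; // did this provider flag the item?
}

// "any": aggressive -- filter if at least one chosen provider flags the item.
// "all": permissive -- filter only when every chosen provider agrees.
type CombinationRule = "any" | "all";

function shouldFilter(verdicts: FilterVerdict[], rule: CombinationRule): boolean {
  return rule === "any"
    ? verdicts.some(v => v.questionable)
    : verdicts.every(v => v.questionable);
}

// Example: MSNBC flags an item as questionable; FoxNews does not.
const verdicts: FilterVerdict[] = [
  { provider: "msnbc", questionable: true },
  { provider: "foxnews", questionable: false },
];
console.log(shouldFilter(verdicts, "any")); // true:  filtered under the aggressive rule
console.log(shouldFilter(verdicts, "all")); // false: shown under the permissive rule
```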

The overriding principle is to decentralize the decisions about which content is acceptable. Having only one or two companies make all the decisions is like having a monoculture crop that gets completely wiped out the first time a new strain of grasshoppers appears. Polyculture crops are more resistant to attacks. Part of the crop still gets wiped out, but most does not.

What Twitter and Facebook offer now is monoculture information filtering that is susceptible to single lines of attack from foreign powers or other bad actors, or simply to slip-ups by their internal staff. Polyculture information filtering is more effective against disinformation and fake news because it’s more diverse; it can’t be defeated by a single line of attack.

We Have Been Down This Road Before — Many Times — and the Answer Has Always Been the Same

The tech world has dealt with the tradeoff between individual freedom and safety numerous times, and it has decided the issue the same way each time.

In the late 1990s, the Justice Department sued Microsoft because bundling Internet Explorer with Windows gave one company monopolistic influence over the browser market. Separating Windows from Internet Explorer opened up room for Chrome, Firefox, and other browsers to prosper. The operating system platform (Windows) and the information appliance (the browser) needed to be separable, so that different companies could provide each.

The pattern repeated with the browser. For Chrome users, the Google search engine seems natural because Chrome is a Google product. However, Chrome allows you to select the search engine of your choice: Google, Bing, Yahoo!, and so on. The internet platform (the browser) and the information exploration tool (the search engine) did not need to be tightly coupled — and should not be. Users need to be able to select their browsers and search engines separately.

Even within the search engine, users are allowed to set the level of safety they feel comfortable with, including turning off most or all safeguards altogether. Google or Bing can recommend a level of safeguards, but individual users make the final decision. The parental control option is there, but you get to be the parent.

I find extremists’ messages appalling, but Twitter executives who believe they need to do my critical thinking for me are just as bad.

Microsoft separated Internet Explorer from Windows, and Google separated Chrome from Google search. Twitter and Facebook need to separate their social media platforms from their social media content filtering. Their users will be happier, and the world will be a better place.


Steve McConnell

Author of Code Complete and More Effective Agile, CEO at Construx Software, Dog Walker, Motorcyclist, Cinephile, DIYer, Rotarian. See stevemcconnell.com.