From Fahrenheit 451 to ‘censortech’

Fahrenheit 451 is the temperature at which book-paper catches fire and burns. It symbolises the decay of democracy and human rights because there is nothing more indicative of totalitarianism than the suppression of “dangerous ideas” through the act of burning books.

Censortech, as we would like to coin it, is the skilful art of deploying algorithms to suppress “fake news”. Could it be we are at the stage that this tech is becoming the modern-day equivalent of book burning? (We must be careful to differentiate censortech from Censortec, which is — we kid you not — a company specialising in “herd management systems”. Of the bovine variety though.)

These messages are largely a result of the content-policing strategies adopted by major social media platforms in the wake of the fake-news election-interference fiasco of 2016. Back then, the platforms were deemed guilty of behaving like neutral conduits instead of editorial outlets whose job it is to filter, prioritise, scrutinise and fact-check the news. In response, platforms introduced everything from editorial oversight boards to human adjudicators, in order to help neutralise content and make it more acceptable to consensus thinkers*. Never again would inaccurate (even if financially motivated) politicised content be allowed to destroy democracy.

With the coronavirus, however, it seems some of these measures have been extended to Soviet-style mastheads reminding the population that official government advice is the only pathway to truth. (Regardless of how convoluted, contradictory or unclear that advice and information is. WAR IS PEACE. FREEDOM IS SLAVERY. IGNORANCE IS STRENGTH. GO TO WORK, DON’T GO TO WORK. GO OUTSIDE, DON’T GO OUTSIDE.)…

In that vein, in a bid to do its bit for fake-news prevention, WhatsApp (aka Facebook) decided a month ago it would limit the forwarding of messages that were being heavily shared — or going viral — to slow the dissemination of fake news.

In such circumstances users are limited to sharing the disputed content with only one other person at a time (what you might call the digital equivalent of reducing the reproduction number to below 1). An algorithm does the work.
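WhatsApp has not published how this works, but the logic can be sketched as a simple cap on recipients that tightens once a message has been forwarded many times. Everything below — the function name, the thresholds, the counts — is invented for illustration; only the one-chat-at-a-time rule for heavily forwarded messages is from the reporting above.

```python
# Hypothetical sketch of a viral-forward limiter. The actual WhatsApp
# implementation is not public; names and thresholds here are invented.

VIRAL_THRESHOLD = 5  # forwards before a message counts as "highly forwarded"
NORMAL_CAP = 5       # ordinary cap on chats per forwarding action
VIRAL_CAP = 1        # viral content: one chat at a time, pushing R below 1

def max_recipients(forward_count: int) -> int:
    """Return how many chats a message may be forwarded to in one action."""
    if forward_count >= VIRAL_THRESHOLD:
        return VIRAL_CAP
    return NORMAL_CAP

print(max_recipients(2))  # a lightly shared message keeps the normal cap
print(max_recipients(9))  # a viral message is throttled to one chat
```

The epidemiological analogy holds in miniature: if each holder of a viral message can pass it to at most one new person, and not everyone forwards, the message's effective reproduction number drops below 1 and the cascade dies out.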

But when an algorithm such as this ends up flagging a mainstream media clip featuring a dissenting yet extremely credible voice, just because it says something critical about western responses to the virus, it’s clear we may have gone way too far the other way in our battle against fake news.