Patrick Collison, writing on Twitter:
These platforms have tough jobs, no doubt. But I’m worried that the embrace of “misinformation” as a newly illegitimate category may have costs that are considerably greater than what’s apparent at the outset.
It’s dangerous for platforms to categorize content as “misinformation”, label it as such, and/or suppress its reach. What if they get it wrong? What if a commonly held opinion is the exact opposite of the truth and the people who are trying to share the evidence are being suppressed?
Perhaps you trust the current team in charge of classification, but what happens when those members are filtered out and a new group with more nefarious motives takes over?
How can you be sure that you’re getting accurate information when it’s being filtered by a company that’s primarily motivated by “engagement”?