I question the motives behind this recent push to moderate “misinformation” online. I don’t think it’s a game that these companies should be in the business of playing. YouTube, Twitter, Facebook, etc. shouldn’t be in the position of determining what is and isn’t true. It’s one thing to remove categories of content that aren’t allowed on the platform; it’s another thing to remove some content in a given category based on what the platform decides is accurate information.
The onus should be on the viewer/reader to make that determination themselves. It isn’t perfect, for sure, but I haven’t seen anything proposed that’s a net positive by comparison. Everything that’s been tried has unintended consequences and seems to always result in the stifling of speech that is later found to be true.
Many of these companies have also decided to outsource these decisions to third-party fact checkers or to base them on government recommendations, both of which can be abused for malicious gain. And the latter feels a bit too close to infringing on First Amendment rights, since it’s based on government decree.
But YouTube appears to be exploring other options, including the absurd notion of disabling the share button or breaking links to videos.
From a recent piece on their weblog:
Another challenge is the spread of borderline videos outside of YouTube – these are videos that don’t quite cross the line of our policies for removal but that we don’t necessarily want to recommend to people. […]
One possible way to address this is to disable the share button or break the link on videos that we’re already limiting in recommendations. That effectively means you couldn’t embed or link to a borderline video on another site. But we grapple with whether preventing shares may go too far in restricting a viewer’s freedoms. Our systems reduce borderline content in recommendations, but sharing a link is an active choice a person can make, distinct from a more passive action like watching a recommended video.
This sounds like such a terrible idea — the kind of idea that might really hurt them long-term if they use it too aggressively and tick off enough creators along the way. I’m happy to hear that it’s something they’re “grappling” with, but publishing the idea alone is an indication of which way they’re leaning.
But it’s not even clear what exactly they mean by “disable the share button”. Do they simply mean the share button in the YouTube app? If you view the video in a web browser, you could just copy the link from your browser’s address bar, right? Or is that where the idea of breaking the link comes in?
And how would breaking the link even work? Would they just periodically change the URL for the video? It’s not like you could break all existing and future links to the video — the video needs to have a URL in order for it to be viewable in the browser and that URL could be shared. Unless, I guess, they cycled through URLs for a given video so frequently that those links wouldn’t resolve by the time someone clicked on them. But at that point, isn’t the video functionally removed anyway? So what’s the point?
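To make the objection concrete, here’s a rough sketch of the only mechanism I can imagine fitting that description: the platform hands out short-lived, signed URLs and refuses to resolve stale ones. To be clear, this is pure speculation on my part — the domain, the rotation window, and the signing scheme below are all made up for illustration, not anything YouTube has described.

```python
# Hypothetical sketch of "breaking the link" via rotating, signed URLs.
# Nothing here reflects YouTube's actual implementation; the secret,
# rotation window, and URLs are invented for illustration only.
import hashlib
import hmac
import time

SECRET = b"server-side-secret"   # hypothetical signing key
ROTATION_WINDOW = 60 * 60        # links go stale after an hour (assumption)

def current_token(video_id: str, now: float | None = None) -> str:
    """Derive a URL token that changes every ROTATION_WINDOW seconds."""
    epoch = int((now if now is not None else time.time()) // ROTATION_WINDOW)
    msg = f"{video_id}:{epoch}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]

def share_url(video_id: str) -> str:
    """The URL handed out right now; it embeds the current rotating token."""
    return f"https://video.example/watch/{video_id}/{current_token(video_id)}"

def resolve(video_id: str, token: str) -> bool:
    """A link only resolves if its token matches the current window."""
    return hmac.compare_digest(token, current_token(video_id))

if __name__ == "__main__":
    url = share_url("abc123")
    print(url)                                          # works right now...
    print(resolve("abc123", url.rsplit("/", 1)[-1]))    # True
    # ...but once the window rolls over, the same token stops resolving,
    # which is exactly the "functionally removed" problem described above.
```

Even in this generous reading, the scheme only “works” by making every previously shared link die within the hour, which just restates my point: at that point the video is effectively unshareable, and you may as well have removed it.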
Taking a step back to look at the bigger picture, I think Matt Baer has a good take on this:
Seems to me that a space made up of humans is always going to have (very human) lying and deception, and the spread of misinformation in the form of simply not having all the facts straight. It’s a fact of life, and one you can never totally design or regulate out of existence.
I think the closest “solution” to misinformation (incidental) and disinformation (intentional) online is always going to be a widespread understanding that, as a user, you should be inherently skeptical of what you see and hear digitally.
During my teenage years, the advice of “don’t believe everything you read on the internet” was repeated ad nauseam — from teachers, parents, relatives, and so on. Yet somehow, we managed to forget that. We would all be wise to move forward with a healthier dose of skepticism by default.