Tag Archive for ‘Censorship’

Stop France From Forcing Browsers to Censor Websites ➝

Mozilla:

The French government is working on a law that could threaten the free internet. The so-called SREN bill (‘Projet de loi visant à sécuriser et réguler l’espace numérique’) would require web browsers – like Mozilla’s Firefox – to block websites in the browsers themselves. It would set a dangerous precedent, providing a playbook for other governments to also turn browsers like Firefox into censorship tools.

I’d like to see a lot more of this version of Mozilla and a lot less of the other.

➝ Source: foundation.mozilla.org

DHS Pauses Disinformation Governance Board ➝

Tom Parker, writing for Reclaim the Net:

After being threatened with legal action, accused of being a violation of the First Amendment, and facing mass condemnation, the US Department of Homeland Security’s (DHS’s) “Disinformation Governance Board” has been paused and its head, Nina Jankowicz, has resigned.

The board was an absolutely abhorrent idea and I expect it would have been dismantled in the courts if it ever got off the ground. Hopefully this “pause” is permanent.

➝ Source: reclaimthenet.org

Department of Homeland Security Introduces ‘Disinformation Governance Board’ ➝

Dan Frieth, writing for Reclaim the Net:

DHS Secretary Alejandro Mayorkas announced the formation of the board to the House Appropriations DHS Subcommittee on Wednesday, saying: “Our Undersecretary for Policy, Rob Silvers is co-chair with our Principal Deputy General Counsel, Jennifer Gaskell, in leading a just recently constituted misinformation disinformation governance board. So we’re bringing – the goal is to bring the resources of the department together to address this threat.”

Mayorkas said that the new board would fall under the Biden administration’s Center for Prevention Programs and Partnerships, that it would have no authority to crack down on disinformation directly, and that it would instead funnel funds to various causes it thinks are impacted by disinformation.

Nina Jankowicz, the woman named as head of the board, made public statements in 2020 indicating that she “would never want to see the executive branch have these sorts of powers.” And yet, here we are.

We should never give our government powers that we wouldn’t want our opposition party to have access to. I always try to think of government initiatives through that lens because your preferred representative isn’t always going to win elections.

The bottom line is, this is an absolutely awful idea. And it feels like a not-so-thinly-veiled attempt to subvert the First Amendment.

➝ Source: reclaimthenet.org

DuckDuckGo Down-Ranking Sites ‘Associated With Russian Disinformation’ ➝

Tom Parker, writing for Reclaim the Net:

The founder of DuckDuckGo, a Google-alternative search engine that has touted its “unbiased” search results for years, has announced that it has started down-ranking sites based on whether they’re deemed to be associated with Russian disinformation.

This piece does a pretty good job of pointing out the concern with this change, where most others have missed the point. We want search engines to rank results based on relevance, and that can be determined by numerous factors. That’s literally what search engines do. But DuckDuckGo is making a determination as to whether or not a piece of content is “disinformation” and then down-ranking it based on that. That’s an editorial decision.

And what if they’re wrong? What if they down-rank content that is later found to be true? What if someone is specifically looking for “disinformation” content for research purposes — to see what the opposing perspective has to say in order to better form their opinions or to point at the absurdity?

While this certainly isn’t the end of DuckDuckGo, I wouldn’t consider it to be a positive change. This kind of editorializing is one of the reasons (along with privacy concerns and a dislike of the centralization of power and influence) that I moved away from Google so many years ago. I would prefer to make my own decisions on these types of matters.

I don’t use DuckDuckGo directly anymore; that changed last year when I started self-hosting SearX. But I still use DuckDuckGo as one of the search engines powering SearX’s results. I don’t expect this news will change that, because the results I get are still much better with DuckDuckGo included than without it.
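For anyone curious what that setup looks like in practice, here’s a minimal sketch of querying a self-hosted SearX instance and restricting the query to the DuckDuckGo engine. The instance URL is a placeholder, and it assumes the JSON output format is enabled in the instance’s settings:

```python
import requests

# Placeholder URL for a self-hosted SearX/SearXNG instance.
SEARX_URL = "https://searx.example.com/search"

params = {
    "q": "open source search engines",
    "format": "json",         # assumes the JSON format is enabled in settings.yml
    "engines": "duckduckgo",  # restrict this query to the DuckDuckGo engine
}

response = requests.get(SEARX_URL, params=params, timeout=10)
response.raise_for_status()

# Print the title and URL of each result returned by the instance.
for result in response.json().get("results", []):
    print(result["title"], "-", result["url"])
```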

I still have my eye on Brave Search, though. 92% of their results across all users come from their own index and they don’t filter results for editorial reasons. I’m not sure if I’m ready to switch to them, but I like a lot of what they’re doing in this space.

➝ Source: reclaimthenet.org

Skepticism by Default

I question the motives behind this recent push to moderate “misinformation” online. I don’t think it’s a game that these companies should be in the business of playing. YouTube, Twitter, Facebook, etc. shouldn’t be in the position of determining what is and isn’t true. It’s one thing to remove categories of content that aren’t allowed on the platform; it’s another thing to remove some content in a given category based on what the platform decides is accurate information.

The onus should be on the viewer/reader to make that determination themselves. It isn’t perfect, for sure, but I haven’t seen anything proposed that’s a net positive by comparison. Everything that’s been tried has unintended consequences and seems to always result in the stifling of speech that is later found to be true.

Many of these companies have also decided to outsource these decisions to third-party fact checkers or to base them on government recommendations, both of which can be abused for malicious gain. The latter feels a bit too close to infringing on First Amendment rights, since it’s based on government decree.

But YouTube appears to be exploring other options, including the absurd notion of disabling the share button or breaking links to videos.

From a recent piece on their weblog:

Another challenge is the spread of borderline videos outside of YouTube – these are videos that don’t quite cross the line of our policies for removal but that we don’t necessarily want to recommend to people. […]

One possible way to address this is to disable the share button or break the link on videos that we’re already limiting in recommendations. That effectively means you couldn’t embed or link to a borderline video on another site. But we grapple with whether preventing shares may go too far in restricting a viewer’s freedoms. Our systems reduce borderline content in recommendations, but sharing a link is an active choice a person can make, distinct from a more passive action like watching a recommended video.

This sounds like such a terrible idea — the kind of idea that might really hurt them long-term if they use it too aggressively and tick off enough creators along the way. I’m happy to hear that it’s something they’re “grappling” with, but publishing the idea alone is an indication of which way they’re leaning.

But it’s not even clear what exactly they mean by “disable the share button”. Do they simply mean the share button in the YouTube app? If you view the video in a web browser, you could just copy the link from your browser’s address bar, right? Or is that where the idea of breaking the link comes in?

And how would breaking the link even work? Would they just periodically change the URL for the video? It’s not like you could break all existing and future links to the video — the video needs to have a URL in order for it to be viewable in the browser and that URL could be shared. Unless, I guess, they cycled through URLs for a given video so frequently that those links wouldn’t resolve by the time someone clicked on them. But at that point, isn’t the video functionally removed anyway? So what’s the point?

Taking a step back to look at the bigger picture, I think Matt Baer has a good take on this:

Seems to me that a space made up of humans is always going to have (very human) lying and deception, and the spread of misinformation in the form of simply not having all the facts straight. It’s a fact of life, and one you can never totally design or regulate out of existence.

I think the closest “solution” to misinformation (incidental) and disinformation (intentional) online is always going to be a widespread understanding that, as a user, you should be inherently skeptical of what you see and hear digitally.

During my teenage years, the advice of “don’t believe everything you read on the internet” was repeated ad nauseam — from teachers, parents, relatives, and so on. Yet somehow, we managed to forget that. We would all be wise to move forward with a healthier dose of skepticism by default.

Facebook and ‘Fact Checkers’ ➝

John Stossel, writing in the New York Post:

Recently, I sued [Facebook] because they defamed me. They, along with one of their “fact-checkers,” a group called Science Feedback, lied about me and continue to lie about me.

Now Facebook has responded to my lawsuit in court.

Amazingly, their lawyers now claim that Facebook’s “fact-checks” are merely “opinion” and therefore immune from defamation.

So Facebook has admitted that they are citing opinion articles for their “fact checks”. But it gets worse when you consider that Facebook is the one applying the labels:

Facebook’s warning was created by Facebook and posted in Facebook’s voice.

As Facebook’s own website says: “We … apply a warning label …”

I brought Facebook’s defamation to their attention a year ago, and they did nothing to correct it.

I did not want to sue Facebook. I hate lawsuits. But after they defamed me, I felt I had no choice.

The whole idea of social media companies applying “fact checking” to posts always felt like a terrible idea. And if cases like this are successful, it’s really going to bite them in the end.

➝ Source: nypost.com

Twitter Expanding Private Information Policy to Include Media ➝

From Twitter’s weblog, regarding their new policy on “sharing private media”:

When we are notified by individuals depicted, or by an authorized representative, that they did not consent to having their private image or video shared, we will remove it. This policy is not applicable to media featuring public figures or individuals when media and accompanying Tweet text are shared in the public interest or add value to public discourse.

However, if the purpose of the dissemination of private images of public figures or individuals who are part of public conversations is to harass, intimidate, or use fear to silence them, we may remove the content in line with our policy against abusive behavior.

There are perfectly good reasons for a policy like this. But I worry it will be selectively enforced and some independent journalists will be caught in the crosshairs. Consider this bit near the end, particularly the part about coverage by mainstream/traditional media:

We will always try to assess the context in which the content is shared and, in such cases, we may allow the images or videos to remain on the service. For instance, we would take into consideration whether the image is publicly available and/or is being covered by mainstream/traditional media (newspapers, TV channels, online news sites), or if a particular image and the accompanying tweet text adds value to the public discourse, is being shared in public interest, or is relevant to the community.

It feels like this gives corporate outlets the ability to essentially launder media — publishing it on their own site first and then sharing it to Twitter, meeting the conditions of that mainstream-coverage exception.

➝ Source: blog.twitter.com

Popular Quran App Removed by Apple in China ➝

Ben Lovejoy, writing for 9to5Mac:

A popular Quran app has been removed from Apple’s Chinese App Store at the request of the government, according to a new report today.

Wouldn’t it be cool if you could install apps from outside of the App Store so it wasn’t the single point of failure for censorship?

➝ Source: 9to5mac.com