Mike Becky

Tag Archive for ‘Facebook’

Thread Count ➝

There’s a lot of hype around Threads, and it piggybacks on Instagram’s existing account system, which makes signing up very easy for most people.

But the user numbers we’re seeing right now are irrelevant. There’s no way of knowing how many of these users are actually going to stick around beyond the initial launch timeframe.

I’ll wait six months and see what the monthly active user numbers look like before coming to any conclusions.

➝ Source: theverge.com

Don’t Join Threads, Make Threads Join You ➝

I logged in to Threads to poke around, but don’t plan to use it for much beyond that. I’ll wait until ActivityPub support is added and then I’ll interact with Threads users from my Mastodon account.

➝ Source: wired.com

Premature Thoughts on Threads ➝

Chris Hannah:

The main rumour was that this new app would support ActivityPub, which is an open protocol that Mastodon is based on. This has led to all sorts of reactions. From people completely opposed to anything from Meta connecting to the Fediverse, and wanting to block it from their instance. To people that are excited about the potential of the new users that it would bring to the decentralised social network world.

I’d say I’m somewhere near the more optimistic end. Because, there is clearly space in the market for a new short-form text-based social network, and if it’s backed by Instagram, then it stands a good chance of surviving. Or at least gaining enough attention to make it viable in the short term.

If Threads supports ActivityPub, as the rumors suggest, it’ll be the coolest product announcement in the history of Meta.

It also has the potential to push the mainstream toward a future of social networking that I’ve had in the back of my mind for over a decade. A future where each person can pick and choose the types of services they want to use — video, photos, short-form text, longer articles, and others — and everything connects together because of a common, interoperable, and open protocol.
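
As a purely hypothetical illustration of what that common protocol looks like in practice, here’s a minimal sketch (Python, standard library only) of the discovery step ActivityPub servers such as Mastodon use: resolving a handle like @someone@example.social to its actor document via WebFinger. The handle and domain below are placeholders, and this covers only discovery, not the rest of the protocol.

```python
import json
import urllib.parse
import urllib.request


def resolve_actor(handle: str) -> str:
    """Resolve a fediverse handle (e.g. 'someone@example.social') to its ActivityPub actor URL."""
    user, _, domain = handle.lstrip("@").partition("@")
    query = urllib.parse.urlencode({"resource": f"acct:{user}@{domain}"})
    url = f"https://{domain}/.well-known/webfinger?{query}"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    # WebFinger returns a list of links; the 'self' link with the
    # ActivityStreams media type points at the actor document.
    for link in data.get("links", []):
        if link.get("rel") == "self" and "activity+json" in link.get("type", ""):
            return link["href"]
    raise ValueError(f"no ActivityPub actor found for {handle}")


# Hypothetical usage:
# print(resolve_actor("@someone@example.social"))
```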

➝ Source: chrishannah.me

‘It Has Become QVC 2.0’ ➝

Om Malik:

Instagram’s co-founders, Kevin Systrom and Mike Krieger, created a mobile social network based on visual storytelling. The impetus provided by the early photography-centric approach turned it into a fast-growing phenomenon. For Facebook, it was an existential threat. And it was worth spending nearly a billion dollars to own, control, and eventually subsume. And that’s precisely what Facebook has done.

What’s left is a constantly mutating product that copies features from “whomever is popular now” service — Snapchat, TikTok, or whatever. It is all about marketing and selling substandard products and mediocre services by influencers with less depth than a sheet of paper.

I still publish photos to Instagram because that’s where my family is, but Pixelfed is my home for photography on the web now — you can follow me @mike@libertynode.cam. I only open Instagram when I’m posting, and only because the service lacks the APIs to do it through third-party automation.

➝ Source: om.co

Skepticism by Default

I question the motives behind this recent push to moderate “misinformation” online. I don’t think it’s a game these companies should be in the business of playing. YouTube, Twitter, Facebook, etc. shouldn’t be in the position of determining what is and isn’t true. It’s one thing to remove categories of content that aren’t allowed on the platform; it’s another to remove some content within a given category based on what the platform decides is accurate information.

The onus should be on the viewer/reader to make that determination themselves. It isn’t perfect, for sure, but I haven’t seen anything proposed that’s a net positive by comparison. Everything that’s been tried has unintended consequences and seems to always result in the stifling of speech that is later found to be true.

Many of these companies have also decided to outsource these decisions to third-party fact-checkers or to base them on government recommendations, both of which can be abused for malicious gain. The latter feels a bit too close to infringing on First Amendment rights, since it’s based on government decree.

But YouTube appears to be exploring other options, including the absurd notion of disabling the share button or breaking links to videos.

From a recent piece on their weblog:

Another challenge is the spread of borderline videos outside of YouTube – these are videos that don’t quite cross the line of our policies for removal but that we don’t necessarily want to recommend to people. […]

One possible way to address this is to disable the share button or break the link on videos that we’re already limiting in recommendations. That effectively means you couldn’t embed or link to a borderline video on another site. But we grapple with whether preventing shares may go too far in restricting a viewer’s freedoms. Our systems reduce borderline content in recommendations, but sharing a link is an active choice a person can make, distinct from a more passive action like watching a recommended video.

This sounds like such a terrible idea — the kind of idea that might really hurt them long-term if they use it too aggressively and tick off enough creators along the way. I’m happy to hear that it’s something they’re “grappling” with, but publishing the idea alone is an indication of which way they’re leaning.

But it’s not even clear what exactly they mean by “disable the share button”. Do they simply mean the share button in the YouTube app? If you view the video in a web browser, you could just copy the link from your browser’s address bar, right? Or is that where the idea of breaking the link comes in?

And how would breaking the link even work? Would they just periodically change the URL for the video? It’s not like you could break all existing and future links to the video — the video needs to have a URL in order for it to be viewable in the browser and that URL could be shared. Unless, I guess, they cycled through URLs for a given video so frequently that those links wouldn’t resolve by the time someone clicked on them. But at that point, isn’t the video functionally removed anyway? So what’s the point?

Taking a step back to look at the bigger picture, I think Matt Baer has a good take on this:

Seems to me that a space made up of humans is always going to have (very human) lying and deception, and the spread of misinformation in the form of simply not having all the facts straight. It’s a fact of life, and one you can never totally design or regulate out of existence.

I think the closest “solution” to misinformation (incidental) and disinformation (intentional) online is always going to be a widespread understanding that, as a user, you should be inherently skeptical of what you see and hear digitally.

During my teenage years, the advice of “don’t believe everything you read on the internet” was repeated ad nauseam — from teachers, parents, relatives, and so on. Yet somehow, we managed to forget that. We would all be wise to move forward with a healthier dose of skepticism by default.

Mozilla Works With Meta on ‘Privacy Preserving Attribution for Advertising’ ➝

Martin Thomson, writing on Mozilla’s weblog:

Attribution is how advertisers know if their advertising campaigns are working. Attribution generates metrics that allow advertisers to understand how their advertising campaigns are performing. Related measurement techniques also help publishers understand how they are helping advertisers. Though attribution is crucial to advertising, current attribution practices have terrible privacy properties.

For the last few months we have been working with a team from Meta (formerly Facebook) on a new proposal that aims to enable conversion measurement – or attribution – for advertising called Interoperable Private Attribution, or IPA.

I’m glad I switched to Brave. It just saddens me that it’s built on Chromium. I wish Mozilla were a better steward of privacy and freedom online, but that doesn’t appear to be who they are anymore. So, Brave it is.

➝ Source: blog.mozilla.org

Facebook and ‘Fact Checkers’ ➝

John Stossel, writing in the New York Post:

Recently, I sued [Facebook] because they defamed me. They, along with one of their “fact-checkers,” a group called Science Feedback, lied about me and continue to lie about me.

Now Facebook has responded to my lawsuit in court.

Amazingly, their lawyers now claim that Facebook’s “fact-checks” are merely “opinion” and therefore immune from defamation.

So Facebook has admitted that they are citing opinion articles for their “fact checks”. But it gets worse when you consider that Facebook is the one applying the labels:

Facebook’s warning was created by Facebook and posted in Facebook’s voice.

As Facebook’s own website says: “We … apply a warning label …”

I brought Facebook’s defamation to their attention a year ago, and they did nothing to correct it.

I did not want to sue Facebook. I hate lawsuits. But after they defamed me, I felt I had no choice.

The whole practice of social media companies applying “fact checks” to posts always felt like a terrible idea. And if lawsuits like this one are successful, it’s really going to bite them in the end.

➝ Source: nypost.com

Facebook Changes Name to ‘Meta’ ➝

The name is fine. The logo is fine. And, as you’d expect, the company is just as toxic as ever.

➝ Source: about.facebook.com