We all know how broken automated copyright bots are. But that isn’t stopping Congress from considering a new bill, the “SMART Copyright Act,” that would put those bots to work across the entire Internet. If it passes, a corporation could not only remove content but prevent anyone, anywhere from seeing it by having the offending site blacklisted from every ISP in the USA.
As any YouTube creator knows, false Content ID copyright claims are a very common occurrence. Sometimes a small piece of music playing in the background will trigger one. Other times big media organizations fail to apply the proper duty of care and automatically claim ownership of public domain footage or other media they don’t own.
Now just imagine all of those issues applied to the Internet at large. That’s exactly what this proposed law would do. It would have the Library of Congress deploy an army of copyright bots that not only pull down content from websites but also require ISPs to block traffic going to those sites: a master blacklist with no recourse for a creator to appeal.
YouTube creator PushingUpRoses often does funny commentaries on classic TV shows. Each time she makes a video about “Murder, She Wrote” for YouTube, it’s immediately flagged and blocked by NBCUniversal before she even hits the publish button. She has to file an appeal – essentially asking for permission – to publish a piece that is clearly within fair use.
Now think about this example applied to the rest of the Internet. Want to criticize a big corporate media outlet? You need to ask permission first. And they’ll have the power to effectively take you off the entire Internet – not just YouTube – if they don’t like what you’re saying.
So far the bill hasn’t made much progress, but I expect the big-money corporate interests behind it to quietly push it along. Big media sees an opportunity to silence independent creators now that big tech companies are not as popular among members of Congress as they were a decade ago during the SOPA/PIPA debate.
From Plex’s announcement:

“As part of our ongoing effort to make sure we’re spending our time and energy in ways that best serve our awesome user community, we’ve made the decision to end support for podcasts within Plex. We recognize this impacts several of you greatly, and we apologize for the inconvenience it will cause. You can continue to access your podcasts within Plex until next Friday, April 15th, 2022, at which point they will no longer be available.”
I did a video recently about how nobody controls podcasting due to its decentralized nature. Check it out to learn why so many social media companies struggle to integrate podcasting into their apps.
Twitter may be under new ownership soon if a mammoth $44 billion purchase by Elon Musk goes through. For this week’s Weekly Wrapup video I offer 7 ideas that I think would help make Twitter work better and perhaps even address how free speech can work on social platforms.
Here’s what I think Elon should do:
Eliminate “Blue Check” Elitism

Twitter has two classes of citizens: ones with a blue check and ones without. Blue checks are reserved mostly for people who belong to major media organizations or have enormous followings. They can upload much longer videos, filter out those of us without the checkmarks, and get other privileges. It’s time to level the playing field so every user has a chance.
Balance Political Content Recommendations

Social platforms have algorithms that could very easily present viewers with multiple perspectives on hot-button issues. But because they value attention and engagement more than responsible discourse, they tend to show viewers only content they already agree with.
For nearly a century broadcast media has been required to follow an “equal time” rule. The way it works is that if I as a candidate for public office get interviewed for a news story, the broadcast station has to offer the same opportunity (and air time) to my opponent. The same rules apply to purchasing advertising – my opponent gets the same deal and time that I was offered. And a candidate’s advertisement cannot be censored – a political candidate can say anything they want in an advertisement.
There also used to be a “fairness doctrine” in the United States that required broadcasters to cover controversial topics and offer ample opportunities for opposing viewpoints.
So how would the algorithm determine what to recommend? Perhaps instead of topics it should look at behavior.
Moderate on Behavior – Not Topics

As the chairman of my local board of education, one of my responsibilities is to ensure the public has an opportunity to be heard. We have an “audience of citizens” at our regular meetings where any citizen can come and address the board and share whatever they wish.
But there are limits to speech – and those limits typically involve the behavior of the speaker. For example shouting obscenities, inciting violence, and other behaviors that disturb the peace or regular order of a meeting could result in that person being asked to leave. Unfortunately modern social platforms tend to amplify and even promote bad behavior – rewarding conduct that does not contribute to constructive dialog.
Every citizen may freely speak, write and publish his sentiments on all subjects, being responsible for the abuse of that liberty.
Every right has responsibilities. If social platforms focus on both the RIGHT and the RESPONSIBILITY, moderation could be done much more effectively – especially if it focuses on the behavior of speakers rather than on what they are trying to say.
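The behavior-first approach above can be sketched in code. To be clear, this is purely an illustrative sketch: the signal names, thresholds, and actions below are my own assumptions, not any platform’s actual moderation system. The key property is that the post’s subject matter is never an input.

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    """Illustrative behavioral signals a platform might track (assumed names)."""
    all_caps_ratio: float   # fraction of upper-case characters ("shouting")
    reply_spam_count: int   # identical replies blasted at many users
    incites_violence: bool  # flagged by a separate classifier or human review

def moderation_action(s: PostSignals) -> str:
    """Score conduct, not topic: the viewpoint expressed is never examined."""
    if s.incites_violence:
        return "remove"            # behavior that would clear any public meeting
    score = 0
    if s.all_caps_ratio > 0.8:
        score += 1                 # shouting over the room
    if s.reply_spam_count > 20:
        score += 2                 # disruptive flooding
    return "limit_reach" if score >= 2 else "allow"

# The same treatment applies regardless of what opinion the post expresses:
print(moderation_action(PostSignals(0.1, 0, False)))   # allow
print(moderation_action(PostSignals(0.9, 50, False)))  # limit_reach
```

Because only conduct is scored, two posts arguing opposite sides of a hot-button issue would be treated identically, which is exactly the property the board-meeting analogy calls for.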
There’s a great Twitter thread from Yishan Wong, the former CEO of Reddit, on this topic. It unpacks where Elon Musk is coming from on free speech and how challenging it is to create a true online public square when everyone acts like imbeciles.
One place social platforms could look is how computer bulletin board systems (BBSs) governed themselves. FidoNet, one of the largest international BBS networks in the ’80s and ’90s, spent a lot of time focusing on this problem. Its moderation rules focused almost entirely on the conduct and actions of users – not the messages they were trying to convey. There’s some wisdom in that.
Require Verification But Allow Anonymous Speech

Musk wants to “authenticate all real humans” in an effort to cut down on bots. But at the same time he should look at protecting anonymous speech – an important protected right here in the United States. This would also protect parody accounts, which add a lot of value to discourse.
Twitter Blue Should Get Rid of Ads

Twitter Blue is a $3 monthly subscription plan that offers some additional features in the Twitter app. While it does offer some news content ad-free, most of Twitter still includes advertising, both as in-line tweets and as pre-roll videos.
I think Twitter Blue should work more like YouTube Premium and offer an ad free experience.
Yes, We Need an Edit Button

It’s a running joke at this point that Twitter does not allow users to edit a tweet after publishing. While Twitter Blue does have a “recall” function for a few minutes after posting, generally the only way to edit a tweet is to delete it and post it again.
There are some legitimate concerns that editable tweets would allow someone to accumulate a ton of retweets and likes and then change the content to something different (and possibly offensive). But that could easily be mitigated by clearing those engagement counts whenever a tweet is edited. In most cases the only time I want to edit a tweet is shortly after I post it.
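The mitigation described above is simple enough to sketch. This is a hypothetical illustration, not Twitter’s design; the `Tweet` type and `edit_tweet` function are names I made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    """A minimal stand-in for a tweet and its engagement counters."""
    text: str
    retweets: int = 0
    likes: int = 0
    edited: bool = False

def edit_tweet(tweet: Tweet, new_text: str) -> Tweet:
    """Editing resets accumulated engagement, so a viral tweet can't be
    swapped out for different (possibly offensive) content after the fact."""
    tweet.text = new_text
    tweet.retweets = 0
    tweet.likes = 0
    tweet.edited = True  # readers can still see the tweet was changed
    return tweet

t = Tweet("First virsion with a typo", retweets=500, likes=2000)
edit_tweet(t, "First version with the typo fixed")
print(t.retweets, t.likes, t.edited)  # 0 0 True
```

The trade-off is deliberate: fixing a typo minutes after posting costs you nothing, while rewriting an already-viral tweet costs you all of its accumulated reach.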
Open Source Twitter’s Software & Federate Content

Finally, I think Elon should go a step further than just open sourcing the algorithm. He should open source the entire codebase and give users the option to install their own self-hosted Twitter application. Those self-hosted installs should be able to federate content with Twitter.com and with other self-hosted users. This would be similar to how WordPress makes its software freely available at WordPress.org while also offering a hosted service at WordPress.com.
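The federation idea can be sketched in a few lines. This is a toy model in the spirit of ActivityPub-style inbox delivery (the protocol Mastodon uses), not a real protocol implementation; the `Instance` class and its method names are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    """An illustrative self-hosted install that can federate posts with peers."""
    domain: str
    timeline: list = field(default_factory=list)
    peers: list = field(default_factory=list)  # instances it federates with

    def publish(self, author: str, text: str) -> None:
        post = {"author": f"{author}@{self.domain}", "text": text}
        self.timeline.append(post)
        for peer in self.peers:         # push the post to every federated peer,
            peer.timeline.append(post)  # much like ActivityPub inbox delivery

# A self-hosted install federating with the flagship site:
flagship = Instance("twitter.com")
selfhost = Instance("myblog.example")
selfhost.peers.append(flagship)

selfhost.publish("alice", "Hello from my own server!")
print(flagship.timeline[0]["author"])  # alice@myblog.example
```

The WordPress.org/WordPress.com parallel holds here: the same codebase runs the flagship site and the self-hosted installs, and federation keeps a post authored on your own server visible to the wider network.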