Music Labels Lose a Big Piracy Case at the Supreme Court

A twelve-year legal battle between the music industry and internet service providers over piracy has finally been brought to an end by the US Supreme Court. The court overturned a $1 billion verdict against Cox Communications, a decision with significant implications for how we understand copyright liability and the responsibilities of those who provide our internet access.

See more in my latest video!

The history of this conflict dates back to the early 2000s, when the music and film industries struggled to adapt to the rise of digital file sharing. Initially, the music industry sued its own customers, hitting them with federal lawsuits. In one instance, a 12-year-old girl had to cough up $2,000 in a settlement; in another, a woman was held liable for hundreds of thousands of dollars for sharing 24 songs.

At the time, piracy was often driven by a lack of convenient, legal digital options. Physical media sales were declining, and digital purchases were often restricted by digital rights management, or DRM, which limited how and where consumers could listen to their music.

When the strategy of suing individual users failed to curb piracy or improve the industry’s public image, the focus shifted toward where the money is: internet service providers. Organizations representing the record and motion picture industries established the Copyright Alert System, partnering with major ISPs to issue warnings to users who were sharing copyrighted material.

Cox Communications did not participate in this program, and that put a target on its back. In 2014, music label BMG sued the ISP, arguing that Cox should be held liable for the infringement occurring on its network. BMG claimed that because Cox did not adequately respond to infringement notices, it lost the “safe harbor” protections usually granted to service providers under the Digital Millennium Copyright Act.

A federal jury originally sided with BMG, awarding a billion-dollar verdict against Cox. However, the Supreme Court’s recent reversal of this decision centered on a specific interpretation of federal copyright law. Justice Clarence Thomas, who authored the decision, noted that while Cox may not have met the requirements for DMCA safe harbor protection, other aspects of federal law still provide an adequate defense. The ruling clarifies that a service provider is only liable if it intended for its service to be used for infringement or if it marketed itself specifically for that purpose. Because Cox provides a general-use internet service and did not induce its users to pirate material, the court found it could not be held responsible for the specific infringements committed by its subscribers.

This development changes the landscape for other ISPs as well. They now have a defense beyond the safe harbor provisions, meaning they may not feel the same pressure to react to every automated infringement notice they receive. I suspect this will lead to a decrease in the haphazard distribution of warnings to account holders. While direct lawsuits against individuals may still occur, particularly in cases involving large volumes of distribution, the era of trying to hold the entire infrastructure of the internet accountable for individual user behavior seems to be shifting.

It should be noted that the music industry eventually found success not through litigation, but by listening to consumer demand. When they removed DRM from digital music purchases and embraced affordable streaming services, revenues skyrocketed. It is a reminder that market accessibility often addresses the root causes of piracy more effectively than legal threats.

As other industries, such as broadcasting, consider implementing new restrictions on content, the industry changes that have taken place since this case was filed suggest that focusing on what the customer wants is a more sustainable path than pursuing multi-billion-dollar judgments against service providers. This ruling brings a level of technical and legal sanity back to the conversation about how we use and access the internet.

ATSC 3 Update: Dueling Surveys & Contact Your Congressperson!

In my latest ATSC 3.0 update video, I take a look at dueling consumer surveys: one from the Consumer Technology Association (CTA) opposing TV tuner mandates, and another from broadcasters suggesting consumers will be more than happy to buy expensive hardware when the rug is pulled out from under us.

Pearl TV, an organization representing broadcasters, recently published a survey indicating that most viewers would be willing to purchase a low-cost converter box, estimated at around $60, rather than lose access to free television. Current market behavior on platforms like Amazon, however, shows consumers choosing tuners priced as low as $30 that include recording capabilities—a feature that, according to Pearl, the proposed $60 DRM-compatible basic boxes would lack.

Pearl’s survey results released so far lack the “cross-tabs” that would reveal all of the questions asked and answered. Only a small amount of data appears in the Pearl TV slide deck, yet the methodology slide reveals the median time to complete the survey was 16 minutes. Clearly they are holding a lot of data back.

On the other side of the issue, the CTA, which represents electronics manufacturers, argues against government mandates that would force the inclusion of expensive ATSC 3.0 tuners in every television. Their research suggests that while antenna usage has seen a slight uptick to about 15% of households, awareness of the NextGen TV brand remains low. Only 5% of respondents claimed to be familiar with the term, and the vast majority had never seen the official logo. This matches my own observations in retail environments, where the technology is rarely a primary concern for consumers compared to the availability of streaming applications on a particular device.

As the National Association of Broadcasters (NAB) prepares for its annual trade show, the lobbying effort has intensified. Recently, 91 members of the House of Representatives signed a letter pressuring the FCC to move forward with the transition. This indicates that congressional offices are hearing primarily from broadcast interests. My review of the signers shows a bipartisan group of representatives from across the country, many of whom may not be fully briefed on the technical limitations and costs these encryption standards impose on their constituents.

My suggestion? It’s time to reach out to your member of Congress. The simplest approach is to forward along what you’ve already filed with the FCC. Short of that, you can use some sample language that I put together here. If you’re looking for a one-stop shop for finding and contacting your representatives, Democracy.io has a helpful utility for doing so.

The FCC remains cautious. Currently, Commissioner Olivia Trusty is the only official scheduled to appear; she is set to deliver a brief 10-minute presentation on ATSC 3.0 at the Las Vegas Convention Center.

With consumer adoption stuck in neutral thanks to a complicated DRM encryption scheme, broadcasters are now resting their hopes on political pressure to try to force their private regulatory regime on the American people. That’s why it’s important for all of us to educate our representatives on what is really going on.

US Effectively Bans All New Router Products

The U.S. government has effectively implemented a ban on most new routers entering the domestic market, a move driven by a national security determination regarding risks posed by networking equipment produced overseas. While the order is broad, it is important to note that existing models already approved by the FCC—such as those currently found on retail shelves—are not prohibited from being sold or imported. The restriction specifically targets new products that have not yet received FCC certification.

I dive into the order and what it might mean in my latest video.

This action follows long-standing concerns from both the Biden and Trump administrations regarding vulnerabilities in consumer networking hardware.

Specifically, federal authorities pointed to prior sophisticated cyberattacks, such as the Volt, Flax, and Salt Typhoon campaigns, which utilized botnets of small office and home office (SOHO) routers to conceal the origin of attacks against U.S. critical infrastructure. In many cases, these attacks exploited “end-of-life” routers that no longer received security firmware updates from their manufacturers.

To gain authorization for new products, manufacturers must now apply for a conditional approval from the DoW/DOD or DHS. This process requires an extensive disclosure of the company’s supply chain, including a detailed bill of materials, the country of origin for all components and software, and an identification of any single points of failure in the manufacturing process.

Beyond security audits, the government is requiring a commitment to domestic production. Applicants must submit a time-bound plan to establish manufacturing and assembly operations within the United States. This includes detailing planned capital expenditures and providing progress reports on onshoring efforts. Currently, the list of compliant router manufacturers remains empty; drone makers are the only industry to have successfully navigated a similar regulatory process thus far.

The definition of a “router” under this regulation is tied to NIST standards, focusing on devices marketed for residential use and customer installation. This creates a technical distinction for hardware such as small-form-factor computers; while these devices can be configured to function as routers using open-source software like pfSense, they are not currently subject to the ban because their primary marketed purpose is as a general-use computer.

Industry reactions have been varied, according to a report in PC Magazine. TP-Link, which had previously been a specific focus of government scrutiny, expressed confidence in its supply chain and stated it welcomed an evaluation that applies to the entire industry. U.S.-based Netgear commended the action, suggesting that the regulations could lead to a more secure digital future. Both companies will likely benefit from the action – TP-Link gets to survive, and Netgear has the capacity to comply with the domestic onshoring requirements when many of its competitors may not.

I will be monitoring the FCC’s exception list to see which manufacturers are the first to successfully onshore their operations and return new hardware to the pipeline. In the meantime, the focus remains on whether these requirements will effectively eliminate orphaned firmware and provide the level of transparency the government is seeking.

Did Microsoft Admit Windows 11 is Too Bloated?

Microsoft is beginning to acknowledge the growing concerns regarding bloatware and performance issues within Windows 11. Windows head Pavan Davuluri recently published a blog post committing to a new standard of Windows quality. In my latest analysis piece, I dive into what Microsoft thinks the problem is and I offer some of my own experiences.

Check it out here!

While Davuluri’s official roadmap highlights specific improvements like increased taskbar customization and a more dependable File Explorer, many of the everyday frustrations experienced by power users and system reviewers remain unaddressed.

The current onboarding process for a new Windows 11 PC takes over an hour, largely due to a gauntlet of updates and forced configuration screens. Even after the initial setup, users frequently encounter a secondary wave of background updates that can lead to audible fan noise and noticeable performance degradation on a brand-new machine.

Beyond the updates, the operating system’s interface is increasingly defined by a series of prompts designed to funnel users into subscription services and cloud storage. These “upsell” screens often prioritize the “Next” or “Accept” buttons, while the options to decline or keep files stored locally are presented in smaller, less prominent text.

OneDrive integration remains a primary point of friction. Even when a user expresses a preference to store files only on their local device, the system defaults to cloud syncing and backup, requiring a manual and repetitive process to disable individual folders. This persistent nudging extends to the Start menu and taskbar, which are frequently populated with icons for features like Copilot, Recall, and the Edge browser immediately following an update. The Start menu itself has become more cluttered, making it increasingly difficult to find what you’re looking for amidst a sea of promotional icons and unhelpful recommendations.
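For readers who want the syncing behavior off wholesale rather than folder by folder, below is a minimal sketch of the kind of tweak many debloating tools apply under the hood: setting Microsoft’s documented group policy value that disables OneDrive sync machine-wide. This is my own illustration rather than a full utility; it assumes a Windows machine and an elevated administrator prompt, and as with any registry edit, proceed at your own risk.

```python
import winreg  # Windows-only module in the Python standard library

# Microsoft's documented group policy for disabling OneDrive file sync:
# HKLM\SOFTWARE\Policies\Microsoft\OneDrive -> DisableFileSyncNGSC = 1
KEY_PATH = r"SOFTWARE\Policies\Microsoft\OneDrive"

# Writing under HKEY_LOCAL_MACHINE requires an administrator shell.
with winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE
) as key:
    winreg.SetValueEx(key, "DisableFileSyncNGSC", 0, winreg.REG_DWORD, 1)

print("OneDrive sync policy set; sign out or reboot for it to take effect.")
```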

Even basic utility applications are not immune to this expansion of features. Notepad, a tool that remained virtually unchanged for decades, now includes tabbed windows, cloud synchronization requirements tied to a Microsoft account, and integrated Copilot AI writing assistance. These additions, while intended to modernize the app, introduce new complexities and annoyances to a tool that never needed them. Similarly, background processes like the Xbox overlay continue to run by default, regardless of whether the user intends to use the computer for gaming.

While Microsoft’s new commitment to quality is a positive step, the current state of the operating system has led some to rely on third-party debloating utilities to reclaim system performance. There is also a growing awareness of the increasing user-friendliness of Linux distributions, which may be placing additional pressure on Microsoft to streamline its experience. As the company moves forward with its debloating efforts, the true measure of success will be whether it can reduce the constant stream of distractions and return users to a more focused, efficient computing environment.

I’m curious to see if these promised updates will actually thin out the layers of advertisements and background services, or if the primary goal remains centered on revenue extraction through service nudges.

ATSC 3.0 Update: More DRM Nonsense Filed with the FCC

The broadcast industry’s ongoing effort to encrypt the public airwaves is currently awaiting a decision from the Federal Communications Commission. In a recent ex parte letter to the FCC, broadcasters cited the US Trade Representative’s 2025 Review of Notorious Markets for Counterfeiting and Piracy report to support their push for the ATSC 3.0 encryption standard. The report focuses heavily on live sports and the revenue lost to global piracy – but none of it indicates broadcast TV signals are being stolen.

See more in my latest ATSC 3.0 update video!

The report’s introduction references the NFL’s broadcasting agreements with networks like CBS, Fox, and NBC, which run through 2033. These contracts were signed without any provisions or assurances requiring future signal encryption, suggesting the league does not view over-the-air broadcasting as a primary piracy vulnerability.

The report provides three specific instances of piracy: the FIFA World Cup, European soccer matches, and the 2017 Mayweather-McGregor fight. While World Cup games were broadcast on television stations here in the USA, they were likely pirated from encrypted sources along with the other European soccer matches. And the Mayweather-McGregor fight was an encrypted pay-per-view event.

The government’s report cites data from Irdeto, a European company specializing in signal encryption for satellite and streaming providers. A review of their technical literature shows that modern piracy relies on methods like stealing session tokens, purchasing compromised account credentials on the dark web, or utilizing a technique known as CDN leeching.

These methods bypass the physical complexities of installing antennas to intercept local signals, demonstrating that, for pirates, encrypted content is easier to steal than unencrypted broadcast signals.

Furthermore, Irdeto’s guidance emphasizes the necessity of multi-DRM systems to ensure a frictionless viewing experience across different platforms. Currently, ATSC 3.0 DRM only supports Widevine, a Google technology. This single-DRM approach limits compatibility, leaving devices like Apple TV, Roku, Xbox, and standard computers unable to decode the encrypted broadcasts.

The push for encryption appears closely tied to the economics of broadcast retransmission fees. In Connecticut, for example, cable subscribers currently pay around $48.30 a month strictly for local channel access. Encrypting the over-the-air signals forces consumers to either maintain these cable subscriptions or purchase new, proprietary decoding hardware. Ahead of the upcoming NAB show, industry executives have discussed a proposed $60 tuner box. However, this device is expected to function solely as a tuner without DVR or gateway capabilities and cost three times as much as current tuning devices that do include DVR functions.

Broadcasters also point to the A3SA encoding rules, which currently permit time-shifting and recording. But these allowances apply only to content that is actively simulcast with the older ATSC 1.0 standard. Once the simulcast requirement expires, broadcasters are making no commitment to preserve recording capabilities; they could restrict or disable them entirely, shifting control of public airwave usage to a private entity.

The FCC is presently collecting public feedback on a separate but related sports broadcasting docket (26-45), which examines the impact of broadcasting practices on consumers and local market obligations. The comment period for this specific docket remains open for roughly another week, offering another venue for the public to submit their observations regarding how signal encryption may affect access to local sports broadcasts.

California Law to Require Age Verification on All Operating Systems (Including Linux)

Recently, a new California law signed by Governor Gavin Newsom caught my attention due to its potential impact on the open-source community, specifically Linux users. The legislation mandates that operating systems for PCs and other general computing devices like tablets and phones must implement a form of age verification during the initial account setup process.

I take a look at the implications of this law in my latest video.

While California is not the only state pursuing such measures—Texas recently faced legal hurdles over a similar law—this development raises questions about how open-source organizations, rather than traditional corporate entities, will comply.

The text of the California bill, which was signed on October 13, 2025, and takes effect on January 1, 2027, calls for an interface that requires the account holder to provide their birth date or age. This information generates a signal regarding the user’s age bracket—categorized as under 13, 13 to 16, 16 to 18, or over 18—to be read and enforced by applications within a covered app store.
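To make the mechanics concrete, here is a minimal sketch of the bracket calculation the bill describes. The function name and bracket labels are my own; the statute defines the categories but not any particular code interface.

```python
from datetime import date

def age_bracket(birth_date: date, today: date) -> str:
    """Map a self-reported birth date to one of the bill's age brackets."""
    # Completed years, adjusting for whether the birthday has occurred yet.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    if age < 13:
        return "under_13"
    if age < 16:
        return "13_to_16"
    if age < 18:
        return "16_to_18"
    return "over_18"

# A user born in mid-2015 would signal the under-13 bracket when the law
# takes effect on January 1, 2027.
print(age_bracket(date(2015, 6, 1), date(2027, 1, 1)))  # under_13
```

As I read the bill, applications would receive only this bracket signal, not the raw birth date itself.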

The legislation defines an operating system provider broadly enough to include independent developers creating Linux distributions. Furthermore, a covered application store is defined as a publicly available online service, which could encompass command-line package managers used daily by Linux administrators.

From a practical standpoint, the current requirement relies entirely on self-reporting. Users are asked to volunteer their age, meaning anyone could input inaccurate information to bypass restrictions. Despite this, the penalties for non-compliance are clearly defined. Operating system makers face civil penalties ranging from $2,500 for negligent violations to $7,500 for intentional violations per “affected child.” If a developer has internal data showing a user’s actual age differs from the self-reported signal, they are legally obligated to act on that information or face action from the California Attorney General.

The implications for Linux distributions are notable. Commercial entities with a business nexus in California, such as the organizations behind Ubuntu and Fedora, will likely implement the necessary prompts to comply.

However, smaller projects face a different reality. Many distributions are maintained by volunteer groups without the financial resources or organizational structures to shield them from liability. MidnightBSD has already modified its software license to exclude California residents, but this legal maneuver may not satisfy California regulators if the software remains accessible for download within the state’s borders.

This legislative push is not confined to the West Coast. My home state of Connecticut is currently evaluating controls for minors on the internet, and Colorado is exploring operating system-level age verification. Texas attempted to regulate app stores before a federal court blocked the law, citing First Amendment concerns regarding its broad application. The absence of a unified federal privacy law has resulted in a fragmented regulatory landscape across different regions.

Historically, some internet users have responded to localized regulations by migrating to decentralized platforms. When Discord faced scrutiny over its age verification methods that included video selfies and government IDs, users began exploring open-source alternatives like Revolt and Matrix. These self-hosted and federated platforms demonstrate how technical communities can circumvent centralized data collection and restrictive legal mandates.

As the 2027 deadline approaches, it is likely that many Linux distributions will simply integrate a birth date or age prompt into their installation screens to mitigate legal risks. The technical challenge of passing that age signal consistently to various package managers and standalone applications remains a logistical hurdle. The coming months will test how far state authorities are willing to go in enforcing these mandates on the broader open-source software ecosystem.

ATSC 3.0 TV Encryption Update: The Final Arguments are In..

The final arguments regarding the encryption of over-the-air television have been filed with the FCC, and now it’s in the Commission’s hands. In my latest ATSC 3.0 analysis video, we take a look at how broadcasters responded to encryption concerns.

After reviewing hundreds of pages of documents, it appears the industry’s rebuttal to consumer concerns relies heavily on dismissing documented technical failures as mere anecdotes while asserting that encryption is necessary for the future of broadcast media.

The National Association of Broadcasters (NAB) has characterized reports of DRM failure—such as devices refusing to tune channels—as “early deployment friction” that does not justify stalling a national transition. They argue that individual complaints do not reflect systemic flaws. Yet, this stance contradicts the experience of users who have found that encryption often breaks the basic functionality of a television.

For instance, the A3SA, the body managing the encryption keys, argues that software-based devices require internet-based updates for bug fixes. This requirement introduces a significant dependency on internet connectivity for a medium that is marketed as being free and accessible over the air.

I recently demonstrated this vulnerability when an ADTH set-top box, which marketing materials claimed did not require an internet connection, failed to tune encrypted channels during a snowstorm. This inability to access weather information during an emergency challenges the industry’s assurance that content protection does not impede public safety messaging.

Beyond technical reliability, the industry posits that DRM is essential to combat piracy and secure content for sports broadcasting. The A3SA cited a media report claiming billions in losses due to piracy, yet the article in question focused on cable and streaming theft rather than the unauthorized capture of over-the-air signals.

Historically, DRM has been less about stopping piracy—which remains rampant despite encryption—and more about siloing users into specific hardware and software platforms. By making free over-the-air reception more difficult, broadcasters may be incentivizing consumers to stick with paid cable or streaming packages. Furthermore, claims that major sports leagues will withhold content without encryption are not supported by the current landscape, where broadcast contracts are being renewed for extended periods without such mandates being public.

There is also a significant question regarding the neutrality of the A3SA, which acts as the sole gatekeeper for approving tuning devices. While the organization claims to be neutral, it is comprised of major broadcast entities. This structure effectively allows the industry to pick winners and losers in the hardware market.

Manufacturers of popular gateway devices, such as SiliconDust’s HDHomeRun, have been unable to secure certification under the current regime. The A3SA’s standards remain opaque and protected by non-disclosure agreements, preventing independent verification by even the FCC and effectively locking out devices that distribute signals across a home network to non-Android devices.

Ironically, while the industry argues that DRM protects consumers from the security risks of illicit streaming, the approved hardware itself presents security concerns. The ADTH box mentioned earlier was found to be running an Android security patch level from 2021, leaving it vulnerable to years of known exploits.

It seems unlikely the FCC will mandate a hard transition to ATSC 3.0 in the near term given the abysmal consumer adoption rates. The current ecosystem is too fragmented, and the cost and complexity of encryption have slowed adoption to a crawl.

And ultimately, consumers aren’t getting as much as they did during the prior transition. Back in the early 2000s, TV viewers went from analog standard-definition signals to digital high-definition ones – a huge jump in visual fidelity. While ATSC 3.0’s HEVC video encoding offers improvements that enthusiasts will certainly notice, I doubt most mainstream consumers will notice much of a change.

I believe a probable outcome is a “frozen conflict” where the FCC ends the simulcast mandate, allowing stations to voluntarily switch to 3.0 if they choose, while potentially authorizing more efficient video codecs like MPEG-4 for the existing ATSC 1.0 standard.

This would allow the legacy standard to improve and remain viable, effectively leaving ATSC 3.0 to succeed or fail on its own merits without a government mandate forcing consumers to upgrade. We may end up with a better-looking version of the television service we already have, while the next-generation standard struggles to find its footing.

Your ISP Is Spying On You..

Recently, I reviewed a 2021 Federal Trade Commission report detailing the data collection practices of six internet service providers. The report examined AT&T, Verizon, T-Mobile, Google Fiber, Comcast Xfinity, and Charter Communications (Spectrum). It found that standard consumer privacy measures, such as web browser tracking protections, are ineffective against ISPs because many utilize a “supercookie” to persistently track network activity.

In my latest video, I dive into this topic and look at what you can do to stop this data collection.

Because households share a single internet connection, this tracking encompasses all users on the network, including children. ISPs gather information by observing the websites a household visits, the frequency and duration of those visits, and the amount of data transferred. Providers can send a user’s IP address to an ad affiliate, who then passes it to a data broker to build an informational profile. This data extends beyond basic demographics, categorizing users by religious affiliation, ethnicity, and political leanings.
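To illustrate the mechanism, below is a toy sketch of header-injection tracking, the technique behind Verizon’s infamous X-UIDH “supercookie.” The header name and function here are hypothetical simplifications, and note one real limitation: an ISP can only inject headers into unencrypted HTTP traffic, which is part of why providers also lean on IP addresses and connection metadata.

```python
import hashlib

def inject_supercookie(headers: dict, subscriber_id: str) -> dict:
    """Toy model of ISP-side header injection on unencrypted HTTP traffic.

    The ISP derives a persistent token from its internal subscriber ID and
    stamps it onto every outbound request in transit. Clearing browser
    cookies does nothing, because the tag is added after the browser.
    """
    token = hashlib.sha256(subscriber_id.encode()).hexdigest()[:16]
    tagged = dict(headers)
    tagged["X-Subscriber-Token"] = token  # hypothetical header name
    return tagged

# Every site a household visits over plain HTTP sees the same token, letting
# ad affiliates and data brokers join their separate logs into one profile.
print(inject_supercookie({"Host": "example.com"}, "ACCT-1138"))
```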

The sale of this information presents distinct privacy risks. Beyond targeted advertising, the FTC report indicates that scammers can purchase access to these profiles. Additionally, a 2019 Motherboard report revealed that bounty hunters were able to buy customer location data originating from AT&T, T-Mobile, and Sprint phones. Despite these practices, consumer engagement with ISP privacy policies remains low. The FTC found that the provider with the highest engagement saw only 6.7 percent of subscribers look at their privacy pages.

I examined my own provider, Comcast Xfinity, to understand their specific policies. Comcast stated in a 2017 blog post and on their current privacy pages that they do not sell personal information without affirmative opt-in consent. However, agreeing to their terms of service during the initial account sign-up functions as that consent.

Navigating Comcast’s privacy section reveals numerous documents and a complex process for managing data disclosures. Users can opt out of certain disclosures, such as participation in audience measurement or personalized ads, but the application of these settings to broader tracking methods is ambiguous.

The ability to view, change, or delete the specific data an ISP holds depends heavily on state laws. For residents in states with applicable laws, Comcast provides a form to request a download of stored data, which includes account information, behavioral inferences, and details about telecommunication usage.

I submitted a data download request over a week ago, a process Comcast notes can take up to 30 days to fulfill. Until comprehensive federal regulations are established, the responsibility remains on the individual subscriber to navigate these varied settings and actively opt out of data collection.

I will be back with an update once Comcast hands over my data. Stay tuned!

ATSC 3.0 DRM Opponents Make Their Case to the FCC

The transition from the current over-the-air television standard to NextGen TV, or ATSC 3.0, continues to generate significant debate, particularly regarding the decision by many broadcasters to encrypt their signals.

In my latest video, I take a look at the filings from organizations and individuals opposing the implementation of Digital Rights Management (DRM) on the public airwaves.

This issue moved from theoretical to practical for me recently during the Super Bowl. I was unable to tune into the game over the air because my local NBC affiliate had encrypted their channel, and the legacy ATSC 1.0 signal was unreliable at my location, forcing me to stream the event instead.

I submitted my own filing to the FCC docket, effectively mirroring the arguments I raised in my prior video on this topic regarding the industry’s justification for encryption. To work within the docket’s file size limitations, I attached a PowerPoint presentation with embedded video evidence, a method that allows multimedia documentation to be submitted under the 100-megabyte limit. This approach is useful for anyone wishing to demonstrate the real-world impact of these restrictions, such as devices failing to decrypt channels they are theoretically certified to receive.

One of the most comprehensive filings came from Public Knowledge, a consumer advocacy group. They commended the FCC for scrutinizing the issue but raised substantial concerns about the A3SA, the authority managing the encryption program. Public Knowledge argued that the A3SA operates without meaningful external oversight, maintaining confidential licensing terms and opaque decision-making processes. They contend this entity acts as a private gatekeeper to the public airwaves without accountability to consumers or public interest stakeholders.

Public Knowledge also highlighted the potential for consumer confusion arising from the current certification regime. There are now two distinct logos for consumers to navigate: the NextGen TV logo and the A3SA logo. A device might carry the NextGen TV certification, like the HDHomeRun gateway I use, yet lack the ability to decrypt content. Conversely, a device like the ZapperBox may have A3SA certification for decryption but lack the NextGen TV designation. During a recent visit to a major electronics retailer, I observed that neither logo was displayed on television sets that support the new standard, suggesting that this certification system has yet to effectively reach the consumer marketplace.

Furthermore, Public Knowledge drew a parallel between the current situation and the “broadcast flag” rule from the previous digital transition. They argued that the A3SA certification requirements essentially function as a new, more sophisticated broadcast flag, allowing broadcasters to dictate which devices can receive programming and potentially restricting recording capabilities. They also reminded the Commission that the FCC’s 2017 order to begin the ATSC 3.0 transition emphasized that encrypted programming should not require special equipment supplied by the broadcaster, a standard the current regime may be failing to meet.

Opposition also came from within the broadcast industry itself. Weigel Broadcasting, which operates stations reaching a vast majority of US households, filed comments expressing concern over the direction taken by larger broadcasting consortiums. Weigel presented evidence suggesting that some competitors view the new standard primarily as a vehicle for monetization, such as integrating gambling platforms or treating the spectrum as a financial asset rather than a public service. They acknowledged that the current implementation of DRM has created adoption hurdles and suggested that if encryption must exist, it should not require a persistent internet connection—a requirement that has already caused functionality issues with some commercially available tuners as noted in my prior video.

The Consumer Technology Association (CTA), which represents device manufacturers, also weighed in. While their filing focused largely on opposing a mandate for ATSC 3.0 tuners in all televisions, they acknowledged the friction caused by DRM. This is a complex position for the CTA, as the encryption technology being used is owned by Google, a major industry player and CTA member, yet the implementation is harming fellow member companies like SiliconDust. Their filing recommends that the Commission continue to monitor the intersection of DRM and the new standard, a notable admission from an organization that typically advocates against government intervention in their industry.

Similarly, the NCTA, representing cable and internet providers, cited encryption as a complicating factor that adds cost and technical challenges to the transition. They argued that these complexities support their stance against a forced transition to the new standard, noting that the need to support new audio and interactive formats is already a heavy burden without the additional layer of decryption requirements.

For those who have experienced issues with encrypted channels or malfunctioning hardware, the opportunity to place these experiences on the record is closing. The reply deadline for this docket is February 18. Under FCC rules, new filings at this stage must be in direct response to arguments already present in the record. This provides a narrow window for consumers to submit evidence countering the claims made by broadcasters, such as documenting instances where “offline” DRM failed to function as advertised. The record is currently being shaped by these final arguments, and the volume and specificity of these replies may influence the Commission’s next steps.

You can get more information about how to file here. I also did a video on the topic here.

ADTH’s ATSC 3.0 Box Woes Kill the Industry’s Arguments Regarding Over the Air TV Encryption

I’ve been spending the last few days reading through the filings in the FCC’s ATSC 3.0 docket now that the comment period has closed, trying to understand how broadcasters, device makers, and industry groups are framing the next phase of the over-the-air television transition.

While I was doing that, I went upstairs to check on my own ADTH tuner, a device that’s supposed to handle encrypted ATSC 3.0 channels without needing an internet connection. It wasn’t working. Encrypted channels wouldn’t tune at all, and the box was throwing content protection errors that hadn’t been there before.

In my latest analysis piece, I talk about how widespread problems with this box tuning encrypted channels popped up just as the industry was saying there were no concerns with DRM.

That problem sent me down a familiar path. ATSC 3.0 is the planned successor to today’s ATSC 1.0 broadcast standard, and on paper it brings technical improvements. In practice, the transition has been complicated by broadcasters choosing to encrypt free, over-the-air signals. That decision has narrowed consumer choice and added layers of complexity that simply didn’t exist before. The industry’s assurances that this system is mature and reliable don’t line up with what I’m seeing in my own home.

One of the filings I reviewed came from ADTH itself. The company strongly supports the transition and argues that there are no real technical barriers to consumer devices receiving encrypted broadcasts. Encryption and digital rights management, they say, are routine in modern electronics.

That’s hard to square with my experience. After repeated errors, I tried a factory reset. Instead of fixing anything, the device dropped into a boot loop, endlessly scanning channels and rebooting. Even with an internet connection restored, it refused to recover. At that point it stopped being a TV tuner and effectively became a brick.

What made this more than a minor inconvenience was timing. We were in the middle of a significant snowstorm, the kind of situation where over-the-air television has historically been a reliable source of local information. Because the encrypted channels wouldn’t tune, that information simply wasn’t available on this device. And this doesn’t appear to be an isolated issue. I’ve heard from viewers and seen reports on Reddit and AVS Forum from people around the country whose boxes stopped working around the same time. Some even reported that disconnecting the internet made their tuners work again, which raises uncomfortable questions about how these systems are actually operating.

At the same moment consumer devices were failing, the group that oversees the encryption system, the A3SA, told the FCC it has seen no evidence of approved devices failing to work with encryption. They also suggested that any reported issues are generally resolved with firmware updates. That response glosses over a basic problem: firmware updates require an internet connection. Requiring internet access just to watch free, over-the-air television undermines one of broadcast TV’s core purposes, while adding cost and fragility.

The A3SA also describes itself as a “neutral, standards-based administrator.” From what I’ve seen, that neutrality is questionable. The group is made up of major broadcasters and has effectively decided which manufacturers can and can’t participate. SiliconDust’s HDHomeRun, a widely used network tuner, has been denied approval, while other devices with similar technical characteristics have been cleared.

Another theme running through the filings is piracy. Broadcasters cite tens of billions of dollars in losses and argue that encryption is necessary to protect their content. When you dig into the examples they reference, though, the picture changes. One high-profile piracy case they cite involved stealing encrypted signals from cable and satellite providers, not rebroadcasting free over-the-air signals.

Encryption, it appears, inconveniences only those who are viewing content lawfully – not the pirates.

Broadcasters also warn that without encryption they risk losing premium sports programming. Yet recent rights deals tell a different story. The NFL, NBA, MLB, NASCAR, and major college conferences have all committed to long-term agreements that keep marquee events on broadcast television for years to come. These deals were struck without any guarantee that over-the-air signals would be encrypted, which undercuts the argument that encryption is essential to retaining top-tier content.

The FCC has also raised questions in this filing round about consumer rights, particularly the long-standing right for consumers to record broadcasts at home for personal use. That right was established decades ago, but encryption complicates it. Circumventing DRM, even for lawful personal recording, can be illegal. The A3SA argues that internal rules already protect home recording, but those assurances are tied to current simulcasting requirements that may disappear. Once they do, the only remaining safeguards would be voluntary commitments from broadcasters whose financial incentives don’t necessarily align with consumer flexibility.

Underlying all of this is a business reality that the National Association of Broadcasters acknowledged more directly in its own filing. Encryption is about protecting retransmission fees, the charges cable and streaming providers pay to carry broadcast channels. Those fees have risen sharply over the years, and making free reception less convenient creates pressure to return to paid services. That strategy may make sense from an industry perspective, but it runs counter to the idea of broadcast spectrum as a public resource.

There’s also nothing in the current framework that limits encryption to a single system. The ATSC admits in their filing that multiple, incompatible schemes could emerge, adding yet another layer of confusion for viewers and device makers alike. At that point, the promise of ATSC 3.0 as a straightforward upgrade starts to look like something else entirely.

After reading the docket and dealing with a tuner that worked one day and failed the next, I’m left with the sense that encryption over the public airwaves is creating problems faster than it’s solving them. Broadcasters were granted access to spectrum at no cost, with the understanding that they would serve the public interest. Turning free television into a fragile, tightly controlled experience doesn’t seem consistent with that mission. I plan to file a reply in the FCC proceeding during the response window, and there’s more in these filings worth unpacking.

Stay tuned for more and see my full ATSC playlist here!

The RAM Crisis Explained: An Interview with Framework’s Nirav Patel

The price of memory is climbing, and it’s not just a problem for people building a new PC. RAM for laptops, desktops, phones, and tablets is getting more expensive as AI data centers absorb an increasing share of global supply. To better understand what’s happening behind the scenes, I called up Nirav Patel, CEO of PC maker Framework, to talk through how this shortage developed and what it means for consumers over the next several months.

Check out the interview in my latest video!

Patel described the current situation as a classic supply-and-demand imbalance, but on a scale the consumer market hasn’t seen before. Only a handful of companies—Micron, SK hynix, and Samsung—manufacture most of the world’s DRAM, and expanding capacity requires massive capital investment.

“What we’re seeing right now is just a massive excess of demand relative to the supply available,” Patel said.

With AI servers commanding higher margins, manufacturers are prioritizing those customers, leaving consumer products with tighter allocations. That imbalance has been building quietly for years, but it became much more visible when Micron announced it was shutting down its Crucial consumer memory brand last month. For PC builders, Crucial had long been a reliable option. Patel said the decision made sense given current conditions.

“When memory is in allocation, it doesn’t make sense to compete with your own customers,” he explained, noting that Micron supplies chips not only to large OEMs like Dell and HP, but also to other consumer memory brands.

One reason Framework has been able to navigate repeated supply disruptions—from pandemic shortages to GPU crunches and now memory—is its modular design philosophy. Patel credited flexibility as a survival tool.

“We built the product to be modular, and that gives us a lot of flexibility to navigate these kinds of environments,” he said.

Because many Framework systems are sold as DIY editions, customers can source their own memory and storage when shortages hit, sharing some of the burden rather than leaving the company entirely exposed.

The uncertainty isn’t limited to pricing. Patel described a market filled with overlapping orders, canceled allocations, and even hoarding.

“It is actually very unclear to anyone what the true ground truth is in the market when it comes to the supply and the demand,” he said.

Companies are placing duplicate orders with multiple suppliers, unsure which ones will be fulfilled. That behavior, he noted, can make shortages appear worse than they ultimately are, at least until reality catches up.

Geopolitics are also playing a role. Chinese memory maker CXMT has historically been avoided by many U.S. companies due to sanctions and long-term sourcing concerns, but Patel said that’s starting to change. “If you’re not sure where you’re going to be able to get your memory in two months, you better go and qualify every possible source,” he said, adding that some major OEMs are now testing and approving parts they previously wouldn’t have considered.

For consumers, the immediate concern is quality as prices rise and supply tightens. Patel’s advice was straightforward: stick with established brands. He doesn’t expect major manufacturers to compromise their reputations to chase short-term gains.

“Those brands are not going to torch all of their credibility in this short window of time,” he said, though he acknowledged that lesser-known vendors may try to take advantage of the situation.

While memory is the biggest constraint right now, Patel doesn’t believe every component will remain scarce long term. If memory remains the bottleneck, other parts like GPUs and storage should eventually stabilize because they can’t be deployed without sufficient RAM. In the near term, however, he expects continued volatility as the market works through excess orders and misaligned expectations.

Looking further ahead, Patel pushed back on the idea that soldered or unified memory is a solution to shortages. Even systems that place memory on the same package as the processor often rely on separately sourced components. For Framework, modular memory remains central to its roadmap, especially during periods like this. “Buy what you can afford today,” he said, “and buy solutions that let you upgrade in the future.”

Patel emphasized uncertainty as the defining market feature of the moment. AI demand has reshaped how memory is allocated, and the consumer market is now competing in a space it no longer dominates.

Curious about Framework? Check out my Framework videos here and my other interviews here!

My First Cord Cutting / ATSC 3 Update of 2026

In the days leading up to the CES show and throughout the week in Las Vegas, several cord cutting news items related to the ATSC 3.0 over the air TV standard were announced. In my latest video, I provide a more in-depth overview of these developments that I touched on briefly during my CES Dispatch series.

Watch the update here!

As a recap, a central issue remains DRM encryption over the new ATSC 3.0 broadcast standard. Broadcasters are pushing to lock down over-the-air signals, limiting how viewers can receive and use content that has traditionally been freely accessible. While they say this is to prevent piracy, the real outcome is that it pushes consumers toward expensive cable and streaming plans to maintain the recording and time-shifting features they enjoy today.

This matters because retransmission fees charged by broadcasters continue to rise at almost an exponential rate. In my area, the Broadcast TV fees are now $48.30 per month – and that’s before other cable charges. Even the most basic cable subscription will now cost consumers more than $60 monthly. Of course using an antenna to receive television is completely free.

Shortly after I began asking viewers to download and share their Comcast rate cards, Comcast removed the broadcast TV fee line item from their published rates entirely. The company says this was done to simplify pricing, but the effect is reduced transparency. The costs haven’t disappeared; they’ve simply been folded into higher base prices.

At CES, Pearl TV announced what it described as an affordable ATSC 3.0 converter box program. This is positioned as a way to lower the barrier to entry for consumers and manufacturers, but it closely resembles a similar failed effort announced in 2022 that never materialized.

The root cause of Pearl’s troubles with consumer adoption hasn’t changed. Encryption and certification requirements add cost and complexity in a market that is already small. Even the proposed “affordable” devices are expected to cost under sixty dollars, still roughly double the price of many ATSC 1.0 tuners (compensated affiliate link) that include DVR functionality.

The certification process itself remains a problem. Pearl TV and the A3SA encryption body are private entities made up of the same major broadcasters, effectively controlling which devices are allowed to receive encrypted signals and ultimately be sold to consumers. This introduces a layer of private regulation on top of what has traditionally been governed by FCC standards alone.

Another announcement hinted at some movement on gateway devices, which take an antenna signal and distribute it across a home network. After years of delays, A3SA says encrypted gateway functionality is now working on a limited number of products, including the ZapperBox and an upcoming ADTH device. These solutions, however, are expensive and tightly constrained. ZapperBox requires multiple expensive proprietary devices for multi-TV households, and the ADTH approach is limited to Android and Fire TV platforms, excluding market leader Roku.

Visiting the ATSC booth at the CES show made it clear how confusing this ecosystem has become. Devices carried different combinations of NextGen TV and A3SA certifications, each with different implications for compatibility and functionality. By contrast, current ATSC 1.0 devices work simply because they can receive the signal, without needing approval from a private consortium.

There may be signs of easing tensions. An interview with SiliconDust CEO Nick Kelsey suggested that support for encrypted ATSC 3.0 signals could eventually come to HDHomeRun devices without additional hardware. That would be a meaningful shift, though it still leaves unanswered questions about support on non-Android platforms and the broader role of DRM on public airwaves.

FCC Chairman Brendan Carr addressed these issues during a CES discussion, emphasizing the public interest obligations tied to broadcast licenses. He noted that broadcasters unwilling to meet those obligations have other distribution options, from cable to online platforms, and raised the possibility of revisiting how spectrum is allocated if public interest standards are not upheld. Those comments echo questions raised by the FCC in its current ATSC 3.0 docket, particularly around whether encryption serves consumers or primarily protects broadcaster revenue.

That docket remains open for public comment, with additional opportunities to respond once broadcasters file their answers. The outcome is still uncertain, but it’s clear the FCC has heard our concerns and is waiting for the broadcasters to make their case as to why restricting access to the public airwaves will better serve the public.

Texas Files Suit Against Smart TV Makers Over Spying Features

Back in October, I started a video series looking at “Automatic Content Recognition” (ACR), a technology that modern smart televisions use to collect data about what people are watching. The televisions take actual visual and audio snapshots of what is on the screen several times a second.

In my latest video on this topic, I look at a new set of lawsuits filed by the Texas Attorney General against five major TV manufacturers—Sony, Samsung, LG, Hisense, and TCL—over their use of this technology. The Texas AG even scored an early victory, requiring Hisense to turn off their ACR systems for Texas residents.

The lawsuits were all filed in a Texas state court, which means any outcomes would apply only to Texas residents. Still, they offer a detailed look at how this technology works and how aggressively it is being deployed.

The frequency of ACR sampling varies by manufacturer and model, but in some cases the sampling happens multiple times per second. That information is converted into a digital fingerprint and sent to a remote server, where it is matched against a massive database of media. Once identified, the viewing data can be sold to advertisers and data brokers.
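Here is a heavily simplified sketch of that fingerprint-and-match pipeline. Real ACR systems use perceptual hashes that survive scaling and compression rather than the exact hash below, and the database and function names are my own illustration, not any manufacturer’s actual code.

```python
import hashlib

def fingerprint(frame_pixels: bytes) -> str:
    """Reduce a captured frame to a short, matchable token.

    Real ACR uses perceptual hashes robust to scaling, cropping, and
    compression; an exact hash keeps this sketch short.
    """
    return hashlib.sha256(frame_pixels).hexdigest()[:16]

# Hypothetical server-side reference database: fingerprints of known content,
# sampled frame by frame, mapped back to a title and timestamp.
known_frame = b"raw pixels of a frame from a known broadcast"
REFERENCE_DB = {fingerprint(known_frame): ("Example Movie", "00:12:07")}

def identify(uploaded_fingerprint: str):
    """Match a fingerprint uploaded by a TV against the reference database."""
    return REFERENCE_DB.get(uploaded_fingerprint, "unknown")

# The TV uploads only the fingerprint, never the raw frame. A successful
# match still reveals exactly what was on screen and when, which is why
# "we only send hashes" is not the same thing as anonymization.
print(identify(fingerprint(known_frame)))  # ('Example Movie', '00:12:07')
```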

As I noted in my earlier videos, ACR can also analyze anything coming into the television through HDMI, including cable boxes, streaming devices, and video game consoles. The lawsuits allege that video games are a big area of data collection for TV manufacturers and data brokers, which raises questions about whether they are illegally capturing data from children under the age of 13.

Marketing materials cited by the Texas Attorney General in the lawsuits suggest that some companies use this data to track users across devices and platforms, following them from their TV screens to social media sites and other parts of the internet. In one example cited in the filings, LG is accused of collecting screen data as frequently as every 10 milliseconds and building detailed consumer profiles that may include political interests, religious viewing habits, and other personal characteristics.

Another major issue raised in the lawsuits is informed consent when users are asked to opt into these features. While most smart TVs technically require users to opt in before data collection begins, the Attorney General argues that the process is designed to push users toward agreement. Opting in is often a single click, while opting out can require navigating dozens of menu options spread across multiple screens. In some cases, declining data collection disables core smart TV features altogether, effectively forcing users to choose between privacy and functionality. Screens shown in the lawsuits for brands like TCL and Hisense often lack a clear “disagree” option, while others rely on confusing layouts that make refusal unintuitive.

Samsung is also accused of misleading customers by claiming it does not collect video or screen content. The state argues that even if the company only transmits hashed fingerprints rather than raw images, the end result is the same because the system can still identify exactly what is being watched.

The Attorney General is seeking jury trials, civil penalties of up to $10,000 per violation, and additional restraining orders against the manufacturers. Beyond the legal details, the lawsuits highlight how valuable viewing data has become. It helps explain how large televisions can be sold so cheaply: the hardware is often subsidized by ongoing data collection.

For viewers who are concerned about this practice, the most reliable option remains disconnecting the television from the internet entirely and using an external streaming device, such as an Apple TV, that offers stronger privacy controls. Even then, avoiding tracking altogether is difficult, but these cases shed light on just how much data smart TVs can collect—and how little most users may realize about what is happening in the background.

I’ll continue watching how these lawsuits develop, since they may signal whether other states are willing to challenge an industry practice that has largely operated out of public view until now.

2025 Toyota Sienna Recall: A Tale of Betrayal by a Once Trusted Brand..

I bought a new Toyota Sienna with my wife in January of 2025, a Woodland Edition that replaced our 2019 Sienna. It was our third Toyota, following a Highlander and a previous 2019 Sienna. Until recently I had no reason to question the brand. The vehicle itself has been solid, and nothing about the driving experience suggested there was a serious issue lurking beneath the surface.

That changed when a recall notice arrived in the mail in mid-December. The letter explained that Siennas manufactured between January and July of 2025 may have defective middle-row seat rails. In certain high-speed collisions, those seats could lose structural integrity if occupied, increasing the risk of injury. Toyota’s guidance was blunt: no one should sit in the second-row seats while the vehicle is moving until a fix is implemented. At the time of the notice, no remedy had been defined.

Explore more in my most recent commentary video.

What troubled me was not just the defect, but the timeline. According to Toyota’s own filings, the company became aware by September that the seats could dislodge in a crash. A voluntary safety campaign decision was made on October 1, and the National Highway Traffic Safety Administration was notified shortly thereafter. Dealers were also informed at that time and instructed not to sell affected vehicles. Yet as a customer, I did not learn about the issue until roughly two months later, despite continuing to drive my family in a vehicle Toyota already knew had a potentially serious safety problem.

Toyota did issue a press release when the recall was filed, but it was easy to miss if you are not actively following automotive news. When I asked Toyota’s PR department why customers were not contacted sooner, I was told that assembling mailing lists takes time and that federal regulations allow up to 60 days for notification by first-class mail. I was also told there is no comprehensive digital database of owner contact information. That explanation rang hollow, especially after customer service was able to pull up my details immediately using the VIN when I called them.

There is also the role of the dealership. I purchased this vehicle locally, from a dealer that has sold me multiple cars over the years. They had the same information Toyota had in early October, yet there was no proactive outreach to customers who had recently driven off the lot in affected vehicles. A phone call warning families about a seating restriction would not have required a regulatory mandate, only a basic sense of responsibility and duty of care for customers.

Seeking a workaround introduced a second layer of frustration. The service bulletin indicated that impacted customers were eligible for a loaner or a rental vehicle with a daily allowance. When I contacted the dealer, I was told there were no loaners available and that any replacement vehicle would be “whatever was on hand.” The option of a rental was initially dismissed, despite being clearly outlined in Toyota’s own documentation. It took several calls between the dealership and corporate support before a rental was finally arranged.

For now, we will be driving a rented minivan on Toyota’s dime while waiting for the company to determine how it will address the defect. The inconvenience is manageable, but the experience has shaken my confidence.

This was not a minor oversight or a cosmetic issue. It involved seating where children ride, and it carried acknowledged safety risks. Knowing that both the manufacturer and dealers were aware of the problem months before customers were directly notified is difficult to reconcile with the trust that brand loyalty is built on.

I still like the vehicle, and I still want this to be resolved properly. But this episode raises broader questions about how companies communicate with customers when safety is at stake, and whether meeting the minimum regulatory requirement is an adequate substitute for timely, direct warnings.

The FCC Vote on ATSC 3.0 Opens a New Comment Period on DRM, Tuner Mandates

For the past couple of years, viewers like us have been urging the FCC to rein in broadcasters who want to lock down free antenna signals with encryption. These broadcasters would prefer you watch through paid services that generate retransmission fees, but many of us have been pushing back to preserve the ability to view and record free local TV as we always have.

In my latest video, I talk about a recent FCC vote that moves the process to its next step, one with a significant focus on DRM.

Back in August, Tyler the Antenna Man and I visited the FCC to deliver those concerns in person. A few weeks ago, the commission released a draft order that reflected much of what we presented. The document included serious questions for the industry about how they’ve been handling DRM under ATSC 3.0 and whether their current encryption practices even comply with the Communications Act. The FCC also asked whether regulation of DRM should fall under its authority rather than a private group like A3SA, as it does now, and whether privacy protections and fair-use rights need to be written into formal rules rather than left to voluntary standards.

Two commissioners, Republican Olivia Trusty and Democrat Anna Gomez, acknowledged the discontent members of the general public are feeling about the ATSC 3.0 transition and committed to ensuring the public interest is a priority in future decision making.

The commissioners voted unanimously to move the process forward. While no new rules are in place yet, the order proposes ending the simulcast requirement that forces stations to broadcast in both ATSC 1.0 and 3.0, and it opens another round of public comment. Once it’s published in the Federal Register, there will be 60 days to file comments and another 30 for replies. That’s our opportunity to make sure the record reflects real-world experience—what it’s actually like trying to tune encrypted 3.0 channels when current devices can’t play them back.

I plan to continue submitting evidence that counters misleading claims from the broadcast lobby. For example, a Sinclair executive recently asserted on LinkedIn that ATSC 3.0 works on phones, tablets, and gateway devices. It doesn’t. I tested every configuration he mentioned—USB-C tuners, set-top boxes, network gateways—and none could decrypt the DRM-protected broadcasts. SiliconDust’s HDHomeRun, which he cited as compatible, has been locked out entirely from A3SA’s system. The president of SiliconDust even appealed directly to the FCC for relief. When industry talking points like that appear, I post photographic proof of what consumers actually encounter: a black screen where free TV used to be.

One other example occurred on the official docket. In a filing, broadcasters reversed their position on tuner mandates. Just a few years ago they told the FCC to stay out of hardware requirements. Now they’re asking for mandatory ATSC 3.0 tuners, even though DRM complexity has made manufacturing affordable devices nearly impossible.

As the next comment window opens, I’ll share updates through an email list at lon.tv/rapidresponse and a set of instructions at lon.tv/fccinstructions for anyone who wants to participate. This FCC seems more receptive to the public than prior FCCs, but the chairman is moving quickly, so timing will matter. When broadcasters spread misinformation, the best response is data—photos, test results, and honest firsthand accounts. That’s how we keep the record straight and make sure free, open access to local TV doesn’t quietly disappear behind a paywall.

The Disney vs. YouTube / Google Dispute Gets Even Worse…

I’ve been following the latest corporate clash between Disney and YouTube, and what’s striking is how much it mirrors the cable disputes of the past—except now it’s happening in the streaming world.

I dive into what’s going on in my latest video.

If you subscribe to YouTube TV, you’ve likely noticed the fallout firsthand. Disney’s channels—including ESPN and local ABC affiliates—have vanished due to a carriage dispute. In addition to losing live television, anything recorded on the YouTube DVR has disappeared too. Those recordings were effectively part of the licensing agreement, not owned by the user doing the recording, and that license is now suspended.

The tension doesn’t stop at television. Disney has also pulled all of its movies from Google’s digital stores, including YouTube and Google Play. That means you can’t buy or rent new Disney titles there anymore. Meanwhile, Google has withdrawn from the Movies Anywhere service, a consumer-friendly platform that let users sync digital movie purchases across multiple services like Apple TV, Prime Video, and (formerly) Google Play. I’ve always appreciated that system—it offered rare flexibility in a digital landscape full of restrictions—but now, for Google users at least, it’s no longer working the way it used to.

Underneath these disputes is a deeper problem: the TV industry’s outdated economic model. There was a time when networks competed on content quality and ad revenue. Now, they rely heavily on retransmission fees—payments from cable or streaming services that carry their channels. As customers cut the cord to escape rising costs, networks have responded by hiking prices even more, a cycle that keeps pushing people away.

I saw it myself before I canceled cable; I was paying $35 a month just for local TV channels. Those fees have crept into streaming too—YouTube TV’s base plan has climbed from $35 in 2017 to $83 last year, and more increases are likely if these negotiations continue to go badly for streamers.

Broadcasters, rather than adapting, are lobbying for rule changes that would let them negotiate retransmission deals station by station instead of through national networks. That would almost certainly mean higher prices and more blackouts, similar to what legacy cable customers face. They’ve packaged the effort under the guise of supporting local news, but the real motive is to extract more revenue from platforms like YouTube TV. Consumers end up paying the price, both figuratively and literally.

At the same time, the broadcast industry is making over-the-air viewing less accessible. With the rollout of the ATSC 3.0 standard—also called NextGen TV—broadcasters are adding encryption that limits what viewers can record or stream inside their own homes. It’s another way of nudging people back toward paid streaming, where networks can charge retransmission fees and control access.

All of this paints a bleak picture for consumers. The fight between Disney and Google is about who gets to collect your subscription dollars, not about improving the viewing experience. While they posture in the media against each other, viewers lose access to channels, movies, and services that once worked seamlessly. I still buy physical media for that reason—Blu-rays with digital codes I can redeem independently of these shifting corporate agreements. Those discs can’t be taken away from me in a dispute.

Eventually, Disney and Google will almost certainly strike a new deal. But when they do, the outcome is easy to predict: everything will return, and it will cost more. In the meantime, it’s another reminder of how little control consumers actually have in the streaming age, and how quickly “your” digital library can turn into theirs again.

Is Smart TV HDMI Spying Legal?

After last week’s video about how smart TVs spy on users, I wanted to take a deeper look at the legalities around allowing TV manufacturers to spy on everything we watch – including what’s connected to our TVs via the HDMI port.

Check it out in my latest video!

As a recap, most televisions don’t just track what apps you use—they can identify what’s on the screen or what’s coming through the speakers, then send that data off to advertisers and data brokers. It’s all done through automatic content recognition, or ACR, and it’s completely legal because users consent to it, often without realizing they have.

When I factory-reset my Roku TV, the setup process gave me two options regarding ACR: “Agree” or “Manage Preferences.” There was no simple “Yes” or “No.” Most people, eager to get started, are going to hit “Agree.”

If you do click through to “Manage Preferences,” you can then opt out, and Roku will still let you use its smart features. That’s more than I can say for my LG TV, which shut down all its smart functions when I declined a new privacy policy after a firmware update. I could still use connected devices, but the built-in apps were locked out until I accepted the new terms. Roku’s approach at least lets you continue using the interface, but I doubt many users go through the trouble to opt out. A real opt-in should offer a clear yes-or-no choice, not bury “no” under layers of menus.

Roku’s privacy policy itself is over a hundred pages long printed out, and scrolling through it takes several minutes. Buried in that text are all the details about how the company collects and sells data. The numbers make it clear why this is so central to their business—Roku’s recent quarterly report showed more than a billion dollars in gross profit from its platform, compared to only about $146 million from hardware. The TVs are just the delivery mechanism; you and your data are the product.

Apple has taken the opposite approach by asking users directly whether they want to be tracked across apps. The first choice shown is “Ask App Not to Track,” followed by “Allow.” When Apple rolled this out, 96 percent of U.S. users opted out, and even now most people still refuse tracking when given a clear choice. Reports from analytics firms put the current opt-in rate somewhere between 15 and 30 percent.

Looking ahead, I’m concerned about where this technology might go as AI becomes more powerful. Right now, companies say they’re only sending “fingerprints” of screen images, not the images themselves, but even small local models that can run on smartphones analyze photos in surprising detail. It’s easy to imagine a manufacturer deciding that full-image uploads could make targeting more precise and profitable.

Many viewers told me the simple answer is to keep TVs offline. I agree—that’s the easiest fix. Unplug the Ethernet cable, disable Wi-Fi, and use an external device like an Apple TV or a computer if you want streaming apps. But most consumers don’t do that. When I stopped by Best Buy recently, the salesperson said people mainly care whether their new TV supports the apps they use most. They’re connecting their sets because they want convenience, not because they’ve read a privacy policy.

If regulations ever catch up, maybe they’ll require true opt-in choices instead of manipulative prompts. Until then, the safest move is still to disconnect your television from the internet and think carefully about what you’re agreeing to.

For a good resource on taking back control, my friend Veronica over at Veronica Explained has a video on cutting these services out entirely and running everything with open-source tools. She’s got some solid ideas for handling your own streaming setup without giving away your data.

Your TV’s HDMI Port is Spying on You…

When I bought my LG OLED TV about eight years ago, I never imagined it would one day be spying on everything I watched. Like most people, I was aware that smart TVs track viewing habits for marketing purposes, but what I didn’t realize until recently is just how deep that surveillance goes. These devices actually capture images and audio from anything connected to the TV, whether it’s a game console, a streaming box, or even a home movie streamed from your phone. That information gets packaged up and sent to data brokers or used to target ads across the web.

In my latest analysis video, we dive into this issue and look at how several popular brands implement it.

This kind of tracking happens through something called Automatic Content Recognition, or ACR. It works by sampling what’s on the screen, matching it against a database, and then building a profile around what your household watches. This data also helps marketers measure how many viewers actually see their ads.
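The actual ACR systems are proprietary, but the general technique resembles perceptual hashing: each sampled frame is reduced to a tiny signature that survives scaling and compression, then compared against a reference database. A rough sketch of that idea using a difference hash follows; the image files and the match threshold are illustrative stand-ins, not anything a real vendor has disclosed:

```python
from PIL import Image

def dhash(image: Image.Image, size: int = 8) -> int:
    """Difference hash: a compact signature that survives rescaling and compression."""
    gray = image.convert("L").resize((size + 1, size), Image.LANCZOS)
    pixels = list(gray.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)  # 1 if brightness falls left-to-right
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits between two signatures."""
    return bin(a ^ b).count("1")

# A stand-in reference database mapping signatures to known content.
database = {dhash(Image.open("show_a_frame.png")): "Show A"}

# Match a captured frame against the database, tolerating small differences.
capture = dhash(Image.open("tv_capture.png"))
for signature, title in database.items():
    if hamming(signature, capture) <= 10:  # threshold chosen for illustration
        print("Matched:", title)
```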

When I went through the privacy settings on my LG set after a firmware update, I discovered the TV was monitoring all HDMI inputs, not just built-in apps. And when I tried to opt out, the TV refused to let me use any of its “smart” features unless I agreed to those terms.

Other manufacturers handle it differently, though not necessarily better. Samsung buries its ACR disclosure deep in its privacy statements, and while there’s an option to disable “SyncPlus and Interactive Functions,” it’s not clear how complete that shut-off really is.

Amazon says its Fire TV–powered televisions create digital fingerprints from the shows and ads you watch, with the stated goals of verifying ad impressions and “reducing repetition,” but that still means every pixel and sound might be analyzed.

Roku is the most open about its practices – and even brags about winning an Emmy for its TV-spying technology – mostly because it uses that transparency to sell advertisers on the value of its data. The company even boasts about its ability to track what games are being played on connected consoles and how long people play them.

Google TV is the biggest mystery of the bunch. There’s little public information about whether Google itself runs ACR or leaves it to each manufacturer. Hisense, for instance, admits to collecting both audio and video data through its Google TV sets. I couldn’t find any comparable details from Sony (a larger maker of Google TV sets), which suggests the fine print may only appear on the TVs themselves, hidden behind those long on-screen agreements few people read before clicking “accept.”

For anyone worried about this kind of data collection, the best defense is to treat your TV as just a display. Disconnect it from the internet and use a separate streaming box instead. I use an Apple TV for that reason—it isn’t perfect, but it’s far less aggressive about data sharing than the others. Consumer Reports maintains a useful guide explaining how to disable tracking features across most major brands, which I’d recommend checking out.

After reading through my LG’s privacy policy line by line, I was startled to realize how much of my personal life could be analyzed simply because it passes through an HDMI cable or is streamed to the TV over my local network. The notion of “the privacy of your own home” is quickly being eroded by our “smart” technologies.

See more analysis pieces on my YouTube channel!

What’s Going on With Fire TV?

Amazon’s new “Select” 4K streaming stick with the new Vega OS has not been well received – especially by enthusiasts. In my latest video, we take a look at what’s going on with Fire TV and why Amazon is moving away from the Android-based player we’ve come to know and mostly love over the last decade.

When I started covering tech on YouTube more than a decade ago, one of the earliest products I reviewed was the original Amazon Fire TV. It was a time when streaming boxes were still new and fragmented. Roku was around but, then as now, limited in capabilities, and Apple’s TV box didn’t yet have apps. Amazon’s entry in 2014 was a surprise — an Android-based device with an interface built for television. It even beat Google’s Nexus Player, the first official Android TV device, to market by a few months.

Back then, the Fire TV felt like a meaningful step forward. Amazon had invested in game development studios and the box had decent graphics performance for casual play. You could sideload Android apps, and it was fast at launching video, caching streams so they started almost instantly. The platform was flexible, and the company was building a product that appealed to both mainstream users and enthusiasts.

Fast forward eleven years, and Amazon’s latest Fire TV device, the 4K Select, runs something entirely different. The operating system, called Vega OS, has replaced Android under the hood, but Amazon isn’t marketing it openly. It’s not mentioned on the box or in promotional materials. What’s more, this new system limits what the device can do. Apps now need to be rewritten for Vega OS, and many haven’t made the jump yet. In some cases, Amazon is actually streaming apps from the cloud to make them run on the new hardware, a workaround that shows how much compatibility has changed.

This move appears to be a shift in priorities. Vega OS likely helps Amazon build cheaper hardware with lower overhead, targeting the low-end streaming stick segment rather than the higher-performance devices that used to appeal to enthusiasts. Developers can build in React Native, which is cross-platform, but that still means maintaining another version of their app specifically for Vega. Whether streaming app makers will see that as worth the effort remains to be seen.

According to AFTVNews, Amazon is keeping Vega OS confined to the entry-level devices for now, while higher-end Fire TVs and smart TVs may move to a different system based on Android 14.

The timing of this change may have something to do with where Amazon stands in the streaming device market. Data from Pixalate shows Roku leading with about 36 percent of U.S. market share, far ahead of Fire TV’s 14 percent. Roku focuses almost entirely on delivering video streaming with a simple interface. Consumers seem to prefer that over devices that try to do more. Fire TV’s more advanced features don’t appear to be helping it compete.

Roku’s financials tell a similar story. They’ve been selling hardware at little or no profit but making nearly a billion dollars a quarter in gross profit from their platform business — most of it advertising. These devices aren’t meant to be powerful computers anymore; they’re ad platforms with remotes attached. Amazon seems to be trying that model, prioritizing simplicity and scale over capability.

Google is reportedly rethinking its own TV strategy as well, possibly moving away from its current Google TV platform. For users who enjoyed the flexibility of older devices like the NVIDIA Shield (compensated affiliate link), there may not be many options left. The Shield still offers features like sideloading, local media playback, and advanced home theater support with Dolby Vision and lossless ATMOS, but it’s starting to look like an artifact of a different era.

I find it telling that Amazon, a company that once encouraged experimentation on its Fire TV line, is now quietly locking it down. For people who use these boxes just to stream Netflix or Prime Video, that may not matter. But for those who like to tinker — to run emulators, custom apps, or personal media servers — this marks the end of an era. The industry seems to be moving toward simpler, more disposable devices designed to serve ads and stream content, not extend functionality.

My advice? Buy as many NVIDIA Shield devices as you can while they’re still for sale.

Windows 10 Is Dead – What Are Your Upgrade Options?

The end of Windows 10 is coming up, with Microsoft planning to stop support on October 14, 2025. I’ve been seeing the same warnings you probably have — those pop-ups telling you to upgrade to Windows 11 — and I wanted to take a closer look at what that really means for people still using perfectly good older computers.

Check it out in my latest video!

Windows 10 has had a long run, and I’ve always liked how well it performed even on lower-end hardware. The problem now is that Windows 11 has stricter requirements, mainly the need for a TPM 2.0 security chip and newer processors. If you’ve got an Intel 8th Gen or newer, or an AMD Ryzen 2000 or newer, you’re likely eligible to upgrade.
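If you’d rather check the TPM yourself before trusting a pop-up, Windows includes a tpmtool command-line utility. Here’s a small sketch that calls it from Python; the exact output labels vary by Windows build, so the parsing is approximate:

```python
import subprocess

# Query the TPM via Windows' built-in tpmtool (Windows 10/11 only).
result = subprocess.run(
    ["tpmtool", "getdeviceinformation"],
    capture_output=True, text=True,
)

# Look for lines mentioning the spec version; labels vary by build.
for line in result.stdout.splitlines():
    if "version" in line.lower():
        print(line.strip())
```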

Anything older isn’t officially supported, though there are ways around it. Microsoft doesn’t recommend circumventing the TPM requirement, and if a future update assumes everyone has TPM 2.0, machines that bypassed the check could run into problems. Business and government users also have to meet compliance standards, so running an unsupported configuration isn’t an option for them.

To see how this plays out in the real world, I fired up one of my older PCs — a small Shuttle box with a Celeron processor — and ran Microsoft’s PC Health Check app. It said I could upgrade for free, meaning this one squeaks by. Once Windows Update offers it, I can upgrade to Windows 11 in place. As always, it’s smart to back up first, but the process should be straightforward.

If your machine doesn’t qualify or you’re not ready to move on, Microsoft has something called the Windows 10 Extended Security Update (ESU) program. It’s available to consumers for another year, through October 2026. You can join it for free if you sync your PC settings with Microsoft, trade some Microsoft reward points, or pay $30. It’s not a long-term fix, but it buys more time for hardware that’s still working fine.

For people who’d rather try something new, Linux is worth a look. I tested Linux Mint on that same Shuttle PC, running the XFCE Edition since it’s lightweight and good for older systems. It’s surprisingly easy to get going, with a “live boot” option that lets you try it out without installing anything. Everything worked on my demo machine, and once installed, Mint has most of what you’d need — a web browser, office software, and access to more apps through its software manager. It uses about 1.2 GB of RAM sitting idle, so a 4 GB system runs comfortably.
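If you want to verify that idle memory figure on your own hardware during a live boot, the numbers are in /proc/meminfo. A short sketch:

```python
# Read idle memory usage from /proc/meminfo on a live-booted Linux system.
def meminfo() -> dict:
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])  # values are reported in kB
    return info

m = meminfo()
used_gb = (m["MemTotal"] - m["MemAvailable"]) / 1024 ** 2
total_gb = m["MemTotal"] / 1024 ** 2
print(f"In use: {used_gb:.1f} GB of {total_gb:.1f} GB")
```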

Installing Linux does mean wiping the drive, so backups are essential, but if you’re done fighting with Windows upgrades, it’s a practical way to keep an older PC useful. I’ve noticed Linux often feels faster on aging machines than Windows 11 does, and since Mint’s current version is supported through 2029, it’s a stable alternative.

Whether you stick with Windows 10 a bit longer, move to Windows 11, or jump to Linux, you’ve still got options. It’s interesting that after all these years, some of the oldest PCs still have life left in them — they just need a new OS to keep going.