Multiple Studies Show DRM Encourages Rather than Restricts Piracy!

I recently observed the National Association of Broadcasters criticizing Major League Baseball and Netflix for placing sports content behind paywalls. This critique stands in notable contradiction to the broadcast industry's own current effort to encrypt the public airwaves. While broadcasters claim to be the center of community connection by delivering free games to millions, their recent actions suggest a shift toward a business model more closely resembling the streaming platforms they criticize – including locking down over-the-air content with DRM.

In my latest video, we take a look at whether DRM actually works to stop piracy. Spoiler alert: it doesn't – in fact, there's strong evidence to suggest it actually increases piracy.

In my home state of Connecticut, for instance, broadcast TV fees for cable subscribers have risen from $8 in 2018 to over $48 per month today. This cost exceeds a standard Netflix subscription and reflects the price consumers are paying for access to local stations via cable. While an antenna remains a traditional method for receiving these signals at no cost, the industry is moving toward a new standard known as NextGen TV. This transition involves digital rights management, or DRM, which requires consumers to purchase specific high-end televisions or expensive external (and barely functional) tuning boxes. This shift also restricts the use of gateway devices that currently allow viewers to watch over-the-air television on various screens throughout their homes.

I find the current trajectory of the broadcast industry mirrors the mistakes made by the music industry two decades ago. During the early 2000s, record labels were on the ropes with a huge decline in revenue as consumers desired digital music that simply wasn’t available. Eventually the labels were strong-armed into selling music online but insisted on DRM to restrict how and where consumers could play purchased music. This lack of interoperability led many consumers to favor piracy for its convenience. It was only after the industry moved toward DRM-free audio that its financial health improved. Today, the music industry sees record revenues because it no longer restricts the devices or platforms consumers use to listen to their products.

Research supports the idea that restrictive encryption often backfires. A 2003 study conducted by HP in partnership with MIT concluded that DRM features were not effective at combating piracy. The researchers noted that content must eventually be converted into an unprotected form, such as sound waves or light, to be consumed—a vulnerability often called the "analog hole" that is easily exploited. Furthermore, data from the University of Toronto in 2013 showed that removing DRM led to a 10% increase in music sales and a 30% increase for back-catalog items. A 2010 study from Seoul National University similarly found that the inconveniences of DRM reduced legal demand and increased piracy.

The broadcast industry’s current approach to DRM lacks ubiquity. At present, the encryption used for NextGen TV only functions on Android-based devices, leaving users of Roku, Apple TVs, PCs, iPhones, iPads and Xbox devices unable to decode the content. This is a significant departure from successful platforms like Netflix or Spotify, which ensure their encrypted content works across nearly every available device. By narrowing the range of compatible hardware, broadcasters risk alienating their remaining audience.

The Federal Communications Commission is currently weighing the implementation of these encryption standards. I believe it is important for the public to communicate the potential inconveniences of this technology to their congressional representatives. While the industry highlights the technical benefits of the new standard, the restrictive nature of the accompanying encryption is often omitted from the conversation.

The historical data from the music industry suggests that when legal access becomes more difficult than the alternative, the industry itself suffers the most. The outcome of the current deliberations at the FCC will determine whether broadcast television remains a broadly accessible public resource or becomes a more restricted and hardware-dependent medium.

New MiSTer Cores! 3DO and Apple IIgs FPGA Betas Show Promise

I have been revisiting the MiSTer project recently to look at two new cores currently in development for the platform. This hardware, which costs approximately $160, uses FPGA chips to replicate the original logic of vintage computers and game consoles from the mid-1990s and earlier.

In my latest MiSTer update, I look at two new cores – one for the 3DO and the other for the Apple IIgs, both of which are receiving significant updates from the development community.

See them in action in my latest review!

The 3DO core, developed by Srg320, is nearing completion and is currently available for testing on single-RAM MiSTer devices. In 1994, the 3DO occupied a specific niche in the market, offering graphical fidelity that rivaled and in some cases exceeded high-end PCs at a much lower price point. The console had support from Electronic Arts and a few other well-known publishers, who made next-gen ports of their 16-bit titles along with new games. I bought my Panasonic 3DO console in 1994 when the price dropped from $799 to $399.

The system seller for the 3DO was the amazing port of Road Rash, which came with arcade-quality 3D graphics, a great soundtrack featuring Soundgarden and other popular artists, and some killer full-motion-video cut scenes. Testing Road Rash on the new core showed performance that appears consistent with the original hardware, though perhaps slightly less fluid than a stock console.

I also spent time with Wing Commander 3, a game notable for its transition between full-motion video segments starring Mark Hamill and Tom Wilson and 3D space fighter combat. The video playback is stable, though the output seems slightly dark, suggesting a need for gamma adjustments. I observed minor graphical artifacts, such as unexpected patterns in the starfields.

Compatibility on the 3DO core is not yet universal; titles like Zhadnost load slowly, and The Need for Speed currently fails due to an NVRAM error. Other titles ran but with some glitches, like a green vertical line visible in Total Eclipse. However, for a beta core, the majority of the library I tested is functional.

Next I turned to the Apple IIgs core, which is being developed by “Allen SWX.” The IIgs implementation emulates a ROM 1 machine with 8MB of RAM. This setup allows for the use of hard drive and floppy disk images including the newer “Woz” format. I was able to boot into GS/OS System 6 and access personal files from my own hard drive images dating back to the early 1990s. The core reproduces the authentic, albeit slow, operating speed of the original hardware. While the games run as expected, the audio output currently sounds somewhat muffled compared to the original machine.

These developments represent a steady expansion of the MiSTer library into systems that were previously considered outliers. While neither core is finished, the progress indicates that the technical hurdles for these specific architectures are being addressed.

The AT4K Launcher for Google TV and Android TV Brings an Ad-Free Experience – No Rooting Required!

I recently spent some time testing a new interface for Android TV and Google TV called AT4K. It brings the visual style of the Apple TV interface to much lower cost devices like the Onn streamer I tested it on. The primary draw of this specific launcher is that it functions without advertisements and can be configured to run as the default launcher without having to root your device, similar to the Projectivy launcher I looked at last year.

Check out AT4K in my latest review!

The layout features a header row that behaves similarly to the standard Android launcher, pulling content cards from associated apps. For instance, when I scrolled to the Apple TV app icon, the header displayed specific shows and movies from that service. If an app does not provide its own cards, the system pulls from other apps like Plex. The header can be removed if you just want the standard app layout.

Below this header, the rest of the applications are arranged in a grid. Managing these icons is straightforward; holding down a selection button triggers a “jiggle” mode that allows for moving apps or grouping them into folders. I created a dedicated folder for games, and the process was functional and mirrored the organizational style found on Apple TV devices.

Navigating the settings reveals two distinct areas: one for the standard Android system settings and another for AT4K’s internal configurations. The launcher supports both light and dark modes, though I found the light mode to be quite legible. There are premium features available for a one-time fee of five dollars, such as the ability to use custom images or videos as backgrounds and the option to expand the app grid from five to seven icons per row. During my time with the app, I encountered some difficulty interacting with the custom image menu, which is something to monitor in future updates.

One of the more practical aspects of AT4K is its ability to become the default launcher without requiring the user to root or hack the device hardware. It utilizes Android’s accessibility options to override the standard launcher. By enabling the AT4K service in the accessibility menu, the launcher can intercept the home button press and manage the boot sequence. To test this, I enabled the “override current launcher” and “start on boot” settings before power-cycling my device.

After the reboot, the original Google TV interface appeared momentarily before AT4K automatically took over. I launched several resource-heavy applications, such as HD HomeRun and Apple TV, and in each instance, pressing the home button returned me to the AT4K interface rather than the factory default.

The app manager within the settings also provides a quick way to hide specific applications from the launcher or access deep system settings like “force stop” or “uninstall.”

I found the setup process to be accessible for most users, as it does not require adjusting complex security settings. For those who prefer the aesthetic of the Apple ecosystem but want to maintain the flexibility of an Android-based device, this launcher offers a functional middle ground. I plan to keep this as my primary interface for the time being, as it provides a streamlined experience that remains stable under regular use.

Six Self-Hosted Apps I Use on My Home Server! Synology, Unraid, Linux, Etc.

The pursuit of digital efficiency often leads to a familiar crossroads where a user must choose between a recurring subscription fee or the sacrifice of data privacy. For some time, I have been looking for ways to streamline my professional and personal workflows without relying on external servers or third-party data mining. The current landscape of open-source software has made it increasingly feasible to host powerful applications on a small home server, such as a Synology or Unraid NAS or a Linux machine, with the applications installed via Docker containers.

In my latest video, I take a look at six self-hosted Docker applications running on my Synology NAS!

To manage these applications securely, I use a private VPN called Tailscale. This allows me to access my home-hosted tools from any location without opening ports on my firewall. It creates a seamless connection between my mobile devices and my server, ensuring that my data remains isolated from the public internet while remaining accessible to me. This setup provides the foundation for several utilities that have replaced more traditional, paid software services.

One of the basic utilities I maintain is Uptime Kuma, a monitoring tool that tracks the status and performance of my various services. It provides real-time data on ping rates and uptime, sending a notification to my phone via an app called Pushover if a service fails. This eliminates the need for a paid monitoring service and provides immediate feedback on the health of my local network.
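The arithmetic behind a dashboard like this is simple enough to sketch. This toy Python snippet is not Uptime Kuma's actual code; it just illustrates the kind of availability and latency figures such a monitor reports from its periodic checks:

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    ok: bool           # did the service respond successfully?
    latency_ms: float  # round-trip time for the check

def uptime_percent(results):
    """Percentage of checks that succeeded."""
    if not results:
        return 0.0
    return 100.0 * sum(r.ok for r in results) / len(results)

def average_latency(results):
    """Mean latency across successful checks only; None if none succeeded."""
    latencies = [r.latency_ms for r in results if r.ok]
    return sum(latencies) / len(latencies) if latencies else None
```

A real monitor performs the HTTP or ping checks on a schedule and fires a notification (in my case, via Pushover) when the success rate drops.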

Information management is another area where self-hosting has proven effective. I use two different RSS readers, FreshRSS and TT-RSS, to curate content from YouTube and various technology websites. Rather than relying on platform algorithms, these tools allow me to organize feeds into specific topics like retro gaming or modern tech. TT-RSS, in particular, is useful for aggregating large volumes of data—sometimes dozens of articles at once—which I then process through other automation tools.

For personal tasks, I have moved toward simpler, self-hosted alternatives to mainstream apps. Actual is a straightforward personal finance tool that functions as a manual checkbook and budgeting application. I don't have it connected to my banks, but the option to do so is available. For note-taking, I have transitioned from the more complex Obsidian to a tool called Blinko. It offers a clean interface that works through the browser on screens of any size, allowing me to capture quick thoughts and organize them with tags later. It also includes an API and an AI component for querying my own notes.

The most substantial part of my current workflow is built on N8N, an open-source automation platform. I use it to handle repetitive tasks that previously took hours of manual effort. For example, my weekly email newsletter (sign up here) is now generated by a workflow that pulls data from my blog and YouTube RSS feeds, formats the text, and utilizes AI to suggest subject lines. I also use N8N to monitor specific FCC dockets for our continuing efforts to stop broadcast TV encryption. When a new filing appears on the FCC website, the system automatically downloads the PDF, sends it to an AI model for summarization, and emails me the highlights.
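The first stage of a workflow like this, pulling items out of an RSS feed, can be sketched in a few lines of standard-library Python. This is an illustrative fragment, not my actual N8N workflow, and the feed contents below are invented:

```python
import xml.etree.ElementTree as ET

def parse_rss(xml_text):
    """Return (title, link) pairs for each <item> in a simple RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        items.append((title, link))
    return items

# Example with an invented feed body:
feed = """<rss version="2.0"><channel><title>Blog</title>
<item><title>Post A</title><link>https://example.com/a</link></item>
<item><title>Post B</title><link>https://example.com/b</link></item>
</channel></rss>"""
```

In N8N the equivalent step is a built-in RSS node; the parsed items then flow into formatting and AI-summarization steps downstream.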

I have also automated my social media presence using these local tools. Instead of paying for a distribution service, I built a queue system that posts updates to platforms like X, Bluesky, Threads, Mastodon, Facebook and LinkedIn at regular intervals. This system was developed with the assistance of Claude, which can connect directly to the server to help write and troubleshoot code. This transition to self-hosting has replaced several hundred dollars in annual subscription fees with a stable, private infrastructure.

As I continue to integrate these tools, the focus remains on finding applications that offer high utility without unnecessary complexity. The transition to a self-hosted environment requires an initial investment in learning how to manage Docker containers, but the resulting control over data and workflow efficiency provides a clear alternative to the standard subscription model. I am regularly looking for new applications to add to this local ecosystem as the technology evolves.

Check out more self hosting videos here!

Music Labels Lose a Big Piracy Case at the Supreme Court

A twelve-year legal battle over piracy between the music industry and internet service providers has finally been brought to an end by the US Supreme Court. The court overturned a $1 billion verdict against Cox Communications, a decision that has significant implications for how we understand copyright liability and the responsibilities of those who provide our internet access.

See more in my latest video!

The history of this conflict dates back to the early 2000s, when the music and film industries struggled to adapt to the rise of digital file sharing. Initially, the music industry started suing its own customers, hitting them with federal lawsuits. One instance involved a 12-year-old girl having to cough up $2,000 for a settlement, and another saw a woman held liable for hundreds of thousands of dollars for sharing 24 songs.

At the time, piracy was often driven by a lack of convenient, legal digital options. Physical media sales were declining, and digital purchases were often restricted by digital rights management, or DRM, which limited how and where consumers could listen to their music.

When the strategy of suing individual users failed to curb piracy or improve the industry's public image, the focus shifted toward where the money is: internet service providers. Organizations representing the record and motion picture industries established the Copyright Alert System, partnering with major ISPs to issue warnings to users who were sharing copyrighted material.

Cox Communications did not participate in this program, and that put a target on its back. A lawsuit was filed against the ISP in 2014, with music label BMG arguing that Cox should be held liable for the infringement occurring on its network. BMG claimed that because Cox did not adequately respond to infringement notices, it lost the "safe harbor" protections usually granted to service providers under the Digital Millennium Copyright Act.

A federal jury originally sided with the music labels, awarding a billion-dollar verdict against Cox. However, the Supreme Court's recent reversal of this decision centered on a specific interpretation of federal copyright law. Justice Clarence Thomas, who authored the decision, noted that while Cox may not have met the requirements for DMCA safe harbor protection, other aspects of federal law still provide an adequate defense. The ruling clarifies that a service provider is only liable if it intended for its service to be used for infringement or if it marketed itself specifically for that purpose. Because Cox provides a general-use internet service and did not induce its users to pirate material, the court found it could not be held responsible for the specific copyrights violated by its subscribers.

This development changes the landscape for other ISPs as well. They now have a defense beyond the safe harbor provisions, meaning they may not feel the same pressure to react to every automated infringement notice they receive. I suspect this will lead to a decrease in the haphazard distribution of warnings to account holders. While direct lawsuits against individuals may still occur, particularly in cases involving large volumes of distribution, the era of trying to hold the entire infrastructure of the internet accountable for individual user behavior seems to be shifting.

It should be noted that the music industry eventually found success not through litigation, but by listening to consumer demand. When they removed DRM from digital music purchases and embraced affordable streaming services, revenues skyrocketed. It is a reminder that market accessibility often addresses the root causes of piracy more effectively than legal threats.

As other industries, such as broadcasting, consider implementing new restrictions on content, the industry changes that have taken place since this case was filed suggest that focusing on what the customer wants is a more sustainable path than pursuing multi-billion dollar judgments against service providers. This ruling brings a level of technical and legal sanity back to the conversation regarding how we use and access the internet.

What a sub-$500 Mini PC looks like these days: GEEKOM A5 Pro Review

Finding a mini PC for under $500 has become increasingly difficult in the current market, but I recently spent some time with the Geekom A5 Pro (compensated affiliate link) to see how it balances cost and performance. While the machine bears a physical resemblance to the more powerful A8 model, this version utilizes an older Ryzen 7530U processor and targets users with more modest computing requirements.

Check it out in my latest video review!

The unit Geekom sent me for review can be found on Amazon (compensated affiliate link). It features a Ryzen 7530U, which is an older six-core, 12-thread chip running at a 15-watt TDP.

Inside, the hardware is accessible but reveals some of the compromises made to reach this price point. It uses DDR4 RAM rather than faster DDR5, and while there is an expansion slot for a second SSD, it is limited to the SATA interface rather than NVMe. The RAM can be upgraded to 64GB. I also noticed during disassembly that the Wi-Fi antenna design is somewhat delicate; the cable is easily detached when opening the case and requires some patience to reconnect to the motherboard.

The external build quality remains high, featuring a metal case and a variety of ports. The front panel includes two 10Gbps USB-A ports—one of which supports device charging while the PC is powered down—alongside a headphone jack. The side houses a full-size SD card reader, while the back provides two HDMI ports and two USB-C ports. While it lacks USB 4, the USB-C ports do support video output, allowing a four-display 4K setup. There is also a 2.5 gigabit per second Ethernet port that performed as advertised in my testing.

In daily operation, the A5 Pro is efficient and quiet. It idles at around 7 watts and peaks at 46 watts under heavy load. The system fan is rarely audible during standard desktop tasks. It includes a licensed copy of Windows 11 Pro, and the machine handled web browsing and general office applications smoothly. However, the age of the processor becomes apparent when pushing the integrated graphics. During 4K YouTube playback at 60 frames per second, I observed frequent dropped frames, a limitation not typically seen on more modern AMD chips.

Creative tasks and gaming yielded mixed results. Simple video editing in DaVinci Resolve is feasible for basic projects, but adding complex effects or transitions leads to significant rendering delays and stuttering during playback. Gaming performance is similarly constrained; modern AAA titles like Cyberpunk 2077 struggled to reach 15 frames per second at 1080p on low settings. However, the machine is well-suited for emulation of older consoles or playing legacy PC titles, where it maintained consistent frame rates.

Thermal management is tuned for silence rather than maximum output. The system failed a 3DMark stress test with a score of 95.7%, indicating roughly a 4-5% performance drop during sustained heavy workloads. For most users, this small dip in performance will likely go unnoticed, especially given the quiet nature of the fan.

The machine performed very well under Linux. Testing with the latest version of Ubuntu showed that all hardware components were recognized immediately, and the interface felt more responsive than Windows, likely due to the lack of operating system bloat.

While the A5 Pro could serve as a capable low-power home server, its AMD architecture makes it less ideal for hardware transcoding in applications like Plex compared to Intel-based alternatives.

Ultimately, this device reflects the current state of the hardware market. A few years ago, this budget would have secured more contemporary components, but today it buys a reliable, if slightly older, set of specifications. It remains a functional option for light office work or a dedicated Linux station, provided the user understands the graphical limitations inherent in the hardware.

Hamgeek FPGA MiSTer Clone Review

I ordered another cheap MiSTer FPGA clone off a site called Hamgeek the other day. Hamgeek mostly sells amateur radio gear and a few other curious gadgets. Like other MiSTer devices we’ve looked at recently, it uses an FPGA chip to accurately replicate retro computing, gaming and arcade systems from the 90s on back.

Check it out in my latest MiSTer video!

The Hamgeek unit cost about $160 and arrived fully assembled with a 32 GB SD card preloaded, which let me skip the initial flashing and get straight to testing. The Hamgeek MiSTer is effectively a “clone of a clone,” utilizing the same hardware design of the QMTech device we looked at a few weeks ago.

Like other MiSTers I’ve tested you will need to download and run the Update_all script to get all of the supported cores and features to work. You can see the full setup process in the MiSTer Pi video I did last year.

Compatibility on the Hamgeek feels just as good as the other MiSTer clones we’ve looked at over the last year. I tested a range of demanding and lower-end cores. The Amiga core looked crisp and executed complex demo scene disk images flawlessly. The Saturn core ran Daytona USA without visible issues, and the Sega 32X handled After Burner perfectly. I also ran Street Fighter Alpha 3 on a CRT for extended periods, played the Neo Geo’s King of Fighters 2003, and tried Wave Race on the Nintendo 64 core. On the low end, NES and Atari 2600 content ran as expected. Overall compatibility and stability across the cores I exercised matched what I’ve come to expect from consumer MiSTer builds.

I also ran a memory test that exercises the 128 MB memory module. It sustained 167 MHz for about ten minutes without errors, which suggests the hardware has some performance headroom beyond what most cores require.

Video output options are flexible: HDMI for modern displays, a VGA port that can deliver RGB or component video for late-model CRTs, and analog and optical audio output via a combined 3.5mm jack. The unit does not provide RCA composite or S-Video natively, so if your television only accepts composite you’ll need an adapter, or consider waiting for the Superstation One MiSTer clone that will include more analog video output options built in.

Like other MiSTer builds, this one includes a port for SNAC adapters, which allow for direct electrical connections to certain controller types and accessories. I verified light-gun functionality on a CRT using the NES core and a Zapper.

The box has a limited number of USB ports — enough for an external hard drive and a couple of controllers, but you’ll likely want a hub — and it does not include built-in Wi‑Fi. You can add Wi‑Fi and Bluetooth with a USB dongle. MiSTers generally do not require an active Internet connection but you will need to go online for core updates.

There’s an internal cooling fan that runs continuously; it’s audible but not loud. The metal case version of the Hamgeek MiSTer I opted for is more robust than the plastic one that’s available for the same price.

If you want a ready-to-use MiSTer without assembling parts, units like this make that option accessible at a lower price than earlier preassembled builds. It’s great to see the MiSTer ecosystem getting more accessible!

See more of my MiSTer content here!

ATSC 3 Update: Dueling Surveys & Contact Your Congressperson!

In my latest ATSC 3.0 update video, I take a look at dueling consumer surveys: one from the Consumer Technology Association (CTA) opposing TV tuner mandates, and another from broadcasters suggesting consumers will be more than happy to buy expensive hardware when the rug is pulled out from under us.

Pearl TV, an organization representing broadcasters, recently published a survey indicating that most viewers would be willing to purchase a low-cost converter box, estimated at around $60, rather than lose access to free television. When looking at current market behavior on platforms like Amazon, consumers are choosing tuners priced as low as $30 that include recording capabilities—a feature the proposed $60 DRM-compatible basic boxes would lack according to Pearl.

Pearl’s survey results released so far lack the “cross-tabs” that would reveal all of the questions asked and answered. Only a small amount of data appears in the Pearl TV slide deck, yet the methodology slide reveals the median time to complete the survey was 16 minutes. Clearly they are holding a lot of data back.

On the other side of the issue, the Consumer Technology Association (CTA), which represents electronics manufacturers, argues against government mandates that would force the inclusion of expensive ATSC 3.0 tuners in every television. Their research suggests that while antenna usage has seen a slight uptick to about 15% of households, awareness of the NextGen TV brand remains low. Only 5% of respondents claimed to be familiar with the term, and the vast majority had never seen the official logo. This matches my own observations in retail environments, where the technology is rarely a primary concern for consumers compared to the availability of streaming applications on a particular device.

As the National Association of Broadcasters (NAB) prepares for its annual trade show, the lobbying effort has intensified. Recently, 91 members of the House of Representatives signed a letter pressuring the FCC to move forward with the transition. This indicates that congressional offices are hearing primarily from broadcast interests. My review of the signers shows a bipartisan group of representatives from across the country, many of whom may not be fully briefed on the technical limitations and costs these encryption standards impose on their constituents.

My suggestion? It’s time to reach out to your member of Congress. The easiest approach is to forward along what you’ve already filed with the FCC; short of that, you can use some sample language that I put together here. If you’re looking for a one-stop shop for finding and contacting your representatives, Democracy.io has a helpful utility for doing so.

The FCC remains cautious. Currently, Commissioner Olivia Trusty is the only official scheduled to appear; she is set to deliver a brief 10-minute presentation on ATSC 3.0 at the Las Vegas Convention Center.

With consumer adoption stuck in neutral, thanks to a complicated DRM encryption scheme, broadcasters are now going to rest their hopes on political pressure to try and force their private regulatory regime on the American people. That’s why it’s important for all of us to educate our representatives on what is really going on.

Vibe Coding New Plex Apps! (sponsored post)

For this month’s sponsored Plex video, I examined the process of integrating the Plex API with AI coding assistants like Claude and Google Gemini. The primary objective was to determine whether natural language prompts could generate functional applications to control and analyze data on my local Plex media server.

See it in action in my video!

The development setup was relatively straightforward. I accessed the Plex Media Server API documentation and downloaded the OpenAPI specification, which resulted in a single JSON file. After placing this file in a dedicated local directory, I instructed Claude’s coding application to reference it for API structure.

I tested this approach with Claude Code, ChatGPT’s Codex, and Gemini’s command-line interface on a Mac. All three tools successfully read the JSON file, interpreted the API requirements, and edited the application files directly on my local machine. Since these applications were designed to run locally, standard authentication was bypassed in favor of a Plex token. This token can be retrieved by viewing the XML data of any media item within the Plex web interface and extracting the character string from the resulting URL. You can see how to do that in the video.
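Once the token is in hand, authenticating a request is just a matter of appending it to the URL as the `X-Plex-Token` query parameter, with 32400 being Plex's default port. A minimal Python sketch, where the server address and token are placeholders:

```python
from urllib.parse import urlencode

def plex_request_url(server, path, token, **params):
    """Build a Plex Media Server API URL with the X-Plex-Token
    appended as a query parameter."""
    params["X-Plex-Token"] = token
    return f"http://{server}:32400{path}?{urlencode(params)}"

# e.g. list the server's library sections (placeholder address/token):
url = plex_request_url("192.168.1.50", "/library/sections", "abc123")
# ...then fetch it with urllib.request.urlopen(url) or requests.get(url)
```

This is the same pattern the AI-generated apps used under the hood; keeping the token out of source control is left as an exercise for the reader.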

The initial test was a swipe-based media selection tool. I requested an interface that presented random movie recommendations, where swiping right would immediately trigger playback on an Android TV client. Claude generated the core functionality on the first attempt, requiring only minor debugging to ensure the player execution command operated correctly. By default, the coding tools tended to write the web applications in NodeJS. However, to utilize an existing web server on a Synology NAS, I instructed the AI to rewrite a subsequent project in PHP.

This PHP project resulted in a jukebox-style application designed for multiple users on a local network to add songs to a Plexamp queue. By scanning a QR code, users access a client screen on their mobile devices where they can search my server’s music library and submit song requests. As the administrator, I monitor the queue from an admin interface and have the ability to reorder the requested tracks, shifting specific songs up or down the playback list before they route through Plexamp.
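The reordering logic behind an admin queue like this is simple to sketch. This is a hypothetical Python fragment for illustration, not the PHP code the AI generated:

```python
def move(queue, index, offset):
    """Return a copy of the queue with the item at `index` shifted by
    `offset` positions (negative = toward the front of the queue),
    clamped so items can't move past either end."""
    reordered = list(queue)          # don't mutate the caller's list
    item = reordered.pop(index)
    target = max(0, min(len(reordered), index + offset))
    reordered.insert(target, item)
    return reordered
```

The admin interface just calls something like this with an offset of -1 or +1 for each "move up" / "move down" button press, then hands the resulting list to the playback queue.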

Subsequent experiments focused on data retrieval and display. I directed the AI to build a statistics dashboard that analyzed my viewing habits over the past year. After I had the app filter out content consumed by my children, it generated a localized report on my specific media consumption patterns and active viewing days.
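
Conceptually, the dashboard's filtering step is just grouping watch-history rows by account and counting distinct days. A simplified Python sketch with hypothetical history rows (the real data comes from the Plex server's watch-history data):

```python
from datetime import date

# Hypothetical rows shaped like Plex watch-history entries:
# (account id, date viewed, title). Account 2 is a child's profile.
HISTORY = [
    (1, date(2025, 1, 3), "Movie A"),
    (2, date(2025, 1, 3), "Kids Show"),
    (1, date(2025, 1, 3), "Movie B"),
    (1, date(2025, 2, 10), "Movie C"),
]

def active_days(history, account_id: int) -> int:
    """Count the distinct days a given account watched something,
    ignoring every other account's activity."""
    return len({day for acct, day, _ in history if acct == account_id})

print(active_days(HISTORY, 1))  # 2 active viewing days for account 1
```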

A final application served as a digital “Now Playing” marquee. It queries the server to display the current media’s thumbnail and a progress bar, while simultaneously pulling a list of similar titles from the library. Clicking any of the recommended titles halts the current video and initiates playback of the new selection.
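
The marquee's progress bar is derived from two fields Plex reports for an active session — the playback position (viewOffset) and the total duration, both in milliseconds:

```python
def progress_percent(view_offset_ms: int, duration_ms: int) -> float:
    """Convert Plex's viewOffset/duration pair (milliseconds) into
    a percentage for the progress bar, guarding against a zero
    duration while a session is still spinning up."""
    if duration_ms <= 0:
        return 0.0
    return round(100 * view_offset_ms / duration_ms, 1)

# 45 minutes into a 2-hour movie:
print(progress_percent(2_700_000, 7_200_000))  # 37.5
```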

My initial experiments suggest the barrier to entry for developing customized Plex experiences has lowered significantly. Where interacting with a platform’s API once demanded fluency in specific programming languages, I found that natural language processing models now act as a functional bridge between raw documentation and executable code.

Moving forward, integrating the Model Context Protocol (MCP) to feed the AI Plex’s API documentation will likely make things more efficient, especially for those on constrained token limits with their AI provider. I’ve found Gemini Pro’s command-line interface to be pretty generous in its token allocations.

See more of my Plex content here!

US Effectively Bans All New Router Products

The U.S. government has effectively implemented a ban on most new routers entering the domestic market, a move driven by a national security determination regarding risks posed by networking equipment produced overseas. While the order is broad, it is important to note that existing models already approved by the FCC—such as those currently found on retail shelves—are not prohibited from being sold or imported. The restriction specifically targets new products that have not yet received FCC certification.

I dive into the order and what it might mean in my latest video.

This action follows long-standing concerns from both the Biden and Trump administrations regarding vulnerabilities in consumer networking hardware.

Specifically, federal authorities pointed to prior sophisticated cyberattacks, such as the Volt, Flax, and Salt Typhoon campaigns, which utilized botnets of small office and home office (SOHO) routers to conceal the origin of attacks against U.S. critical infrastructure. In many cases, these attacks exploited “end-of-life” routers that no longer received security firmware updates from their manufacturers.

To gain authorization for new products, manufacturers must now apply for a conditional approval from the DoW/DOD or DHS. This process requires an extensive disclosure of the company’s supply chain, including a detailed bill of materials, the country of origin for all components and software, and an identification of any single points of failure in the manufacturing process.

Beyond security audits, the government is requiring a commitment to domestic production. Applicants must submit a time-bound plan to establish manufacturing and assembly operations within the United States. This includes detailing planned capital expenditures and providing progress reports on onshoring efforts. Currently, the list of compliant router manufacturers remains empty; drone makers are the only industry to have successfully navigated a similar regulatory process thus far.

The definition of a “router” under this regulation is tied to NIST standards, focusing on devices marketed for residential use and customer installation. This creates a technical distinction for hardware such as small-form-factor computers; while these devices can be configured to function as routers using open-source software like pfSense, they are not currently subject to the ban because their primary marketed purpose is as a general-use computer.

Industry reactions have been varied according to a report in PC Magazine. TP-Link, which had previously been a specific focus of government scrutiny, expressed confidence in its supply chain and stated it welcomed an evaluation that applies to the entire industry. U.S.-based Netgear commended the action, suggesting that the regulations could lead to a more secure digital future. Both companies will likely benefit from the action – TP-Link gets to survive and Netgear has the capacity to comply with the domestic onshoring when many of their competitors may not.

I will be monitoring the FCC’s exception list to see which manufacturers are the first to successfully onshore their operations and return new hardware to the pipeline. In the meantime, the focus remains on whether these requirements will effectively eliminate orphaned firmware and provide the level of transparency the government is seeking.

Did Microsoft Admit Windows 11 is Too Bloated?

Microsoft is beginning to acknowledge the growing concerns regarding bloatware and performance issues within Windows 11. Windows head Pavan Davuluri recently published a blog post committing to a new standard of Windows quality. In my latest analysis piece, I dive into what Microsoft thinks the problem is and I offer some of my own experiences.

Check it out here!

While Davuluri’s official roadmap highlights specific improvements like increased taskbar customization and a more dependable File Explorer, many of the everyday frustrations experienced by power users and system reviewers remain unaddressed.

The current onboarding process for a new Windows 11 PC takes over an hour, largely due to a gauntlet of updates and forced configuration screens. Even after the initial setup, users frequently encounter a secondary wave of background updates that can lead to audible fan noise and noticeable performance degradation on a brand-new machine.

Beyond the updates, the operating system’s interface is increasingly defined by a series of prompts designed to funnel users into subscription services and cloud storage. These “upsell” screens often prioritize the “Next” or “Accept” buttons, while the options to decline or keep files stored locally are presented in smaller, less prominent text.

OneDrive integration remains a primary point of friction. Even when a user expresses a preference to store files only on their local device, the system defaults to cloud syncing and backup, requiring a manual and repetitive process to disable individual folders. This persistent nudging extends to the Start menu and taskbar, which are frequently populated with icons for features like Copilot, Recall, and the Edge browser immediately following an update. The Start menu itself has become more cluttered, making it increasingly difficult to find what you’re looking for amidst a sea of promotional icons and unhelpful recommendations.

Even basic utility applications are not immune to this expansion of features. Notepad, a tool that remained virtually unchanged for decades, now includes tabbed windows, cloud synchronization requirements tied to a Microsoft account, and integrated Copilot AI writing assistance. These additions, while intended to modernize the app, introduce new complexities and annoyances to a tool that never needed them. Similarly, background processes like the Xbox overlay continue to run by default, regardless of whether the user intends to use the computer for gaming.

While Microsoft’s new commitment to quality is a positive step, the current state of the operating system has led some to rely on third-party debloating utilities to reclaim system performance. There is also a growing awareness of the increasing user-friendliness of Linux distributions, which may be placing additional pressure on Microsoft to streamline its experience. As the company moves forward with its debloating efforts, the true measure of success will be whether it can reduce the constant stream of distractions and return to a more focused, efficient computing experience.

I’m curious to see if these promised updates will actually thin out the layers of advertisements and background services, or if the primary goal remains centered on revenue extraction through service nudges.

Gadget Tech Haul #14 – A Mixed Bag

In my latest gadget haul, I am looking at five items that vary significantly in utility and performance. But there are a few good ones in the mix that you can find here on Amazon (compensated affiliate link).

Check the haul out here!

I began with a four-way HDMI multiviewer from Orei (compensated affiliate link), a brand known for various video routing connectors. This device allows for four HDMI sources to be connected and displayed on a single screen simultaneously. The front panel features buttons for switching between sources and modes, including a four-way split that is particularly useful for monitoring multiple broadcasts at once. It supports 1080p at 60Hz and is HDCP compliant, meaning it can handle protected content from streaming services like Netflix. While some of the other display modes distort the aspect ratio of the video, the multiviewer functions reliably as an affordable solution for 4-up multi-source monitoring.

The monitor I used to test this device is a 24-inch 240Hz IPS display from Dell at a crazy low price (compensated affiliate link). For a budget-friendly screen, it performs well with a 0.5ms response time in its extreme mode and support for AMD FreeSync. In testing with both modern PC benchmarks and older gaming hardware, I found very little motion blur or screen tearing. The color accuracy is rated at 99% sRGB, which is respectable for this price point. The primary compromise is the peak brightness, which reaches only about 300 nits, and the included stand, which lacks height adjustment and only offers tilt. However, it does feature a VESA mount for those who prefer a more flexible setup.

Transitioning to mobile accessories, I tested the abxylute M4 Snap-On Mobile Gaming Controller (compensated affiliate link), which proved to be a disappointment. Although it uses MagSafe to attach to a phone, the design is top-heavy and the controls are physically cramped. The D-pad and buttons lack a premium feel, and the analog sticks do not include a click function. It also only works with the phone in landscape mode unless the controller is physically detached.

Another item that fell short of expectations was a SanDisk USB-C phone drive (compensated affiliate link). While SanDisk has a long history of reliable storage, this specific drive struggled with write speeds. Although it approached its advertised read speeds at around 140 megabytes per second, the write speeds hovered at 35 megabytes per second. During large file transfers, the drive appeared to write in chunks, often pausing as the cache caught up. It functions adequately for small file transfers or phone backups via the SanDisk app, but it is not a recommended choice for high-volume data tasks.

The final item is the EufyCam S4 (compensated affiliate link), a dual-lens security camera that includes a wide-angle 4K lens and a 2K pan-tilt-zoom (PTZ) camera. A notable aspect of the Eufy system is that it does not require a subscription for AI detection features, such as recognizing humans, vehicles, or pets. The camera effectively tracked movement during my testing, prioritizing the action when a car was pulling in while I was walking my dog. It comes with a 5.5-watt solar panel and a removable battery, which remained at full charge during a week of outdoor use. The solar panel can be detached for better sun placement, with Eufy providing a weatherproof USB-C extension cord for that purpose. While it supports RTSP for integration with personal NAS and NVR devices, using this feature significantly increases power consumption, likely requiring a dedicated USB power source rather than relying solely on the solar panel and battery.

I will continue to keep an eye out for hardware that fulfills its promises as I prepare for the next round of testing.

Beelink ME Pro NAS Review

Beelink recently sent me their ME Pro device, a personal server that essentially functions as a mini PC with expanded storage capabilities. It looks pretty cool too.

Check it out in my latest video review!

The unit I evaluated is the entry-level model featuring an Intel N95 processor. An alternative version with an Intel N150 processor is also available, offering slight improvements in power efficiency and an increase in soldered RAM from 12 gigabytes to 16 gigabytes. Both models operate with a 25-watt thermal design power and are fully capable of managing standard personal server tasks. You can find them on Amazon here (compensated affiliate link).

The internal layout allows for substantial storage expansion. The bottom of the device accommodates up to three NVMe drives, supporting a total of 12 terabytes of solid-state storage. A separate rear panel provides access to bays for two 3.5-inch desktop hard drives. This storage setup is not designed for hot-swapping; all drives and panels must be secured with screws. Beelink includes an Allen wrench for this purpose, though I found its small size makes it somewhat difficult to use effectively. The device is designed for internal maintenance access, allowing the entire motherboard to be removed for cleaning by loosening four screws.

For networking and peripheral connectivity, the ME Pro includes a 10-gigabit-per-second USB-A port on the front and a similarly rated USB-C port on the rear, alongside an HDMI output. The device supports dual 4K display output at 60 frames per second.

Network connections are handled by a 5-gigabit-per-second Ethernet port using a Realtek controller and a secondary 2.5-gigabit port utilizing an Intel controller. When I tested the 5-gigabit connection, it yielded disk writes between 400 and 500 megabytes per second to the solid-state drives, which aligns with expected network overhead limits.
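
Those write speeds line up with the raw math: a 5-gigabit link tops out at 625 megabytes per second before any protocol overhead. A quick back-of-the-envelope Python check (the 20% overhead figure is an assumption for illustration, not a measurement):

```python
def max_disk_write_mb_s(link_gbps: float, overhead: float = 0.2) -> float:
    """Rough ceiling for network file transfers: convert link speed
    from gigabits/s to megabytes/s, then subtract an assumed
    protocol/filesystem overhead fraction (20% by default)."""
    raw = link_gbps * 1000 / 8   # Gbps -> MB/s (decimal units)
    return raw * (1 - overhead)

print(max_disk_write_mb_s(5))  # 500.0 MB/s -- matches the observed 400-500
```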

Operating as a media server via Unraid, the hardware demonstrated clear capability with common server loads. When running Plex, the N95 processor managed hardware transcoding of a 4K HDR video file to 720p with low CPU and bandwidth utilization. It also successfully handled HEVC codec transcoding. During these tasks, with two mechanical desktop drives spinning, power consumption measured approximately 33 watts, peaking near 70 watts under maximum load.

Thermal performance remained stable, with the NVMe drives showing only a minor six to seven-degree Celsius temperature increase under sustained load. Both the fans and the drives operate at a low volume; it’s a very quiet device even with the spinning drives running.

There are a few hardware design choices that warrant observation. The system relies on a 100-watt wall-wart power supply, which is susceptible to accidental disconnection from standard outlets. Additionally, while the unit was shipped with a version of Windows, the necessary drivers were not pre-installed, preventing the operating system from functioning correctly out of the box. This positions the device more as a platform for user-supplied NAS operating systems, such as Unraid or Linux distributions, rather than a turnkey Windows machine.

Furthermore, for a device categorized as a “Pro” model utilizing an OS like Unraid—where one drive is typically dedicated to parity—expanding the SATA drive capacity from two bays to four would provide a more practical parity-to-storage ratio. The current configuration requires careful planning for anyone looking to maximize their redundant storage capacity on this compact platform.

Disclosure: The ME Pro NAS was provided free of charge. However, Beelink did not review or approve this content prior to publication.

ATSC 3.0 Update: More DRM Nonsense Filed with the FCC

The broadcast industry’s ongoing effort to encrypt the public airwaves is currently awaiting a decision from the Federal Communications Commission. In a recent ex-parte letter to the FCC, broadcasters cited the US Trade Representative’s 2025 Review of Notorious Markets for Counterfeiting and Piracy report to support their push for the ATSC 3.0 encryption standard. The report focuses heavily on live sports and the revenue lost to global piracy – but none of it indicates broadcast TV signals are being stolen.

See more in my latest ATSC 3.0 update video!

The report’s introduction references the NFL’s broadcasting agreements with networks like CBS, Fox, and NBC, which run through 2033. These contracts were signed without any provisions or assurances requiring future signal encryption, suggesting the league does not view over-the-air broadcasting as a primary piracy vulnerability.

The report provides three specific instances of piracy: the FIFA World Cup, European soccer games, and the 2017 Mayweather-McGregor fight. While the FIFA World Cup was broadcast on television stations here in the USA, it was likely pirated from encrypted sources along with the other European soccer matches. And the Mayweather-McGregor fight was an encrypted pay-per-view event.

The government’s report cites data from Irdeto, a European company specializing in signal encryption for satellite and streaming providers. A review of their technical literature shows that modern piracy relies on methods like stealing session tokens, purchasing compromised account credentials on the dark web, or utilizing a technique known as CDN leeching.

These methods bypass the physical complexities of installing antennas to intercept local signals, demonstrating that encrypted content is easier for pirates to steal than unencrypted broadcast signals.

Furthermore, Irdeto’s guidance emphasizes the necessity of multi-DRM systems to ensure a frictionless viewing experience across different platforms. Currently, ATSC 3.0 DRM only supports Widevine, a Google technology. This single-DRM approach limits compatibility, leaving devices like Apple TV, Roku, Xbox, and standard computers unable to decode the encrypted broadcasts.

The push for encryption appears closely tied to the economics of broadcast retransmission fees. In Connecticut, for example, cable subscribers currently pay around $48.30 a month strictly for local channel access. Encrypting the over-the-air signals forces consumers to either maintain these cable subscriptions or purchase new, proprietary decoding hardware. Ahead of the upcoming NAB show, industry executives have discussed a proposed $60 tuner box. However, this device is expected to function solely as a tuner without DVR or gateway capabilities and cost three times as much as current tuning devices that do include DVR functions.

Broadcasters also point to the A3SA encoding rules, which currently permit time-shifting and recording. But these allowances apply only to content that is actively simulcast with the older ATSC 1.0 standard. Once the simulcast requirement expires, broadcasters are making no commitment against restricting or disabling recording capabilities entirely, shifting control of public airwave usage to a private entity.

The FCC is presently collecting public feedback on a separate but related sports broadcasting docket (26-45), which examines the impact of broadcasting practices on consumers and local market obligations. The comment period for this specific docket remains open for roughly another week, offering another venue for the public to submit their observations regarding how signal encryption may affect access to local sports broadcasts.

MiSTer Multisystem 2 Review: A “Consolized” Retro FPGA Device

The MiSTer project has evolved from a complex DIY endeavor into a professional-grade cottage industry, and the MiSTer MultiSystem 2 represents the latest shift toward consolized, single-board hardware.

Check it out in my latest MiSTer Review!

Developed in the UK through a collaboration between RMC’s Neil and electronics manufacturer Heber Limited, this device consolidates the traditional stack of MiSTer FPGA boards into a single unified motherboard. The 3D-printed enclosure, which carries a design aesthetic reminiscent of late 80s electronics, houses a system that remains 100% compatible with the broader open-source ecosystem while offering expanded connectivity and thermal stability.

Operating on the same DE10 Nano framework as other MiSTer devices, the system uses FPGA technology to replicate the logic paths of vintage hardware at the chip level rather than through software emulation. This approach allows for high accuracy across a range of platforms, from early arcade systems and 8-bit computers like the Commodore 64 to more demanding cores like the Sega Saturn and Nintendo 64.

My testing indicates the hardware is thermally balanced, maintaining stability even during intensive tasks such as running the Street Fighter Alpha 3 arcade core and running RAM tests at 150 MHz on its 128 MB module.

One of the defining characteristics of the Multisystem 2 is its emphasis on user-accessible expansion. A unique cartridge slot on the top of the unit supports different modules, such as SNAC adapters for zero-lag original controller input, composite video output for older televisions, and even MIDI projects like adding a Raspberry Pi powered Roland MT32 synthesizer for DOS games.

The motherboard features various headers, GPIO pins, and internal space for an NVMe drive, allowing for significant storage and hardware modifications without external clutter.

Connectivity is notably robust, with four front-facing USB ports, dual rear USB ports, Ethernet, and diverse video output options. While modern displays connect via HDMI, the analog version of the Multisystem is designed with a strong focus on CRT users. It includes a SCART-compatible video output and a VGA connector that supports RGB component cables. Because the hardware is integrated onto a single PCB, the analog video output exhibits reduced electrical noise compared to multi-board configurations, resulting in a cleaner image on traditional tube televisions.

The device lacks built-in Wi-Fi or Bluetooth, requiring USB adapters, and utilizes a full-size SD card for its primary OS and core storage. Power is delivered via a 5V barrel connector, though the system can draw up to 4 amps depending on the peripherals attached.

I paid about $386 for mine (including shipping and tariffs), which is priced higher than entry-level alternatives like the QMTech board we looked at a few months ago. But the MultiSystem 2 is positioning itself as a comprehensive enthusiast platform. It bridges the gap between the technical flexibility of the original FPGA development boards and the convenience of a dedicated home console.

Check out my full playlist of MiSTer related videos here!

I Bought a MacBook Neo – Here’s My Review!

I recently purchased the entry-level MacBook Neo for $599 (compensated affiliate link) to evaluate its capabilities. Positioned as Apple’s low-end laptop alternative to the Mac Mini, it can also be found for $499 through the Apple Education Store for students and school staff.

Check it out in my latest review!

The model I tested features the Apple A18 Pro processor, the same chip utilized in last year’s iPhone 16. It includes 8 gigabytes of memory and 256 gigabytes of solid-state storage. While the base storage and memory are fixed, a $699 variant offers 512 gigabytes of storage and a fingerprint reader.

The physical construction consists of a metal chassis with rounded edges, weighing 2.7 pounds. The 13-inch display operates at a 2408 by 506 resolution with a brightness of 500 nits and a 60-hertz refresh rate. Text and images render clearly on the display, and it looks very close to my MacBook Air in overall quality.

The device includes a 1080p webcam equipped with OS-level filters like background replacement and blurring. But I noted an operational detail regarding this webcam: there is no physical indicator light to show when the camera is active; it relies entirely on an on-screen software notification. While Apple’s macOS is quite secure, not having a hardware light for the webcam makes me a little nervous.

Apple made distinct choices regarding input and port options to meet this price point. The keyboard feels nice and may have a little more travel than the MacBook Air, but it lacks backlighting. The trackpad uses a physical click mechanism rather than the solid-state haptics found on more expensive models.

Connectivity is handled by a headphone jack and two USB-C ports. One of the USB-C ports is limited to USB 2.0 speeds, while the other supports 10 gigabits per second data transfer, video output, and charging. Neither port supports Thunderbolt. Stereo speakers are present, though the audio can distort slightly if hands are resting on the chassis.

In practical testing, the A18 processor handles routine computing efficiently. Navigating websites in the Brave browser is responsive, yielding a score of 44.7 on the Browserbench Speedometer benchmark – one of the highest I’ve tested. High-resolution media, including 4K video at 60 frames per second, plays back without dropped frames.

Like other Macs, the Neo comes with excellent native applications like iMovie for video editing, GarageBand for music, and a very functional office suite with Pages, Numbers, and Keynote. All ran flawlessly and felt just as responsive as my more expensive Macs.

When utilizing Apple’s Pixelmator Pro (a Photoshop alternative), the system handled background removal tools and basic edits without noticeable lag. More demanding applications, such as Final Cut Pro, managed 4K 60fps video editing and real-time visual effects effectively, though the 8-gigabyte memory constraint means performance could decrease with larger, heavily layered project files.

Gaming and emulation deliver usable frame rates within reason. The native Apple silicon version of No Man’s Sky maintained frame rates in the high 50s at a 1408 by 881 resolution, and the PCSX2 emulator ran PlayStation 2 titles at full speed without lag.

The device scored 3,458 on the 3DMark Wild Life Extreme benchmark, which puts it well below the M4 and M5 processors found in the MacBook Air and Mac Mini. Due to its fanless design, a stress test revealed the Neo takes a 13 percent performance hit over extended periods of heavy load due to thermal and power throttling. But battery life reached between 10 and 12 hours for basic computing work.

The MacBook Neo demonstrates that a mobile processor can capably drive a full desktop operating system. The Neo provides a highly functional point of entry into the macOS ecosystem. There’s no doubt that this will drive competing PC manufacturers to up their game at the lower end of the market!

Off Grid Comms with Meshcore!

I love playing with digital radio communications. The ability to send data over long distances without any infrastructure or service providers in the middle is such a liberating concept. I’ve done a lot with amateur radio on the channel over the last few years, but lately I’ve been playing with cheap, low-powered LoRa-based devices that don’t require a license.

In my latest video, I take a look at Meshcore, a technology that allows volunteers to build out robust off-grid networks.

Meshcore is similar to Meshtastic, but in my opinion is better suited for long distance communications. My Meshtastic experience in Connecticut has frequently been limited by the lack of nearby users and unreliable message delivery. While I have successfully made contacts from airplanes, ground-level communication has remained a challenge.

The transition to Meshcore revealed a more active community and improved performance within my region. Unlike Meshtastic, which utilizes a managed flood network where every node acts as a repeater, Meshcore requires users to assign specific roles to their devices. A device can be configured as a companion, which serves as a personal radio interface for a phone, or as a dedicated repeater. By separating these roles, the network can route messages through established paths rather than retransmitting every signal from every device. This deterministic approach reduces network congestion and allows for longer-distance communication through strategically placed repeater stations.
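
The airtime difference between the two approaches can be shown with a deliberately simplified model — this is a toy illustration of the routing concept, not actual Meshcore or Meshtastic code:

```python
def flood_transmissions(nodes_in_range: int) -> int:
    """In a managed-flood mesh (the Meshtastic-style approach),
    every node that hears a message retransmits it once, so
    airtime grows with the total node count."""
    return nodes_in_range

def routed_transmissions(path: list) -> int:
    """In a role-based mesh (the Meshcore-style approach), only the
    sender and the dedicated repeaters along an established path
    transmit; companion devices stay quiet."""
    return len(path)

# Toy comparison: 30 nodes within radio range, but only 3 hops
# (sender plus two repeaters) needed to reach the destination.
print(flood_transmissions(30))                                   # 30
print(routed_transmissions(["sender", "repeater-1", "repeater-2"]))  # 3
```

The model is crude, but it captures why deterministic paths leave far more airtime free on a congested channel.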

During my testing, I successfully sent text packets to Enfield and Vernon, Connecticut—locations over 50 miles away that would take more than an hour to reach by car. These transmissions occurred without the use of the internet or requiring a radio license, as the devices operate on the license-free 900 MHz spectrum.

Setting up these devices is a relatively accessible process. Hardware like the Heltec V3 can be purchased super cheap, while fully assembled units like the ThinkNode M5 cost around $54 (compensated affiliate links). Most Meshtastic devices can be re-flashed with Meshcore firmware.

The Meshcore project website provides a web-based flasher to install the firmware, allowing users to choose between companion, repeater, or room server modes. The room server function is particularly notable, acting as a simplified bulletin board system that stores messages for users to read when they later connect their radios.

The current landscape of the network in Connecticut shows a growing infrastructure of repeater stations, with expansion moving toward neighboring states. While it’s strictly for text messaging—no voice or video—there is something really neat about building a communication network that runs entirely on solar power and off-grid hardware.

For those who have found Meshtastic quiet or unreliable, this alternative protocol offers a different architectural approach using the same hardware. I will be watching to see how the interconnection of these regional nodes continues to develop.

Werewolf VFLEX Review: Power Almost Anything over USB-C!

Every once in a while I come across an incredibly useful gadget that becomes an essential part of my “nerd toolbox.” The latest device I’m throwing in there is the Werewolf VFLEX – a universal power adapter for just about anything that connects to a USB-C power supply.

Check it out in my latest video!

The base unit is priced at $8 and the adapter cables are $4. A starter kit containing three base units and multiple adapter cables retails for $48. They can be purchased directly from Werewolf’s website here (compensated affiliate link).

Users first need to attach the base unit to a computer or mobile device to program in the required voltage. Configuration is handled through an Android or iOS app, along with a browser-based interface on PCs.

To test the VFLEX, I powered a vintage Atari 2600 using a USB-C battery. The Atari requires 9 volts of direct current and a center-positive polarity. After dialing in the 9-volt requirement via the web interface, the VFLEX base unit stored the setting and successfully supplied the correct voltage, indicated by a green light on the device. The Atari fired up like it was connected to its 40+ year old power supply yet was powered by the battery.

If the unit fails to receive the requested voltage from the source, it displays a red light and cuts power to the connected device. It is necessary to correctly identify both the voltage and polarity before connecting any hardware, as the VFLEX cannot prevent electrical damage if configured improperly.

The capabilities of the USB-C power source dictate what the VFLEX can output. For instance, an Anker 30-watt adapter I examined supports Programmable Power Supply (PPS), a standard that permits granular voltage adjustments. With PPS, a user can specify voltages between 3.3 and 11 volts at 3 amps, or between 3.3 and 16 volts at 2 amps. In contrast, an older 100-watt Kensington power supply lacking PPS could only output fixed increments of 5, 9, 15, or 20 volts. The quality of the USB-C cables is also a variable; relying on established brands for both cables and power adapters minimizes risks associated with non-compliant USB standard implementations.
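
Checking whether a supply can deliver a given voltage is just a range test. Here's a Python sketch using the PPS ranges quoted above for the Anker adapter and the fixed steps of the non-PPS Kensington supply:

```python
# PPS profiles advertise (min_volts, max_volts, max_amps) ranges;
# these two are the ranges mentioned for the Anker 30-watt adapter.
PPS_RANGES = [(3.3, 11.0, 3.0), (3.3, 16.0, 2.0)]
FIXED_PDOS = [5, 9, 15, 20]   # non-PPS supplies offer only fixed steps

def pps_supports(volts: float, amps: float, ranges=PPS_RANGES) -> bool:
    """True if any advertised PPS range covers the requested
    voltage and current."""
    return any(lo <= volts <= hi and amps <= max_a
               for lo, hi, max_a in ranges)

print(pps_supports(9.0, 1.0))   # True  -- the Atari 2600's 9 V request
print(pps_supports(10.0, 1.0))  # True  -- the Sega 32X/Genesis at 10 V
print(10 in FIXED_PDOS)         # False -- a fixed-step supply can't do 10 V
```

This is why the Sega stack worked on the PPS-capable Anker but a 10-volt request would fail on the older fixed-increment supply.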

For a more complex load, I connected a Sega Tower of Power—comprising a Sega CD, a 32X, and a Genesis console—to a single Anker Prime 160 power adapter (compensated affiliate link) using three VFLEX units. The Sega CD requires a 9-volt supply, while the 32X and the second-generation Genesis require 10 volts. Because the Anker Prime adapter supports PPS, I was able to program two VFLEX units to output 10 volts and one to output 9 volts simultaneously. Monitoring the real-time power data from the Prime adapter showed the system drawing roughly 10-12 watts in operation, well within the Anker’s 160-watt capacity. Consolidating multiple enormous Sega “wall warts” into a single adapter proved functional, provided the operator strictly adheres to the voltage and polarity specifications of the hardware.

If you’re like me and have a bunch of stuff to power, the VFLEX can be a major convenience. While the starter kit supplies enough adapters for routine applications, the system requires the operator to accurately verify the voltage before connecting any hardware to avoid damaging the electronics. When configured correctly, the device bridges modern USB-C power delivery with both legacy and contemporary hardware.

Disclosure: Werewolf provided the VFLEX free of charge, however they did not review or approve this content prior to publication. All opinions are my own.

My Toyota Sienna Van is Now a Lemon Due to an Unaddressed Recall

Back in December, I shared information regarding a recall affecting my 2025 Toyota Sienna. As of today, March 5, 2026, the vehicle has been sitting at the dealership without a resolution. The van has been out of service for almost 90 days, having been at the dealer since December 12th. I’m about to take Toyota to lemon law court here in Connecticut.

See more in my latest video!

The recall addresses an issue where the second-row seat rails may lose their structural integrity due to defective welds, posing a risk of injury. The manufacturer’s notice explicitly stated no one should sit in these seats until a remedy is performed. While the manufacturer instructed dealers to pull the vehicles from lots on October 7th, 2025, my notice did not arrive until 66 days later. To date, no remedy or timeline for a fix has been communicated.

This situation impacts approximately 50,000 Sienna vans. Faced with a vehicle that cannot be safely used as intended, I researched the lemon law in my home state of Connecticut.

Connecticut requires that the vehicle be new, be under two years old, have fewer than 24,000 miles, and exhibit a condition that substantially impairs its use, safety, or value. Given that I purchased a seven-passenger van and two of the middle seats cannot be used, the impairment is clear. Furthermore, Connecticut law provides eligibility if a vehicle has been out of service for repair for a cumulative total of 30 days or more.

I have filed a lemon law complaint with the state, and it has been accepted for a hearing. At the hearing, I will make my arguments for either a replacement or a refund. For other owners dealing with this extended recall, researching state-specific lemon laws is a practical step. Resources like Justia provide a 50-state survey of lemon laws across the United States, detailing varying procedures.

While the process in Connecticut is designed so consumers can file without an attorney, legal counsel may be consulted if the hearing process is intimidating. Following my hearing, I will share my presentation and arguments so other owners have something they can use in their own hearings. Stay tuned!

California Law to Require Age Verification on All Operating Systems (Including Linux)

Recently, a new California law signed by Governor Gavin Newsom caught my attention due to its potential impact on the open-source community, specifically Linux users. The legislation mandates that operating systems for PCs and other general computing devices like tablets and phones must implement a form of age verification during the initial account setup process.

I take a look at the implications of this law in my latest video.

While California is not the only state pursuing such measures—Texas recently faced legal hurdles over a similar law—this development raises questions about how open-source organizations, rather than traditional corporate entities, will comply.

The text of the California bill, which was signed on October 13, 2025, and takes effect on January 1, 2027, calls for an interface that requires the account holder to provide their birth date or age. This information generates a signal regarding the user’s age bracket—categorized as under 13, 13 to 16, 16 to 18, or over 18—to be read and enforced by applications within a covered app store.
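To illustrate the bracketing scheme, here is a hypothetical sketch of the age “signal” described above. The bracket labels follow my summary of the bill (under 13, 13 to 16, 16 to 18, over 18); the function itself is my own illustration, not language from the statute, and a real implementation would derive age from the birth date collected at setup.

```python
def age_bracket(age: int) -> str:
    """Map a self-reported age to the bracket signal apps would receive."""
    if age < 13:
        return "under_13"
    elif age < 16:
        return "13_to_16"
    elif age < 18:
        return "16_to_18"
    return "over_18"

print(age_bracket(12))  # under_13
print(age_bracket(17))  # 16_to_18
```

Note that the operating system would pass only this bracket downstream, not the raw birth date, which is how the law intends to limit data collection while still letting app stores enforce age gates.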

The legislation defines an operating system provider broadly enough to include independent developers creating Linux distributions. Furthermore, a covered application store is defined as a publicly available online service, which could encompass command-line package managers used daily by Linux administrators.

From a practical standpoint, the current requirement relies entirely on self-reporting. Users are asked to volunteer their age, meaning anyone could input inaccurate information to bypass restrictions. Despite this, the penalties for non-compliance are clearly defined. Operating system makers face civil penalties ranging from $2,500 for negligent violations to $7,500 for intentional violations per “affected child.” If a developer has internal data showing a user’s actual age differs from the self-reported signal, they are legally obligated to act on that information or face action from the California Attorney General.

The implications for Linux distributions are notable. Commercial entities with a business nexus in California, such as the organizations behind Ubuntu and Fedora, will likely implement the necessary prompts to comply.

However, smaller projects face a different reality. Many distributions are maintained by volunteer groups without the financial resources or organizational structures to shield them from liability. Midnight BSD has already modified its software license to exclude California residents, but this legal maneuver may not satisfy California regulators if the software remains accessible for download within the state’s borders.

This legislative push is not confined to the West Coast. My home state of Connecticut is currently evaluating controls for minors on the internet, and Colorado is exploring operating system-level age verification. Texas attempted to regulate app stores before a federal court blocked the law, citing First Amendment concerns regarding its broad application. The absence of a unified federal privacy law has resulted in a fragmented regulatory landscape across different regions.

Historically, some internet users have responded to localized regulations by migrating to decentralized platforms. When Discord faced scrutiny over its age verification methods that included video selfies and government IDs, users began exploring open-source alternatives like Revolt and Matrix. These self-hosted and federated platforms demonstrate how technical communities can circumvent centralized data collection and restrictive legal mandates.

As the 2027 deadline approaches, it is likely that many Linux distributions will simply integrate a birth date or age prompt into their installation screens to mitigate legal risks. The technical challenge of passing that age signal consistently to various package managers and standalone applications remains a logistical hurdle. The coming months will test how far state authorities are willing to go in enforcing these mandates on the broader open-source software ecosystem.