Is Low Power TV the Future of Broadcasting? With the LPTV Broadcast Association’s Frank Copsidas

I recently spoke with Frank Copsidas, the president of the Low Power TV (LPTV) Broadcast Association, to discuss a segment of the broadcast industry that is seeing a surprising amount of activity and even optimism!

Watch the full interview here!

While the broader television landscape faces significant headwinds due to the decline of cable and shifts in advertising, the LPTV sector is experiencing a period of expansion. During a recent filing window, the FCC received 2,300 applications for new licenses, a notable figure considering there are currently about 5,300 licensed stations in operation.

Copsidas explained that LPTV stations operate at lower power than full-power stations, providing a signal radius of approximately 30 miles depending on the market. This range is often sufficient to cover a metropolitan area, allowing these stations to reach a large localized audience effectively. One of the primary distinctions of LPTV is its cost-effectiveness; the filing fee for a new station is under $1,000, and the operational expenses are a fraction of what is required for a major broadcast outlet.

Our conversation touched on the current technical standards governing the airwaves. Most stations currently use ATSC 1.0, but the larger full-power stations are pushing to move toward the ATSC 3.0 standard. Copsidas expressed reservations regarding ATSC 3.0 for low power stations, saying the new standard has been a “nightmare.” He described his own 17-month attempt to implement ATSC 3.0 in Boston as a struggle with complexity, stability and cost. Instead of a government mandate for a specific new standard, his association is advocating for the freedom to choose between various technologies, including 5G Broadcast.

The 5G Broadcast standard is a global technology developed by 3GPP, the same organization that handles standards for cellular and satellite communications. Unlike traditional cellular data, 5G Broadcast does not require a SIM card or a subscription. It allows a station to transmit video or data directly to smartphones using a one-to-many model. Copsidas noted that this is particularly efficient for emergency alerts, which can be delivered to every device in a coverage area within half a second, even if cellular networks are congested or down. Qualcomm has indicated that chips supporting this technology may be available in consumer devices as early as next year.

The audience for over-the-air television appears to be shifting. Copsidas observed that while older viewers who transitioned to cable decades ago are often unaware that free television still exists, viewers under the age of 35 are rediscovering the medium. This younger demographic is often motivated by the rising costs of streaming and cable. In response, some LPTV operators are experimenting with localized content, such as high school broadcast programs in Alabama and short-form linear programming, to see what resonates with these new viewers.

Financially, LPTV stations operate on a different scale than their full-power counterparts. They do not benefit from the same mandatory carriage requirements on cable systems, meaning they rely primarily on over-the-air viewers and digital platforms. Revenue is typically generated through local advertising and leasing out sub-channels to other programmers. Because a single 6MHz channel can host multiple sub-channels, an operator can broadcast a variety of niche content simultaneously.

The future of this sector likely depends on how the FCC handles these emerging standards and whether mobile device manufacturers begin to enable broadcast reception. The next few years will determine if these stations remain a niche local service or become a primary vehicle for delivering data and video directly to the pockets of the general public.

OSMC Vero V Review: A Legit Nvidia Shield Alternative for Plex?

For some time, my search for a media player that matches the capabilities of the Nvidia Shield has come up mostly empty. The goal is usually the same: find a device that handles Dolby Vision Profile 7 and Profile 5, lossless Dolby and DTS audio, and 24p frame rate switching without requiring significant technical workarounds.

While I have previously examined devices like the Ugoos AM6B+, they often required a level of modification that made them less than user-friendly. But the other day I picked up a Vero V, a device from the open-source OSMC project that is being positioned as a dedicated high-end player for media enthusiasts.

Check it out in my latest video!

The Vero V is priced at approximately $200 and must be imported from the UK, which puts it at around the same price point as the Nvidia Shield. Unlike the Shield, which runs on Android TV, the Vero V is built on a minimal Linux installation running the OSMC fork of the Kodi media player. This means it lacks mainstream streaming applications like Netflix or Disney Plus. It is a specialized tool intended for playing back local media or files served from a home server.

The Vero V has an Amlogic S905X4 processor, 4GB of RAM, and 32GB of storage. While the Wi-Fi is limited to 802.11ac, the inclusion of a gigabit Ethernet port provides the necessary stability for high-bitrate 4K files. On the back, it features HDMI, optical audio, and analog outputs, alongside USB 3.0 and 2.0 ports. In my testing, the device booted into its interface in under 30 seconds, and the setup process for audio passthrough and resolution switching was straightforward within the OSMC menus.

Since there is no native Plex application for OSMC, I utilized the PM4K for Plex add-on. The interface differs slightly from the standard Plex client but remains functional, maintaining metadata, watch history, and library organization. During playback tests, the Vero V handled Dolby Vision Profile 7 and Profile 5, as well as lossless formats like Dolby Atmos, TrueHD, and DTS:X. It also demonstrated capable tone mapping when playing HDR content on a standard 4K display.

A significant point of discussion for enthusiasts is the level of Dolby Vision support. Currently, the Vero V supports the Minimum Enhancement Layer (MEL), which is comparable to the Nvidia Shield’s capabilities. However, the developers are testing a beta firmware that aims to support the Full Enhancement Layer (FEL), a feature typically found only on standalone 4K Blu-ray players.

Choosing between this and more established hardware depends largely on one’s specific needs. The Nvidia Shield remains a more versatile device for those who want a single box for both Plex and subscription streaming services. Nvidia’s CEO, Jensen Huang, recently promised to support the Shield “for as long as we shall live” given his personal affinity for the now decade-old media player.

However, for those looking for a dedicated player supported by an active community of enthusiasts, the Vero V serves as a reliable alternative that functions effectively right out of the box. Having a hardware option that does not rely on a large corporate ecosystem provides a certain level of security for the future of high-end home media playback.

ATSC 3.0 Update: NAB News and My Updated Predictions!

The lack of ATSC 3.0 / NextGen TV news at this year’s NAB was surprising. Given the critical decision currently before the FCC, you’d expect broadcasters to be pulling out all the stops to prove they’re ready to ditch the old over-the-air TV standard for a new one.

In my latest video, I take a look at some of the most interesting news of the show and update my current predictions about where the FCC lands on this mess.

One of the more significant developments from the floor was an announcement by a company called Castanet regarding 5G broadcasting. Their technology allows a 5G signal to be embedded within an ATSC 3.0 signal, essentially tunneling mobile-compatible data through traditional broadcast frequencies. While very few mobile phones currently support this, the proximity of certain TV frequencies to cellular bands suggests a future where a smartphone could function as a portable receiver for broadcast data.

This is gaining traction with the Low Power TV Broadcasters Association, whose members are exploring 5G as a potentially more viable and cost-effective alternative to the complex ATSC 3.0 transition. In Boston, a low-power station has already begun experimental 5G broadcasts, demonstrating that the barriers to entry for this technology may be lower than previously thought.

In contrast, the progress on consumer hardware for ATSC 3.0 remains slow. Pearl TV, a consortium of major broadcasters, showcased prototypes of low-cost converter boxes intended to retail for under $60. These devices are designed to decrypt the new signals for older televisions, yet they remain limited in functionality compared to current $30 ATSC 1.0 tuners that offer recording capabilities. The requirement for digital rights management (DRM) and encryption is a primary driver of these costs and technical limitations. By pushing specific hardware solutions, broadcasters appear to be boxing out the independent hardware market that has sustained the industry for decades.

The absence of clear guidance from the FCC during the show was also apparent. Despite the presence of high-level officials, including Media Bureau Chief Evan Morris and Commissioners Olivia Trusty and Anna Gomez, there was little specific information offered regarding the transition. Remarks remained general, focusing on balancing regulation with free-market competition or discussing broad First Amendment issues. This suggests a cautious approach as the commission weighs the public interest against the interests of large broadcasters.

Looking ahead to the upcoming FCC rulings, I anticipate several developments. The commission will likely end the mandate requiring stations to simulcast their signals in both the old 1.0 and new 3.0 standards. While this sounds significant, the low adoption rate of NextGen TV means most broadcasters will continue to support the older standard to avoid losing their entire over-the-air audience and cable carriage.

Furthermore, I expect the FCC to become more permissive regarding new and existing technologies. This could include allowing broadcasters to use more efficient encoding, such as MPEG-4 or HEVC, on current ATSC 1.0 channels. The commission might also permit 5G Broadcast transmissions, letting the market discover which standard, or combination of standards, best serves the public interest.

Regarding encryption, the FCC may choose to remain silent, essentially letting the market determine the fate of DRM. If encrypted broadcasts continue to hinder consumer adoption and keep tuner prices high, the technology may struggle to survive on its own merits. This lack of adoption poses a long-term risk to the industry. Already, petitions are being filed to reallocate broadcast spectrum for 6G wireless internet. If television viewership continues to decline and the new standard fails to take root, the pressure to claw back these frequencies for other uses will only intensify.

The industry finds itself at a crossroads between the mandates broadcasters now seek and the free-market experiment they originally proposed back in 2016. As the FCC moves toward a more permissive environment for experimental technologies like 5G and better 1.0 encoding, the path forward for traditional broadcasting remains uncertain. The future of the airwaves may ultimately be shaped not by the major networks, but by the smaller, low-power stations currently testing the limits of what a broadcast signal can be.

(Sorta) Affordable Mini PCs Aren’t Dead Yet: GMKTec K17 Review

The mini PC market has faced various supply constraints recently, making it difficult to find hardware that balances cost and performance. But there are still some decent value propositions out there, one of them being the GMKTec K17, a unit priced at approximately $550 (compensated affiliate link).

See it in action in my latest Mini PC review!

The K17 is powered by an Intel Core Ultra 5 226V processor from the Lunar Lake family. In my testing, the chip proved to be power-efficient, drawing only about five watts at idle and reaching about 48 watts under load. One significant trade-off for this efficiency is the memory configuration; the system includes 16GB of LPDDR5X-8533 RAM that is soldered to the board, meaning it cannot be upgraded.

Storage, however, is more flexible. The internal chassis features two NVMe 2280 PCIe 4.0 slots. My review unit came with a 1TB drive pre-installed, leaving the second slot open for additional storage or a secondary operating system. When I opened the case to inspect the internals, I noticed that the Wi-Fi antennas are attached to the bottom plate. This requires careful handling during disassembly, as the cables can be easily disconnected.

The port selection is varied, though the labeling requires close attention. On the front, one USB-A port supports 10 Gbps while another is limited to 5 Gbps. There is also a full-service USB 4 port capable of 40 Gbps, which supports Thunderbolt devices, power input, and video output. The rear of the device houses a USB 2.0 port, three 5 Gbps USB-A ports, two HDMI ports, and a 2.5 gigabit Ethernet jack. I confirmed the Ethernet and Wi-Fi 6E performance met the expected speeds for those standards during my network tests.

In practical day-to-day use, the K17 handled standard tasks without hesitation. Web pages loaded quickly, and 4K video playback on platforms like YouTube remained stable without dropped frames.

For video editing work, I tested DaVinci Resolve with a 4K project. While the system handled basic cuts and transitions well, more complex effects and color grading led to significant slowdowns. Without an external GPU, this machine is better suited for light editing rather than professional-grade production.

Gaming performance was stable for an integrated GPU. In Cyberpunk 2077 at 1080p with low settings, the frame rate hovered between 40 and 50 frames per second. For those willing to drop the resolution to 720p, achieving 60 frames per second is likely. I also tested PlayStation 2 emulation, which ran at full speed with some room for graphical enhancements. The 3DMark Time Spy benchmark gave the K17 a score of 3,458, placing its graphical capabilities in the same range as older dedicated cards like the Nvidia GTX 1060.

One of the most distinct characteristics of the K17 is its thermal management and acoustic profile. During a 3DMark stress test, the system maintained a temperature of 59 degrees Celsius with a 98.9% stability score, indicating very little thermal throttling. More notable, however, was the noise level. Even under a full load, the fan remained nearly silent, producing only a faint whir that was difficult to hear in a standard room environment.

While the device comes with a Windows 11 Pro license, I also tested its compatibility with Linux by booting the latest version of Ubuntu. The hardware was recognized immediately, including the Wi-Fi, Bluetooth, and audio components. The interface felt particularly responsive under Linux, offering a viable alternative for users who prefer that environment.

Given its low power draw and quiet operation, the K17 functions as a capable general-purpose machine that manages to perform reliably within the constraints of its compact form factor and current market conditions.

I Built a Better YouTube Subscription Tab & An Apple TV App!

A few days ago, I shared a look at the self-hosted applications I use to manage my digital life, including an RSS reader for tracking YouTube creators. While that system worked, the interface lacked the specific functionality I needed to categorize content effectively. The standard YouTube subscription tab has become increasingly difficult to navigate due to the inclusion of Shorts and a lack of consistent organization. I wanted a way to group creators by topic—such as amateur radio, retro tech, or gaming—and have them appear in a streamlined, chronological feed. So I built my own!

See it here in my latest video!

I spent about an hour collaborating with Claude Code to draft the code for a custom RSS reader. I am not positioning myself as a professional developer, but rather as someone using these tools to solve specific personal workflow issues. By providing initial instructions and refining the output through a series of prompts, I was able to build a functional application that organizes videos into specific buckets and subcategories.

The application utilizes YouTube’s RSS feeds rather than the platform’s API. This decision simplifies implementation, as every channel and even specific playlists have an associated RSS feed. This allows for more granular control; for example, if a creator produces various types of content, I can subscribe only to the playlist that interests me. To prevent hitting delivery limits from YouTube, the code includes a staggered refresh cycle. While the default is to check feeds every 60 minutes, the interval can be adjusted on a per-channel basis for news-heavy content that requires more frequent updates.
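The feed URLs and the staggered refresh logic described above can be sketched in a few lines. This is my own illustration, not the app's actual code; the helper names and the feed dictionary layout are assumptions, though the two YouTube Atom feed endpoints are the documented ones:

```python
# YouTube exposes an Atom feed per channel and per playlist; no API key is needed.
def channel_feed_url(channel_id: str) -> str:
    return f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"

def playlist_feed_url(playlist_id: str) -> str:
    return f"https://www.youtube.com/feeds/videos.xml?playlist_id={playlist_id}"

# Staggered refresh: each feed carries its own interval (in minutes) and a
# last-checked timestamp, so a polling pass only fetches the feeds that are due.
def feeds_due(feeds: list[dict], now: float) -> list[dict]:
    due = []
    for feed in feeds:
        interval_secs = feed.get("interval_minutes", 60) * 60  # default: hourly
        if now - feed.get("last_checked", 0) >= interval_secs:
            due.append(feed)
    return due
```

A news-heavy channel could carry `"interval_minutes": 15` while everything else stays at the hourly default, which is how the per-channel adjustment described above would work.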

The current interface allows for easy management of the 129 channels I have imported so far. I included an OPML import feature, which makes it possible to migrate existing subscriptions from other readers. Within the app, I can move channels between categories, create new labels, and click directly through to YouTube to watch videos. Since I use a YouTube Premium account, the absence of an integrated ad-free player is not an issue for my viewing experience.
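OPML is just XML, so an import feature like the one mentioned above can be a few lines of standard-library parsing. This is a hedged sketch rather than the app's actual implementation; it assumes the common structure most readers export, where each feed is an `outline` element with an `xmlUrl` attribute:

```python
import xml.etree.ElementTree as ET

# Parse an OPML export and return (title, feed URL) pairs.
# Category outlines have no xmlUrl attribute, so they are skipped.
def parse_opml(opml_text: str) -> list[tuple[str, str]]:
    root = ET.fromstring(opml_text)
    subscriptions = []
    for outline in root.iter("outline"):
        url = outline.get("xmlUrl")
        if url:
            title = outline.get("title") or outline.get("text", "")
            subscriptions.append((title, url))
    return subscriptions
```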

One significant limitation of my previous setup was the difficulty of accessing these feeds on a television. I tasked the AI with helping me build a client for Apple TV that connects to the database server, which I plan to host in a Docker container on my local network.

This process involved learning the basics of Apple’s Xcode environment to side-load the Apple TV app. I was pleased to see that my app could in turn call up the YouTube app and have the video start playing immediately.

The development of this project was funded in part by credits provided by the AI service, totaling approximately $20 in usage costs. My goal now is to move this code into the open-source community. I do not have the personal bandwidth to manage a software project or maintain the code long-term, so I am looking for interested parties to take over the project and post it to GitHub. If someone is willing to maintain it as an open-source tool, I believe it could serve as a useful alternative for those who find the current state of video subscription feeds unsatisfactory.

The server-side logic is designed to run in a container, which opens the door for other developers to create clients for Android or various web platforms. At the moment this serves as a personal tool that organizes my video consumption into the specific categories I prefer to browse. I intend to continue refining the system for my own use while waiting to see if a broader community project develops around the initial codebase.

Multiple Studies Show DRM Encourages Rather than Restricts Piracy!

I recently observed the National Association of Broadcasters criticizing Major League Baseball and Netflix for placing sports content behind paywalls. This critique is a notable contradiction when considering the broadcast industry’s current efforts to encrypt public airwaves. While broadcasters claim to be the center of community connection by delivering free games to millions, their recent actions suggest a shift toward a business model more closely resembling the streaming platforms they criticize – including locking down over-the-air content with DRM.

In my latest video, we take a look at whether or not DRM actually works in stopping piracy. Spoiler alert: it doesn’t – in fact there’s strong evidence to suggest it actually increases piracy.

In my home state of Connecticut, for instance, broadcast TV fees for cable subscribers have risen from $8 in 2018 to over $48 per month today. This cost exceeds a standard Netflix subscription and reflects the price consumers are paying for access to local stations via cable. While an antenna remains a traditional method for receiving these signals at no cost, the industry is moving toward a new standard known as NextGen TV. This transition involves digital rights management, or DRM, which requires consumers to purchase specific high-end televisions or expensive external (and barely functional) tuning boxes. This shift also restricts the use of gateway devices that currently allow viewers to watch over-the-air television on various screens throughout their homes.

I find the current trajectory of the broadcast industry mirrors the mistakes made by the music industry two decades ago. During the early 2000s, record labels were on the ropes with a huge decline in revenue as consumers desired digital music that simply wasn’t available. Eventually the labels were strong-armed into selling music online but insisted on DRM to restrict how and where consumers could play purchased music. This lack of interoperability led many consumers to favor piracy for its convenience. It was only after the industry moved toward DRM-free audio that its financial health improved. Today, the music industry sees record revenues because it no longer restricts the devices or platforms consumers use to listen to their products.

Research supports the idea that restrictive encryption often backfires. A 2003 study conducted by HP in partnership with MIT concluded that DRM features were not effective at combating piracy. The researchers noted that content must eventually be converted into an unprotected form, such as sound waves or light, to be consumed—a vulnerability, often called the analog hole, that is easily exploited. Furthermore, data from the University of Toronto in 2013 showed that removing DRM led to a 10% increase in music sales and a 30% increase for back-catalog items. A 2010 study from Seoul National University similarly found that the inconveniences of DRM reduced legal demand and increased piracy.

The broadcast industry’s current approach to DRM lacks ubiquity. At present, the encryption used for NextGen TV only functions on Android-based devices, leaving users of Roku, Apple TVs, PCs, iPhones, iPads and Xbox devices unable to decode the content. This is a significant departure from successful platforms like Netflix or Spotify, which ensure their encrypted content works across nearly every available device. By narrowing the range of compatible hardware, broadcasters risk alienating their remaining audience.

The Federal Communications Commission is currently weighing the implementation of these encryption standards. I believe it is important for the public to communicate the potential inconveniences of this technology to their congressional representatives. While the industry highlights the technical benefits of the new standard, the restrictive nature of the accompanying encryption is often omitted from the conversation.

The historical data from the music industry suggests that when legal access becomes more difficult than the alternative, the industry itself suffers the most. The outcome of the current deliberations at the FCC will determine whether broadcast television remains a broadly accessible public resource or becomes a more restricted and hardware-dependent medium.

New MiSTer Cores! 3DO and Apple IIgs FPGA Betas Show Promise

I have been revisiting the MiSTer project recently to look at two new cores currently in development for the platform. This hardware, which costs approximately $160, uses FPGA chips to replicate the original logic of vintage computers and game consoles from the mid-1990s and earlier.

In my latest MiSTer update, I look at two new cores – one for the 3DO and the other for the Apple IIgs, both of which are receiving significant updates from the development community.

See them in action in my latest review!

The 3DO core, developed by Srg320, is nearing completion and is currently available for testing on single RAM MiSTer devices. In 1994, the 3DO occupied a specific niche in the market, offering graphical fidelity that rivaled and in some cases exceeded high-end PCs at a much lower price point. The console had support from Electronic Arts and a few other well-known publishers, who all made next-gen ports of their 16-bit titles along with new games. I bought my Panasonic 3DO console in 1994 when the price dropped from $799 to $399.

The system seller for the 3DO was the amazing port of Road Rash, which came with arcade-quality 3D graphics, a great soundtrack featuring Soundgarden and other popular artists, and some killer full-motion video cut scenes. Testing Road Rash on the new core showed performance that appears consistent with the original hardware, though perhaps slightly less fluid than a stock console.

I also spent time with Wing Commander 3, a game notable for its transition between full-motion video segments starring Mark Hamill and Tom Wilson and 3D space fighter combat. The video playback is stable, though the output seems slightly dark, suggesting a need for gamma adjustments. I observed minor graphical artifacts, such as unexpected patterns in the starfields.

Compatibility on the 3DO core is not yet universal; titles like Zhadnost load slowly, and The Need for Speed currently fails due to an NVRAM error. Other titles ran but with some glitches, like a green vertical line visible in Total Eclipse. However, for a beta core, the majority of the library I tested is functional.

Next I turned to the Apple IIgs core, which is being developed by “Allen SWX.” The IIgs implementation emulates a ROM 1 machine with 8MB of RAM. This setup allows for the use of hard drive and floppy disk images including the newer “Woz” format. I was able to boot into GS/OS System 6 and access personal files from my own hard drive images dating back to the early 1990s. The core reproduces the authentic, albeit slow, operating speed of the original hardware. While the games run as expected, the audio output currently sounds somewhat muffled compared to the original machine.

These developments represent a steady expansion of the MiSTer library into systems that were previously considered outliers. While neither core is finished, the progress indicates that the technical hurdles for these specific architectures are being addressed.

The AT4K Launcher for Google TV and Android TV Brings an Ad-Free Experience – No Rooting Required!

I recently spent some time testing a new interface for Android TV and Google TV called AT4K. It brings the visual style of the Apple TV interface to much lower cost devices like the Onn streamer I tested it on. The primary draw of this specific launcher is that it functions without advertisements and can be configured to run as the default launcher without having to root your device, similar to the Projectivy launcher I looked at last year.

Check out AT4K in my latest review!

The layout features a header row that behaves similarly to the standard Android launcher, pulling content cards from associated apps. For instance, when I scrolled to the Apple TV app icon, the header displayed specific shows and movies from that service. If an app does not provide its own cards, the system pulls from other apps like Plex. The header can be removed if you just want the standard app layout.

Below this header, the rest of the applications are arranged in a grid. Managing these icons is straightforward; holding down a selection button triggers a “jiggle” mode that allows for moving apps or grouping them into folders. I created a dedicated folder for games, and the process was functional and mirrored the organizational style found on Apple TV devices.

Navigating the settings reveals two distinct areas: one for the standard Android system settings and another for AT4K’s internal configurations. The launcher supports both light and dark modes, though I found the light mode to be quite legible. There are premium features available for a one-time fee of five dollars, such as the ability to use custom images or videos as backgrounds and the option to expand the app grid from five to seven icons per row. During my time with the app, I encountered some difficulty interacting with the custom image menu, which is something to monitor in future updates.

One of the more practical aspects of AT4K is its ability to become the default launcher without requiring the user to root or hack the device hardware. It utilizes Android’s accessibility options to override the standard launcher. By enabling the AT4K service in the accessibility menu, the launcher can intercept the home button press and manage the boot sequence. To test this, I enabled the “override current launcher” and “start on boot” settings before power-cycling my device.

After the reboot, the original Google TV interface appeared momentarily before AT4K automatically took over. I launched several resource-heavy applications, such as HD HomeRun and Apple TV, and in each instance, pressing the home button returned me to the AT4K interface rather than the factory default.

The app manager within the settings also provides a quick way to hide specific applications from the launcher or access deep system settings like “force stop” or “uninstall.”

I found the setup process to be accessible for most users, as it does not require adjusting complex security settings. For those who prefer the aesthetic of the Apple ecosystem but want to maintain the flexibility of an Android-based device, this launcher offers a functional middle ground. I plan to keep this as my primary interface for the time being, as it provides a streamlined experience that remains stable under regular use.

Six Self Hosted Apps I Use on My Home Server! Synology, Unraid, Linux Etc.

The pursuit of digital efficiency often leads to a familiar crossroads where a user must choose between a recurring subscription fee or the sacrifice of data privacy. For some time, I have been looking for ways to streamline my professional and personal workflows without relying on external servers or third-party data mining. The current landscape of open-source software has made it increasingly feasible to host powerful applications on a small home server, such as a Synology or Unraid NAS or a Linux machine, with the applications installed via Docker containers.

In my latest video, I take a look at six self hosted Docker applications running on my Synology NAS!

To manage these applications securely, I use a private VPN called Tailscale. This allows me to access my home-hosted tools from any location without opening ports on my firewall. It creates a seamless connection between my mobile devices and my server, ensuring that my data remains isolated from the public internet while remaining accessible to me. This setup provides the foundation for several utilities that have replaced more traditional, paid software services.

One of the basic utilities I maintain is Uptime Kuma, a monitoring tool that tracks the status and performance of my various services. It provides real-time data on ping rates and uptime, sending a notification to my phone via an app called Pushover if a service fails. This eliminates the need for a paid monitoring service and provides immediate feedback on the health of my local network.
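A monitor like this boils down to a timed check plus a debounce, so a single dropped ping doesn't trigger a page. Here is a minimal sketch of that logic; the three-failure threshold and the helper names are my own assumptions rather than Uptime Kuma's internals, though the Pushover endpoint and its token/user/message parameters are the documented ones:

```python
import urllib.parse
import urllib.request

# Alert only after N consecutive failed checks, and only once per outage,
# so one dropped ping doesn't send a notification.
def should_alert(history: list[bool], threshold: int = 3) -> bool:
    if len(history) < threshold:
        return False
    if not all(ok is False for ok in history[-threshold:]):
        return False
    # Suppress repeats: fire only on the check that crosses the threshold.
    if len(history) > threshold and history[-threshold - 1] is False:
        return False
    return True

# Pushover's message API takes an application token, a user key, and a message.
def send_pushover(token: str, user: str, message: str) -> None:
    data = urllib.parse.urlencode(
        {"token": token, "user": user, "message": message}
    ).encode()
    urllib.request.urlopen("https://api.pushover.net/1/messages.json", data=data)
```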

Information management is another area where self-hosting has proven effective. I use two different RSS readers, FreshRSS and TT-RSS, to curate content from YouTube and various technology websites. Rather than relying on platform algorithms, these tools allow me to organize feeds into specific topics like retro gaming or modern tech. TT-RSS, in particular, is useful for aggregating large volumes of data—sometimes dozens of articles at once—which I then process through other automation tools.

For personal tasks, I have moved toward simpler, self-hosted alternatives to mainstream apps. Actual is a straightforward personal finance tool that functions as a manual checkbook and budgeting application. I don’t have it connected to my banks, but that option is available. For note-taking, I have transitioned from the more complex Obsidian to a tool called Blinko. It offers a clean interface that works through the browser on screens of any size, allowing me to capture quick thoughts and organize them with tags later. It also includes an API and an AI component for querying my own notes.

The most substantial part of my current workflow is built on N8N, an open-source automation platform. I use it to handle repetitive tasks that previously took hours of manual effort. For example, my weekly email newsletter (sign up here) is now generated by a workflow that pulls data from my blog and YouTube RSS feeds, formats the text, and utilizes AI to suggest subject lines. I also use N8N to monitor specific FCC dockets for our continuing efforts to stop broadcast TV encryption. When a new filing appears on the FCC website, the system automatically downloads the PDF, sends it to an AI model for summarization, and emails me the highlights.
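
At its core, the FCC-docket workflow is "diff the latest filings against the ones already processed, then hand anything new off for download and summarization." A minimal sketch of that dedup step follows; the field names (`id`, `date`, `pdf`) and values are hypothetical, not the real FCC schema:

```python
def find_new_filings(latest, seen_ids):
    """Return filings whose id hasn't been processed yet, oldest first.

    Field names here are illustrative; a real feed or API
    would define its own schema.
    """
    new = [f for f in latest if f["id"] not in seen_ids]
    return sorted(new, key=lambda f: f["date"])

seen = {"2025-001", "2025-002"}  # ids already summarized and emailed
latest = [
    {"id": "2025-003", "date": "2025-06-02", "pdf": "https://example.gov/3.pdf"},
    {"id": "2025-001", "date": "2025-05-30", "pdf": "https://example.gov/1.pdf"},
]
```

In N8N the same logic lives in a workflow node; each filing that survives the diff gets its PDF fetched, summarized by an AI model, and emailed out.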

I have also automated my social media presence using these local tools. Instead of paying for a distribution service, I built a queue system that posts updates to platforms like X, Bluesky, Threads, Mastodon, Facebook and LinkedIn at regular intervals. This system was developed with the assistance of Claude, which can connect directly to the server to help write and troubleshoot code. This transition to self-hosting has replaced several hundred dollars in annual subscription fees with a stable, private infrastructure.
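
Posting to each platform's API is the platform-specific part, but the queue logic itself is simple: take the pending posts and space them out at a fixed interval. A sketch with hypothetical post text and an assumed four-hour spacing:

```python
from datetime import datetime, timedelta

def schedule_queue(posts, start, interval_hours=4):
    """Assign each queued post a publish time at a fixed spacing."""
    return [(post, start + timedelta(hours=interval_hours * i))
            for i, post in enumerate(posts)]

# Hypothetical queue contents and start time.
queue = ["New video is up!", "Weekly newsletter is out", "FCC filing recap"]
plan = schedule_queue(queue, datetime(2025, 6, 2, 9, 0))
```

A worker process would then sleep until each scheduled time and call the relevant platform API for that post.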

As I continue to integrate these tools, the focus remains on finding applications that offer high utility without unnecessary complexity. The transition to a self-hosted environment requires an initial investment in learning how to manage Docker containers, but the resulting control over data and workflow efficiency provides a clear alternative to the standard subscription model. I am regularly looking for new applications to add to this local ecosystem as the technology evolves.

Check out more self-hosting videos here!

Music Labels Lose a Big Piracy Case at the Supreme Court

A twelve-year legal battle over piracy between the music industry and internet service providers has finally been brought to an end by the US Supreme Court. The court overturned a $1 billion verdict against Cox Communications, a decision with significant implications for how we understand copyright liability and the responsibilities of those who provide our internet access.

See more in my latest video!

The history of this conflict dates back to the early 2000s, when the music and film industries struggled to adapt to the rise of digital file sharing. Initially, the music industry sued its own customers, hitting them with federal lawsuits. In one instance, a 12-year-old girl had to cough up $2,000 for a settlement; in another, a woman was held liable for hundreds of thousands of dollars for sharing 24 songs.

At the time, piracy was often driven by a lack of convenient, legal digital options. Physical media sales were declining, and digital purchases were often restricted by digital rights management, or DRM, which limited how and where consumers could listen to their music.

When the strategy of suing individual users failed to curb piracy or improve the industry’s public image, the focus shifted toward where the money is: internet service providers. Organizations representing the record and motion picture industries established the Copyright Alert System, partnering with major ISPs to issue warnings to users who were sharing copyrighted material.

Cox Communications did not participate in this program, and that put a target on its back. In 2014, music label BMG filed a lawsuit against the ISP, arguing that Cox should be held liable for the infringement occurring on its network. BMG claimed that because Cox did not adequately respond to infringement notices, it lost the “safe harbor” protections usually granted to service providers under the Digital Millennium Copyright Act.

A federal jury originally sided with BMG, awarding a billion-dollar verdict against Cox. However, the Supreme Court’s recent reversal centered on a specific interpretation of federal copyright law. Justice Clarence Thomas, who authored the decision, noted that while Cox may not have met the requirements for DMCA safe harbor protection, other aspects of federal law still provide an adequate defense. The ruling clarifies that a service provider is only liable if it intended for its service to be used for infringement or if it marketed itself specifically for that purpose. Because Cox provides a general-use internet service and did not induce its users to pirate material, the court found it could not be held responsible for the specific copyrights violated by its subscribers.

This development changes the landscape for other ISPs as well. They now have a defense beyond the safe harbor provisions, meaning they may not feel the same pressure to react to every automated infringement notice they receive. I suspect this will lead to a decrease in the haphazard distribution of warnings to account holders. While direct lawsuits against individuals may still occur, particularly in cases involving large volumes of distribution, the era of trying to hold the entire infrastructure of the internet accountable for individual user behavior seems to be shifting.

It should be noted that the music industry eventually found success not through litigation, but by listening to consumer demand. When they removed DRM from digital music purchases and embraced affordable streaming services, revenues skyrocketed. It is a reminder that market accessibility often addresses the root causes of piracy more effectively than legal threats.

As other industries, such as broadcasting, consider implementing new restrictions on content, the industry changes that have taken place since this case was filed suggest that focusing on what the customer wants is a more sustainable path than pursuing multi-billion-dollar judgments against service providers. This ruling brings a level of technical and legal sanity back to the conversation about how we use and access the internet.

What a Sub-$500 Mini PC Looks Like These Days: GEEKOM A5 Pro Review

Finding a mini PC for under $500 has become increasingly difficult in the current market, but I recently spent some time with the Geekom A5 Pro (compensated affiliate link) to see how it balances cost and performance. While the machine bears a physical resemblance to the more powerful A8 model, this version utilizes a Ryzen 5 7530U processor and targets users with more modest computing requirements.

Check it out in my latest video review!

The unit Geekom sent me for review can be found on Amazon (compensated affiliate link). It features a Ryzen 5 7530U, which is an older six-core, 12-thread chip running at a 15-watt TDP.

Inside, the hardware is accessible but reveals some of the compromises made to reach this price point. It uses DDR4 RAM rather than faster DDR5, and while there is an expansion slot for a second SSD, it is limited to the SATA interface rather than NVMe. The RAM can be upgraded to 64GB. I also noticed during disassembly that the Wi-Fi antenna design is somewhat delicate; the cable is easily detached when opening the case and requires some patience to reconnect to the motherboard.

The external build quality remains high, featuring a metal case and a variety of ports. The front panel includes two 10Gbps USB-A ports—one of which supports device charging while the PC is powered down—alongside a headphone jack. The side houses a full-size SD card reader, while the back provides two HDMI ports and two USB-C ports. While it lacks USB 4, the USB-C ports do support video output, allowing a four-display 4K setup. There is also a 2.5 gigabit per second Ethernet port that performed as advertised in my testing.

In daily operation, the A5 Pro is efficient and quiet. It idles at around 7 watts and peaks at 46 watts under heavy load. The system fan is rarely audible during standard desktop tasks. It includes a licensed copy of Windows 11 Pro, and the machine handled web browsing and general office applications smoothly. However, the age of the processor becomes apparent when pushing the integrated graphics. During 4K YouTube playback at 60 frames per second, I observed frequent dropped frames, a limitation not typically seen on more modern AMD chips.

Creative tasks and gaming yielded mixed results. Simple video editing in DaVinci Resolve is feasible for basic projects, but adding complex effects or transitions leads to significant rendering delays and stuttering during playback. Gaming performance is similarly constrained; modern AAA titles like Cyberpunk 2077 struggled to reach 15 frames per second at 1080p on low settings. The machine is, however, well-suited for emulating older consoles and playing legacy PC titles, where it maintained consistent frame rates.

Thermal management is tuned for silence rather than maximum output. The system failed a 3DMark stress test with a score of 95.7%, indicating a performance drop of roughly four to five percent during sustained heavy workloads. For most users, this dip will likely go unnoticed, especially given the quiet nature of the fan.
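
As I understand the 3DMark stress test, that percentage is simply the worst loop score expressed as a fraction of the best one, which is why a 95.7% result maps directly to a drop of a bit over four percent. The arithmetic is easy to reproduce; the loop scores below are made-up illustrative numbers, not measurements from this machine:

```python
def stress_stability(loop_scores):
    """Worst loop score as a percentage of the best (3DMark-style)."""
    return round(100 * min(loop_scores) / max(loop_scores), 1)

# Illustrative loop scores, not measurements from this machine.
scores = [1860, 1845, 1790, 1780]
```

3DMark's default pass threshold is a stability score of 97%, which is why a machine can "fail" the test while losing only a few percent of performance in practice.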

The machine performed very well under Linux. Testing with the latest version of Ubuntu showed that all hardware components were recognized immediately, and the interface felt more responsive than Windows, likely due to the lack of operating system bloat.

While the A5 Pro could serve as a capable low-power home server, its AMD architecture makes it less ideal for hardware transcoding in applications like Plex compared to Intel-based alternatives.

Ultimately, this device reflects the current state of the hardware market. A few years ago, this budget would have secured more contemporary components, but today it buys a reliable, if slightly older, set of specifications. It remains a functional option for light office work or a dedicated Linux station, provided the user understands the graphical limitations inherent in the hardware.

Hamgeek FPGA MiSTer Clone Review

I ordered another cheap MiSTer FPGA clone off a site called Hamgeek the other day. Hamgeek mostly sells amateur radio gear and a few other curious gadgets. Like other MiSTer devices we’ve looked at recently, it uses an FPGA chip to accurately replicate retro computing, gaming and arcade systems from the 90s on back.

Check it out in my latest MiSTer video!

The Hamgeek unit cost about $160 and arrived fully assembled with a 32 GB SD card preloaded, which let me skip the initial flashing and get straight to testing. The Hamgeek MiSTer is effectively a “clone of a clone,” utilizing the same hardware design as the QMTech device we looked at a few weeks ago.

Like other MiSTers I’ve tested, you will need to download and run the Update_all script to get all of the supported cores and features working. You can see the full setup process in the MiSTer Pi video I did last year.

Compatibility on the Hamgeek feels just as good as the other MiSTer clones we’ve looked at over the last year. I tested a range of demanding and lower-end cores. The Amiga core looked crisp and executed complex demo scene disk images flawlessly. The Saturn core ran Daytona USA without visible issues, and the Sega 32X handled After Burner perfectly. I also ran Street Fighter Alpha 3 on a CRT for extended periods, played the Neo Geo’s King of Fighters 2003, and tried Wave Race on the Nintendo 64 core. On the low end, NES and Atari 2600 content ran as expected. Overall compatibility and stability across the cores I exercised matched what I’ve come to expect from consumer MiSTer builds.

I also ran a memory test that exercises the 128 MB memory module. It sustained 167 MHz for about ten minutes without errors, which suggests the hardware has some performance headroom beyond what most cores require.

Video output options are flexible: HDMI for modern displays, a VGA port that can deliver RGB or component video for late-model CRTs, and analog and optical audio outputs via a combined 3.5mm jack. The unit does not provide RCA composite or S-Video natively, so if your television only accepts composite you’ll need an adapter, or you can wait for the Superstation One MiSTer clone that will include more analog video output options built in.

Like other MiSTer builds, this one includes a port for SNAC adapters, which allow direct electrical connections to certain controller types and accessories. I verified light-gun functionality on a CRT using the NES core and a Zapper.

The box has a limited number of USB ports — enough for an external hard drive and a couple of controllers, but you’ll likely want a hub — and it does not include built-in Wi‑Fi. You can add Wi‑Fi and Bluetooth with a USB dongle. MiSTers generally do not require an active Internet connection but you will need to go online for core updates.

There’s an internal cooling fan that runs continuously; it’s audible but not loud. The metal case version of the Hamgeek MiSTer I opted for is more robust than the plastic one that’s available for the same price.

If you want a ready-to-use MiSTer without assembling parts, units like this make that option accessible at a lower price than earlier preassembled builds. It’s great to see the MiSTer ecosystem getting more accessible!

See more of my MiSTer content here!