I Made My Own (Mostly) Self-Hosted Content Recommendation Engine with N8N

I’ve been getting increasingly frustrated with how social media algorithms decide what to put in front of me. More often than not, what matters most to the platform isn’t my interests but whatever content it thinks will keep me glued to the screen.

Over the past few months, I started experimenting with self-hosted and hybrid solutions to build something I could actually control. What I ended up with is a little algorithm of my own that now emails me every morning with a curated digest of topics I care about.

You can see it in action in my latest video.

The system runs on my Synology NAS using N8N, which I’ve also been using for other projects. The content engine pulls about 150 headlines a day from RSS feeds across trusted websites, YouTube channels, and Reddit forums I follow. From there, the workflow filters, organizes, and compiles the results into an HTML email.

It works really well. For example, when multiple outlets covered handheld gaming PCs, it was smart enough to recognize the GPD Win 5 and Asus ROG Ally as belonging to the same category and group them together. That gives me a cleaner view of what’s trending and helps me decide whether something is worth reviewing.

At the core of this is RSS, which has quietly persisted even as many sites moved away from it. I use TT-RSS to merge dozens of feeds into a consolidated source for each topic area. N8N then pulls those feeds into an AI agent workflow powered by Google Gemini’s free tier. I experimented with local models, but they couldn’t handle the complexity of parsing and structuring the data effectively. Cloud models still work better for this task, and because I only run it twice a day, I’m not paying anything for API usage.
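The feed-merging step is conceptually simple even outside of N8N. Here’s a minimal Python sketch of what pulling headlines out of RSS and deduplicating them across feeds looks like; the function names and sample feeds are illustrative, not part of my actual workflow:

```python
import xml.etree.ElementTree as ET

def parse_headlines(rss_xml):
    """Extract (title, link) pairs from an RSS 2.0 feed document."""
    root = ET.fromstring(rss_xml)
    return [
        (item.findtext("title", "").strip(), item.findtext("link", "").strip())
        for item in root.iter("item")
    ]

def merge_feeds(feeds):
    """Merge several parsed feeds, dropping duplicate links so the same
    story syndicated to multiple feeds only appears once."""
    seen, merged = set(), []
    for feed in feeds:
        for title, link in feed:
            if link not in seen:
                seen.add(link)
                merged.append((title, link))
    return merged
```

TT-RSS effectively does this consolidation for me per topic area; N8N then only has to fetch a handful of merged feeds instead of dozens of individual ones.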

Getting the prompt right was a big part of making this work. I had to iterate with both ChatGPT and Gemini until I landed on instructions that consistently returned useful results. The agent is told I’m a YouTube host looking for new topics, and I specify what types of content to prioritize and what to ignore. I also provide it with a structured HTML template so the output is consistent. The final email includes my calendar at the top, followed by curated sections on gadgets and cord cutting. It also uploads a copy to my FTP server so I can pull it up in a browser.
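The shape of the prompt matters more than any single clever phrase. As a rough sketch of the idea — the wording, template, and function names below are simplified stand-ins, not my exact prompt — the assembly step looks something like this:

```python
# Illustrative prompt assembly for the AI agent step. The instructions and
# HTML template here are hypothetical placeholders, not the real prompt.
PROMPT_TEMPLATE = """You are assisting a YouTube host researching new video topics.
Prioritize gadget and cord-cutting news; ignore politics and celebrity stories.
Group related headlines together and return this HTML structure, one <h2>
per category with its stories as <li> items:

<html><body><h2>CATEGORY</h2><ul><li><a href="LINK">TITLE</a></li></ul></body></html>

Headlines:
{headlines}
"""

def build_prompt(headlines):
    """Render the (title, link) headline list into the instruction block."""
    lines = "\n".join(f"- {title} ({link})" for title, link in headlines)
    return PROMPT_TEMPLATE.format(headlines=lines)
```

Giving the model a fixed HTML skeleton like this is what keeps the daily email consistent from run to run.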

The advantage of this system is that I can fine-tune it. If something irrelevant slips in, I just add instructions to exclude it. If I want to emphasize a certain category, I can adjust the prompts. Unlike the opaque systems behind social platforms, this workflow only surfaces items from sources I choose and in the way I want to see them.

Everything I’m using—N8N, TT-RSS, the Gemini free tier—is either free or open source. There are limits with the Gemini free tier, like rate caps and the possibility of data being used for training, but for my purposes it’s not a problem since I’m only working with publicly available content.

I haven’t put together an N8N installation tutorial yet, but Network Chuck has a good walkthrough that can help get N8N running on a server or NAS. It’s been interesting to see how popular N8N has become for building these AI agent tasks, and I’m trying to explore ways of using it that feel practical and useful. If you’ve also been frustrated by the way platforms filter your content, experimenting with something like this might give you back some control.

Check out some more projects like this in my “How To” series!

TCL D2 Palm Vein Door Lock – Raise your hand to unlock!

My latest review looks at TCL’s D2 palm vein door lock (compensated affiliate link), which came in for review after a number of viewers asked me about this type of technology. These locks work by simply raising your hand a few inches from the lock’s sensor.

You can see it in action in my latest video review.

Testing this was pretty easy: after assigning my right palm to the lock, raising that hand opens the lock in just a second or two. If I tried with an unregistered hand (or somebody else’s), the lock rejected it.

The D2 Pro is a full deadbolt replacement. That means both the inside and outside hardware of your existing lock need to come off, and you’ll be using new physical keys—yes, a pair of physical keys is included. I would have preferred an option to go keyless entirely, since the physical key is the least secure part of the system!

Alongside palm scanning, there are several other ways to unlock it: a keypad that supports six-digit PIN codes, a pair of RFID key fobs also included in the box, and the companion smartphone app.

The lock runs on a rechargeable 10,000 mAh battery that charges over USB-C. TCL says it should last about eight months depending on usage, but recharging takes time, which means your door will be offline for a while. In an emergency, you can power the lock temporarily with a USB-C power bank to get inside, or of course just use the physical key. I found installation straightforward, taking under half an hour including removing my old lock, though the build quality didn’t feel as solid as the Schlage I replaced. Once installed, it felt sturdy enough and carries an IP55 weather rating.

The TCL Home app is where you manage everything. It requires Wi-Fi on a 2.4 GHz network and I recommend putting it on a guest network to isolate it from other devices. The companion app works on both iOS and Android and integrates with Google and Amazon ecosystems, but does not currently support Apple HomeKit. Inside the app, you can manage users, register palm veins, assign or revoke codes and RFID cards, and even set temporary or one-time passwords.

TCL says palm data stays on the lock and isn’t uploaded, though there’s no way to verify that independently. You can store up to 50 palms and 50 six-digit codes. It also offers features like duress passwords and limited-duration codes that could be useful for rentals or security-conscious households. What you won’t find is scheduling access for specific times of day, something some competing products do offer.

The lock also includes a built-in doorbell. It’s loud enough to hear inside, and you’ll get notifications on your phone, but there’s no video or two-way communication like a dedicated smart doorbell provides. Event logs are available in the app, so you can review entries and exits. My only annoyance with the app was the requirement to type in a PIN code every time I wanted to access the lock settings. Face ID or Touch ID support would have made that process smoother.

In daily use, the palm scanning was reliable. Registered users could approach the lock and gain access quickly. It even handled different angles well, and I haven’t yet had it mistakenly grant access to someone it shouldn’t. Rejections take a bit longer than acceptances, which might be a subtle security feature. It’s worth registering both hands since it only recognizes the ones you set up, and sometimes one hand might be occupied.

As a way to enter the house without pulling out a phone, typing a code, or carrying a key, the palm vein technology worked smoothly. It’s one of the more seamless experiences I’ve had with a smart lock.

I bought the cheapest Windows laptop at Walmart: The $179 HP 14 Laptop

This weekend I picked up one of the least expensive Windows laptops I could find on a retail shelf: the HP Laptop 14, which I bought at Walmart for $179 (compensated affiliate link). The goal was to see just how far a low-cost machine like this can go, and what I found is that while there are certainly compromises, there are also a few pleasant surprises.

Check out my video review here!

The biggest surprise is how easy it is to upgrade. The bottom cover comes off with just four screws, revealing a standard DDR4 RAM slot and an empty NVMe slot. Out of the box, it ships with only 4 GB of memory and 128 GB of UFS storage, but I easily swapped in 16 GB of RAM and added a 1 TB SSD, making the system far more usable without losing the warranty or HP’s one-year support. All in, it’s feasible to make both upgrades while keeping the total investment under $250.

Its Intel N150 processor is the same quad-core part I’ve tested in many budget mini PCs, and while it won’t compete with a high-end laptop, it’s efficient enough to get real work done even without the memory upgrade.

The built-in storage performed better than I expected, hitting around 800 MB/s in reads and writes, but adding the NVMe drive brought things closer to 1.2 GB/s. This setup even opens the door for dual-booting Windows and Linux, which the Intel N150 chip inside handles quite effectively.

With 4 GB installed, video playback stuttered and multitasking was sluggish. With 16 GB, YouTube ran smoothly at 1080p60, Office apps opened quickly, and even some light gaming became possible. GTA V, for example, managed to hit around 30 frames per second at low 720p settings, and a PS2 emulator ran most titles at close to full speed.

The biggest letdown here is the display. It’s a 14-inch 1366×768 TN panel at 250 nits, which means washed-out colors and narrow viewing angles. It’s fine for web browsing and word processing, but it’s not suited for editing photos or video. The webcam isn’t much better, but it does at least include a physical shutter.

Weight comes in at 3 pounds and the build quality is all plastic, yet sturdier than I expected for the price. The keyboard and trackpad—though springy and spongy—are functional. Ports are limited, with just two USB-A, one USB-C for data only, HDMI, and a headphone jack. Wi-Fi 6 support is built in, and in my testing it delivered 300–400 Mbps, which is enough for streaming and even cloud gaming. Services like GeForce Now ran smoothly as long as the Wi-Fi connection was decent.

Battery life came in at about five to six hours under light use, which is reasonable given the efficiency of the processor. The fan does kick on under load, but at idle it’s quiet. Windows 11 ships in S mode by default, restricting installs to Microsoft Store apps, though switching out of S mode is quick if you need more flexibility.

Linux also ran well here – in fact it’ll run better than Windows with the base 4GB of RAM when using a lightweight distribution. I am running a few home servers on N150 Mini PCs and the performance here felt very much on par with those devices.

What stood out to me is how much you can get out of this little machine with a few inexpensive upgrades. It’s a cheap laptop from a recognizable brand, with a one year warranty and domestic support, and that sets it apart from the nameless imports that sometimes offer slightly better specs. The display holds it back from being truly versatile, but with extra RAM and an SSD, the HP Laptop 14 becomes a surprisingly capable everyday computer for not much money. It’s good to see these budget options are still available.

Disclosure: I paid for the laptop with my own funds. No one reviewed or approved this content before uploading and all opinions are my own.

Tubi’s “Boss Key” PR Stunt Encourages Workplace TV Streaming

Tubi, the free streaming TV service, has released a Chrome extension aimed at people who sneak in some streaming while at work. The extension includes what’s known as a “boss key,” which stops the video and instantly replaces it with a productivity-looking website, giving your boss the impression you’re working.

While this is just a stupid PR stunt, the extension turned out to be more robust than I anticipated. It also got me thinking back to some of the fun boss keys that used to be included with computer games in the 80s and early 90s.

See the Tubi boss key and a few classic ones in my latest video!

Tubi promoted this with a press release that claimed 84 percent of Gen Z users watch movies or TV shows at work. That number seemed high to me. Back when I worked in an office, I might throw on a podcast while doing mindless tasks, but full shows felt like more of a commitment. Still, the extension itself turned out to be worth a closer look.

Inside the folder where the extension is stored, I found some customization options. The HTML page that appears when the boss key is triggered can be edited or replaced, so it’s possible to swap in something from a corporate intranet or a more believable screen. There’s even decent documentation included for modifying its code.

The extension only works on Tubi out of the box, but it looks possible to adapt it to work on other sites too. I ran the code through Google Gemini to see if it was sending anything back to Tubi, but it appears benign and limited to their site.

The idea of masking your screen with a fake productivity page has a long history. Back in the early 80s, computers could only display one program at a time, so a quick swap was the only way to hide what you were really doing.

The earliest example I came across was on the Apple II. A game called Bezare, written by Roger Wagner, had a boss key that displayed a fake VisiCalc screen—the spreadsheet program that was the Apple II’s killer app. Later, a DOS version of Tetris had one too, swapping to a Lotus 1-2-3 lookalike when triggered. Sierra Online built them into several of its adventure games as well. Leisure Suit Larry popped up a colorful chart of sales data for contraceptives, while Space Quest III flipped the idea on its head by ratting you out with a dialog box showing how long you’d been playing.

I spent part of my weekend firing up emulators to revisit a few of these boss keys, and it was fun to see how far back the tradition goes. For something a little more modern, the NCAA has long had a “boss button” on their March Madness website.

Channels App Beta Offers Over the Air Multiview Feature

The Channels App just rolled out one of the more interesting cord-cutting tools I’ve come across in a while: the ability to watch four separate over-the-air TV channels simultaneously on an iPad or Apple TV.

Check it out here!

The feature is still in beta, so users will need to obtain the beta app through Apple’s TestFlight app. A subscription to the Channels App is also required ($8 monthly or $80 annually). I tested it on an Apple TV connected to my antenna through an HDHomeRun Flex 4K, and the experience worked better than I expected. Switching between streams was quick, and I could easily bring one channel forward while keeping an eye on the other three in the background.

If you’re not familiar with Channels, it’s a DVR platform that runs on a variety of devices. It requires a server component—usually a small PC or NAS—and supports hardware transcoding and out-of-home viewing. The app costs about eight dollars a month and works with HDHomeRun tuners for over-the-air broadcasts. It also integrates TV Everywhere channels if you still have a cable subscription and even supports creating your own custom virtual channels.

The multiview feature only works with live channels, so you can’t use it with recorded shows or personal media, but it’s flexible enough to handle both ATSC 1.0 and ATSC 3.0 broadcasts. Setting it up is straightforward: choose a channel, enable the multiview option, and then fill the other slots with the stations you want to monitor. Once you’re watching, you can switch the audio and enlarge a window with a click, or replace a channel on the fly. There are some rough edges at this stage, like the occasional frame stutter, but for a beta release it’s functional.

On the server side, the number of streams you can run depends on your tuner hardware. Each channel you add uses up one tuner, so if you want four channels at once, you’ll need a device that supports four simultaneous streams. Adding an extra HDHomeRun box is one way to scale if multiple people in the household want to record or watch at the same time.

One caveat is the ongoing battle over encryption of broadcast TV signals. If broadcasters succeed in pushing for mandatory encryption, features like this could be limited or disappear entirely, since broadcasters are blocking devices like the HDHomeRun from decrypting over-the-air broadcasts.

This beta is a fun way to get more out of live TV and is one of the coolest things I’ve seen in the cord cutting space in quite some time. It feels especially handy for sports fans who want to keep tabs on multiple games at once. I’ll keep experimenting with the feature and will update as it develops.

MeLE Overclock4C N150 Mini PC Review

My latest Mini PC review is of the Mele Overclock4C, a mini PC built around Intel’s N150 processor. Despite the name, it isn’t actually overclocked, but the cooling solution sets it apart. Unlike some of Mele’s other fanless designs, this one uses a fan paired with a sizable copper heat sink. That design choice helps it sustain performance better under load while keeping noise levels surprisingly low. Even when the fan spins up, it remains quiet enough to be unobtrusive.

https://www.youtube.com/watch?v=0QCArhYDYoY

The model I tested came with 16 GB of DDR4 RAM and a 512 GB NVMe drive, both of which are accessible if you want to swap or upgrade components. The RAM is expandable up to 32 GB, and storage upgrades are straightforward. The case itself is mostly plastic with a metal base, and a VESA mount is included for attaching it to a display.

You can see all of the configurations over at Amazon (compensated affiliate link).

Connectivity is decent. There are two USB 3.0 ports, a USB 2.0 port, dual HDMI outputs, a headphone jack, an SD card slot, and a full-service USB-C port that supports video, data, and power, though it lacks Thunderbolt or USB4.

Networking is where the system feels dated, limited to gigabit Ethernet and Wi-Fi AC, while many similar N150 devices now ship with 2.5 gigabit Ethernet and Wi-Fi 6. Power draw is modest—about 13 watts at idle and up to 32 watts under load.

Performance is what you would expect from the N150 line. General computing tasks at 4K resolution ran smoothly, with no issues using applications like Word, Excel, or browsing the web. Video playback was reliable, handling 4K60 streams without hiccups beyond a brief stutter on startup.

Benchmark results lined up with other N150-based systems I’ve looked at. Gaming is possible if you set your expectations accordingly. Grand Theft Auto V ran at around 30 frames per second on low settings at 720p, and PlayStation 2 emulation was mostly full speed. Streaming from GeForce Now at 4K60 was smooth over Ethernet, further broadening the system’s gaming options.

Thermals are where this PC stands out. A stress test confirmed stable performance with little to no throttling, holding steady at around 47°C, lower than comparable fanless or less robustly cooled designs. The stronger cooling doesn’t make the N150 chip any faster, but it ensures consistency during prolonged heavy use.

On Linux, the system behaved as expected with one exception—the built-in Intel AC 9560 Wi-Fi chipset wasn’t recognized by the latest Ubuntu release. Ethernet worked fine, and with the right drivers, Wi-Fi should too. That small issue aside, it has the potential to serve well as a compact server, whether for Docker containers or media streaming.

The Mele Overclock4C doesn’t deliver more raw performance than other N150 mini PCs, but its cooling design makes it a better fit for those who plan to run it under sustained workloads. It’s a practical little system that can handle everyday tasks, some light gaming, and server duties without struggling to keep its performance stable over time.

See more Mini PC reviews here!

Disclaimer: Mele sent the computer to the channel free of charge; no other compensation was received. They did not review or approve this content prior to uploading, and all opinions are my own.

Aurzen Roku D1R Cube Smart Projector Review

In my latest video review, I take a look at a new projector from a company called Aurzen that comes with Roku built right in. When you power it on, you’re greeted with the Roku interface, and it even ships with a Roku remote. It’s not a stick or an add-on—it’s fully integrated.

The projector is on the lower tier price-wise – this is one of those devices that sees frequent price fluctuations and sales so take a look over at Amazon (compensated affiliate link).

At 330 ANSI lumens, it isn’t very bright, so in a well-lit room the image can be hard to see. It performs better in a darkened room with blinds drawn. Resolution is capped at 1080p, though it will accept 4K input and downscale. There’s no HDR support, but the major streaming services negotiate resolution correctly, and Netflix plays back at full 1080p, which is notable since many budget projectors don’t have a Netflix certification.

The hardware is compact, with a built-in power supply and stereo speakers that sound decent. There are options for connecting external audio via Bluetooth or the analog output. On the back you’ll find a USB port for loading in media files, an HDMI input, and minimal physical controls. The included remote works reliably, and because it’s a Roku device, the Roku mobile app is also supported. For positioning, there’s a small kickstand and a standard tripod mount. An 85-inch screen requires about 8 feet of throw distance, and that’s close to the maximum usable size in my testing.

In practice, the image looks sharp enough and color reproduction is consistent with expectations for the price. Brightness, however, remains a limitation, especially with darker content. There’s no manual brightness control, though autofocus and auto-keystone work well. These adjustments, along with orientation settings, are accessed through the Roku menu rather than physical dials, and you can fine-tune the focus and keystone manually through the same interface.

Streaming performance feels similar to a Roku stick. Apps like Disney+ and YouTube run at 1080p, and casting via Apple AirPlay or Miracast works smoothly. I tested AirPlay with a Keynote presentation from my iPhone, and the projector carried the presentation while the phone displayed the next slide and presenter notes.

Gaming was a different story. While HDMI inputs displayed a sharp, fluid 60 fps image, input lag was severe—around a quarter second. For casual presentations or watching content, it’s fine, but fast-paced gaming is not something I can recommend with this one.

For someone who already likes Roku’s ecosystem and needs a simple, low-cost projector, this fits the bill. It’s best suited for smaller screen sizes in dark rooms. The biggest drawbacks are brightness and input lag, but for straightforward streaming use, it works as advertised.

Disclosure: Aurzen provided the projector to the channel free of charge. However, no other compensation was received, they did not review or approve this content before it was uploaded, and all opinions are my own.

ATSC 3.0 Update: Broadcasters Contradict Themselves in Recent Filing

The nation’s largest broadcasters are continuing to push an over-the-air encryption plan that will make it harder for people to record content or use gateway devices to watch TV around the house. What has been a free and open system is moving toward a locked-down approach unless the FCC steps in.

As it becomes clearer that encryption—and the market gatekeeping it enables—are holding back both tuning device availability and adoption, broadcasters are now demanding a government mandate to push it all through. But just a short time ago they were advocating for government to stay out of the process.

In my latest video, I take a look at how broadcasters are contradicting themselves in a recent FCC filing.

After Tyler the Antenna Man and I met with the FCC, the nation’s largest broadcasters quickly followed with their own meeting and filed an ex parte letter about it. In the letter, the broadcasters say:

“We emphasized that all parts of the broadcast ecosystem – from CE manufacturers to developers of converter boxes to retailers and smaller market broadcasters – are waiting for a signal from the FCC that there is a plan to bring the transition to ATSC 3.0 to an end.”

In response, the Consumer Technology Association reminded the FCC, in a meeting and a follow-up ex parte filing, that all parties, including the broadcasters, never wanted the government stepping in, as the transition was supposed to be a voluntary, market-driven one. But the CTA stopped short of saying what is obvious: that DRM has been the real barrier to adoption.

But the CTA was joined by Public Knowledge in their meeting with the FCC, and that organization very strongly pointed out the pitfalls in allowing a select group of broadcasters to essentially regulate consumer electronic devices.

Check out my interview with Public Knowledge’s lead attorney here.

Looking back at their own public statements shows how much the broadcasters have shifted in their position. In 2019 Pearl TV, the organization made up of the large broadcast owners, was promising great new technology and choice for consumers under this voluntary transition strategy. In 2021 they touted gateway devices like the HDHomeRun, even though they later denied that device certification. By mid-2023 they were boasting about adoption and rhetorically asking “where’s the problem?” with regard to tuner uptake. They urged the FCC to stay out of the market, but now they want a mandate to force adoption.

They even contradicted statements they made just a few weeks ago. In their letter they state:

“We discussed A3SA’s uniform set of policies that applies equally and objectively to all manufacturers of a particular device type. Finally, we explained that A3SA does not certify hardware components or chips within devices.”

Yet in July, these very same lawyers told the FCC that the HDHomeRun was being blocked because of its chips. They CC’d the industry press and just about every relevant department with it too.

Conversations I’ve had with broadcast executives suggest they don’t really understand the technology they’re trying to bolt onto broadcasting. Encryption designed for the web doesn’t translate cleanly to over-the-air TV. Yet they continue to dig in, convinced it’s necessary. Much of their industry today is built on retransmission fees rather than actual viewers, and DRM protects those business interests.

And this goes beyond just the encryption. Another feature, signal signing, gives this small group of large broadcasters the ability to take a channel off the air. Even stations that don’t want encryption still need to pay for a certificate from the major broadcasters just to appear on certified tuners. Engineers like Weigel’s Kyle Walker have raised these concerns, but the executives pushing this system seem more interested in invoking flawed analogies—like comparing broadcast encryption to SSL on websites—than in engaging with real technical risks. One of those executives made exactly that comparison in a recent LinkedIn exchange.

The examples they cite don’t hold up. The 1987 Chicago “Max Headroom” hijacking and the more recent Russian satellite hijacks were both upstream feed compromises that encryption and signing would not have prevented. Yet they continue to argue that certificates protect against threats that have nothing to do with the broadcast signal itself.

For consumers, the result is fewer choices and fewer freedoms. Encryption blocks devices, limits how recordings can be made, and puts unnecessary restrictions on how people watch the signals they’re legally entitled to receive. By demanding a mandate, the broadcasters are now tacitly acknowledging that this market has failed, but it’s their own system that created the failure.

If they really want adoption, there’s a simple solution: stop encrypting. Remove the DRM and devices will appear, consumers will buy them, and the market they keep talking about will actually materialize. Instead, they’re asking the FCC for a mandate to force this system into place. I think the better mandate would be the opposite—no encryption and no private regulation of public airwaves. That’s the kind of order I’d get behind.

GeForce NOW Game Streaming Service with Nvidia RTX 5080 – 2025 Review

It’s been a while since I did a deep dive into Nvidia’s GeForce Now streaming service, so in my latest video I take a look at where things stand in 2025.

The idea behind GeForce Now remains the same: for a monthly fee, you’re effectively renting time on high-end Nvidia hardware in the cloud, which lets you play games at higher settings and frame rates than you could manage on a low-end or aging PC. It also works on mobile devices, gaming handhelds and TV boxes.

The service does not include any games, however. GeForce Now syncs with accounts from popular PC game stores such as Steam, GOG, and Microsoft’s PC Xbox store. Games you’ve purchased on those platforms are playable on GeForce Now, provided the game’s publisher allows streaming—though not all do.

Games directly supported on the service are already downloaded and ready to go with optimized settings. Your saved games will also sync up automatically. Nvidia has also added a new “install to play” feature. Alongside its usual “ready to play” optimized titles, you can now allocate up to 500 GB of cloud storage to install games that allow streaming but haven’t yet been optimized for the GeForce Now service. Those titles require manual graphics tuning, but it does expand the potential catalog quite a bit.

Another recent update to the service allows users on the “Ultimate” subscription tier to play a select number of games with new RTX 5080 hardware. Most games will spin up on the RTX 4080 servers, since not every title is yet supported on the 5080. In my video I demoed streaming Cyberpunk 2077 at 4K on a cloud 5080 with variable refresh rate and G-Sync enabled. The game stayed well above 100 frames per second with excellent image quality and minimal latency.

The GeForce Now statistics overlay provides helpful real-time data such as bandwidth consumption and latency. My connection to Nvidia’s New Jersey datacenter held steady at 11–12 milliseconds of latency on Comcast’s Gigabit Pro service, which helped the experience feel close to native PC gaming. Ethernet proved essential here; Wi-Fi couldn’t reliably keep up with the bandwidth demands of 4K 120fps streaming.

I also ran the service on very low-end hardware. My budget GMKtec mini PC, which costs under $200, had no trouble streaming Doom Dark Ages at 4K 60fps. As long as I used Ethernet, the experience was smooth with minimal lag. GeForce Now also supports mobile platforms including a native Steam Deck client. On handhelds, where resolution demands are lower, Wi-Fi worked well and only needed about 20 megabits per second.

Pricing spans three tiers. The free tier provides one-hour sessions on 1080p/60 servers—useful for testing whether your connection can handle it. The Performance tier steps up to 1440p/60, while the Ultimate tier unlocks RTX 4080 and 5080 access, 4K streaming, and frame rates up to 360 fps. At $200 annually, the Ultimate plan gives you eight-hour gaming sessions, which for most people is more than enough for a single sitting.

GeForce Now works equally well on a tricked-out desktop with a G-Sync display or a bargain mini PC that could never manage these games locally. The key variable remains your proximity to Nvidia’s datacenters and the quality of your ISP’s routing. For me in Connecticut, it is a seamless way to play, and it’s clear Nvidia has continued to refine the experience since the last time I tested it thoroughly.

DJI Mic 3 Review

In my latest review, I took a look at the DJI Mic 3, the newest iteration of DJI’s wireless microphone system. This time I focus more on the casual user looking for a simple “run and gun” setup.

Like the previous iterations, it does work as advertised for those looking for a simple solution. Plug it into a phone, camera, or computer and it pops up and works with little fuss.

I bought the two-microphone kit (compensated affiliate link), which comes with two transmitters and a receiver, though the receiver can handle up to four mics. In certain setups you can even record each mic onto its own track, which is useful for editing later if your gear supports it. This new version doesn’t require the receiver unit at all – in fact you can just buy a transmitter and link it directly via Bluetooth to a phone. But that’s probably not the ideal configuration.

The included USB-C dongle locks securely into place, which is a big improvement over the earlier version where it would slip out easily. It works with newer iPhones thanks to the switch to USB-C, but anyone with a Lightning-based iPhone will need to purchase an adapter from DJI.

The carrying case doubles as a charger, and DJI rates the mics for about eight hours of use and the receiver for about ten. Enabling advanced features like 32-bit float recording will drain the battery faster, and the batteries aren’t replaceable, so longevity may diminish over time. The receiver now has a scroll wheel for navigating menus, which I found more precise than the older tiny touchscreen taps.

Connectivity is broad. Beyond USB-C, there’s also analog TRS mic output and a headphone jack for monitoring, making it usable with cameras, computers, and phones. Each transmitter can also record internally, which is a safeguard in environments with heavy interference. DJI says they’ll store about 57 hours in standard mode or 42 hours in 32-bit float, automatically overwriting the oldest files when full. Audio is downloaded by docking the transmitters in the charging case and connecting the case to a PC, phone, or tablet via USB-C.

Speaking of interference, the DJI Mic 3 works across the 2.4 GHz and 5 GHz spectrum, occupying the same frequencies that Wi-Fi uses. It will “frequency hop” to keep the signal steady, but in environments crowded with Wi-Fi and other devices on the same spectrum, performance may suffer. My advice would be to always enable the transmitter recording feature just to be safe.

In practice, the microphones sound better than the first-generation set I used before. They are omnidirectional, so they’ll pick up surrounding noise, but there are new noise reduction settings. In testing at a trade show, the basic noise reduction helped cut down background chatter, while the strong mode made the audio sound a little too processed. These settings only apply to the live wireless signal, not the onboard recordings, so any recorded files still need software cleanup if conditions are noisy. There are also voice presets labeled standard, rich, and bright. They’re subtle changes, but I found “rich” gave a touch more warmth to my voice.

Mounting options are flexible. The transmitters have clips and magnets strong enough to hold through clothing, though there are plenty of small accessories to keep track of. For outdoor work, the included furry “dead cat” wind screens snap on securely and help tame wind noise. Through the companion app, I was able to configure professional features like timecode synchronization, lossless recording, adaptive gain, and 32-bit float capture. Timecode is especially useful when syncing multiple tracks in editing. The advanced modes aren’t really plug-and-play and require more post-production work, but they’re there if you need them.

Overall, I see the DJI Mic 3 as approachable for those looking for a basic mic set, with some additional features that pros will appreciate. While I use a higher-end Sennheiser set for my remote shoots, it’s nice to have something quick and easy for the times I need a simple solution with minimal hardware to get the job done.

Did Apple Already Tell Us Its AI Plan in 1987 with the Knowledge Navigator Concept?

I just finished watching Apple’s keynote, and like most years, it was a predictable lineup of iPhones, AirPods, and Apple Watches. The hardware got its annual refresh, but there wasn’t anything that felt new or unexpected. The biggest topic of conversation was what Apple didn’t show: updates on its lagging AI strategy.

The “Apple Intelligence” feature set still feels underwhelming, and it made me think back to the Knowledge Navigator AI agent concept video Apple made in 1987, which may offer a clue about what they’re working on today.

I explore that and show you my own AI agent workflows in my latest video.

I first saw the Knowledge Navigator video as a kid in the early ’90s, when some friends and I formed an Apple user group that received promotional videos like this from Apple.

At the time, the Knowledge Navigator seemed like science fiction, but watching it now, it feels like a plausible direction for Apple’s AI ambitions. The video depicts a professor interacting with a digital assistant that not only responds to commands but anticipates needs—pulling up articles, reminding him of events, leaving messages, and even coordinating schedules with presumably other people’s agents.

What struck me most was how the agent handled tasks on the professor’s behalf, like trying to reach someone by phone, leaving a message, and then being ready to relay instructions when she called back. It even set up meetings.

If both parties had agents, they could negotiate directly without human back-and-forth. That kind of invisible efficiency is something I’d welcome—scheduling meetings is one of the biggest time sinks I deal with. With language models as capable as they are now, this no longer feels like far-off science fiction.

I suspect Apple is quietly working on this agent model. Their recently released Apple Invites app caught my attention because it seemed like such an odd standalone product, but it would make sense as a building block in a future where AI agents manage more of our day-to-day logistics.

When Apple is finally ready to make their big AI push, I think it will be around agents. “I’ll have my Siri call your Siri and we’ll do lunch” might be in our near future.

I’ve been experimenting with this idea myself. Using an open-source tool called N8N, I’ve built a few agents that automate parts of my routine. One sends me a daily morning email with my calendar and curated stories from the gadget and cord-cutting sites I follow. It uses Google’s Gemini API model to filter through RSS feeds and highlight what I might want to cover on my channel. The setup works well enough that it reminds me of the professor’s morning briefing in that Apple demo.
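For those curious how the digest workflow fits together, here’s a minimal Python sketch of the core steps: parse RSS items, then filter them down to topics of interest. To be clear about assumptions: in my actual setup the filtering is done by an N8N agent prompting Gemini, while this stand-in uses simple keyword matching so it runs with no API key or network access, and the sample feed and topic list are made up for illustration.

```python
# Minimal sketch of an RSS digest pipeline: parse feed items, then filter.
# The keyword matcher is a hypothetical stand-in for the LLM filtering step.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <item><title>GPD Win 5 handheld gaming PC announced</title></item>
  <item><title>Celebrity gossip roundup</title></item>
  <item><title>New cord cutting DVR options for 2025</title></item>
</channel></rss>"""

# Topics I'd want surfaced in the morning email (illustrative list).
TOPICS = ["handheld", "gaming", "cord cutting", "gadget"]

def parse_titles(feed_xml: str) -> list[str]:
    """Extract item titles from an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title", "") for item in root.iter("item")]

def filter_headlines(titles: list[str], topics: list[str]) -> list[str]:
    """Keep headlines that mention any topic of interest (case-insensitive)."""
    return [t for t in titles if any(k in t.lower() for k in topics)]

if __name__ == "__main__":
    for headline in filter_headlines(parse_titles(SAMPLE_FEED), TOPICS):
        print(headline)
```

The real workflow swaps the keyword matcher for an LLM prompt and renders the survivors into an HTML template, but the overall shape, fetch, filter, format, is the same.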

Scheduling is trickier. I’ve tried building an agent that can handle booking meetings based on my availability, and while it sometimes works, it’s far from reliable. Getting the models to properly parse my calendar was a challenge until GPT-5 came along, but even then, the success rate isn’t high enough to trust it with real interactions. Still, the framework is there, and it feels like a glimpse of what’s possible once the technology matures.

Right now, most consumers are engaging with AI through search-like interactions, asking questions and getting quick answers more efficiently than searching on their own. But the real leap will come when agents can act on our behalf, working with other agents to complete tasks without constant human oversight. That’s the vision Apple hinted at nearly 40 years ago, and it may be the key to making their AI efforts feel truly impactful when they finally step into this space.

Linux Gaming Part 2: AMD to the Rescue?

In my latest video, I revisit my Linux gaming experiment with AMD hardware after the feedback I received on my first attempt. You can see the results here!

In that earlier video, I installed a Linux distribution called Bazzite on a gaming laptop with an Nvidia GPU and the results were disappointing compared to Windows. Many of you suggested that the real problem was Nvidia’s drivers and recommended I try an AMD GPU instead. That’s what I did this time.

For this follow-up, I set up a GMKtec Evo T1 mini PC (compensated affiliate link) with an Intel Core Ultra 9 285H paired with GMKtec’s external GPU unit, the AD-GP1 (affiliate link), on top. Inside that enclosure is an AMD RX 7600M XT with 8 GB of VRAM connected over Oculink. This is essentially the same as plugging a card into a desktop. It’s the only AMD setup I had on hand, but it seemed like a good test case, especially for those interested in eGPUs.

Bazzite installed without issues. The hardware, including the GPU, was detected automatically with no manual intervention. I should note that both the mini PC and GPU were provided free of charge by GMKtec, but they had no role in this video’s content or opinions.

For benchmarking, I started with Cyberpunk 2077 on medium settings at 1080p. On Windows, the same setup averaged 131 frames per second. On Linux with AMD, the benchmark came in at 127.77 frames per second, essentially within the margin of error. In the prior video we saw about a 20% reduction in performance running similar tests. What impressed me most was that I didn’t have to touch the command line or tweak anything—it simply worked out of the box.

Next up was No Man’s Sky. Running at 1080p with enhanced settings, the game hovered around 60 frames per second, sometimes higher. The performance felt on par with Windows, without the performance hit I saw on Nvidia.

Not everything worked perfectly. Red Dead Redemption 2, which I own on Steam, wouldn’t boot at all. Others in the Bazzite community reported similar issues, so it seems like a known compatibility problem. On the other hand, Terminator: Resistance, a fun first-person shooter, ran at 4K medium settings at about 60 frames per second, again comparable to Windows.

Overall, using AMD hardware brought me much closer to a plug-and-play Linux gaming experience. Many of the games I tested ran just as well as they do on Windows.

All of this reminds me of the Linux-based Alienware Steam Machine I tested about a decade ago, where the promise was there but the compatibility wasn’t. Proton has changed that equation, and while not every title works, most do, and they work well. This experiment showed me that with the right hardware, Linux gaming can feel nearly turnkey, much as it does on the Steam Deck.

Thanks to everyone who encouraged me to try AMD hardware. It made a big difference, and if you have an AMD GPU, you might find that Linux gaming works better than expected. The progress in just ten years is remarkable, and it raises the question of whether we might soon see purpose-built Linux gaming machines make a comeback.

A Tech Dispatch from IFA in Berlin, Germany!

I just got back from Berlin, Germany, where I attended Showstoppers, a companion event to IFA, Europe’s version of CES. And just like CES, I produced a dispatch from the event!

Check it out here!

Lenovo sponsored my trip again this year, covering travel but not influencing what I covered, and I was able to see a wide mix of concept ideas, shipping products, and quirky tech that you don’t always come across in the U.S.

At Lenovo’s press event I saw a few different concepts and new products. One that stood out was a concept laptop with a pivoting hinge that gives the display extra vertical space, potentially useful for editing, coding, or browsing. Lenovo’s new Legion Go 2 handheld is about to ship with detachable controllers, an OLED display, and robust ports, while their Legion Pro OLED gaming monitors push into high refresh rates and slim designs. Other Lenovo highlights included affordable Idea Tab Plus tablets, higher-end Yoga Tabs, a new aluminum-clad ThinkPad X9 line, and even some concept gear like a Smart Motion dock that physically tracks your face and keeps the laptop pointed at you.

Over at Showstoppers, I came across a number of smaller companies showing interesting devices. DigiEra had a chunky handheld tablet PC called the HoloMax with 3D display capabilities and powerful specs. BlackView showed off a rugged smartphone with built-in VHF and UHF radios, essentially combining a walkie-talkie and phone into one device. Momax introduced inexpensive Find My-compatible trackers with some extra safety features like a loud siren. Ugreen displayed a six-bay NAS powered by Intel’s Core Ultra processors and outfitted with Thunderbolt. Anker had a massive $5,000 portable projector and sound system called the Nebula X1 Pro, clearly aimed at outdoor movie nights.

There were also plenty of niche gadgets and fun experiments: a waterproof point-and-shoot camera for kids from Agfa Photo, keyboards from Epomaker with detachable screens and unusual switches, a SwitchBot robot that plays tennis with you, and the Hover Air camera that autonomously follows you around with no need for a remote control. Belkin had some new budget-friendly earbuds, car chargers, and magnetic wireless chargers, while Charge showed an SSD with integrated cooling and hub functionality.

This event is much more fun to watch than read. Check out the video and see all of my previous dispatches here!

I’ll have a mini-dispatch coming up in late October from a Pepcom event. Stay tuned!

HP Omnibook 5 with Snapdragon Review

My latest laptop review is of the HP Omnibook 5, a Windows laptop built around Qualcomm’s Snapdragon X Plus processor.

The Omnibook 5’s Snapdragon processor is ARM-based, which means it offers strong battery efficiency compared to similarly priced Intel or AMD machines, but it also comes with some compatibility trade-offs.

It starts at a very reasonable price (check out the latest price on Amazon – compensated affiliate link), and even at that entry tier, it comes with 16 GB of RAM and a 512 GB SSD, which is more generous than what I typically see at this price. The model I tested was equipped with 32 GB of RAM and a 1 TB SSD.

The display is a 14-inch OLED panel running at 1920 by 1200. It delivers strong contrast and sharpness, though reds can appear oversaturated, and it doesn’t fully cover the DCI-P3 color space, so it’s not suited for professional color grading. At 300 nits, brightness is adequate under most conditions, and the screen is topped with Gorilla Glass 3. Versions with touch support are available, but the one I used was non-touch.

There’s a 1080p webcam that supports Windows Hello facial recognition and comes with a physical shutter. Microsoft’s Copilot effects, like background blur, run smoothly. The keyboard is comfortable, backlit, and has decent travel, while the trackpad, though a little soft in feel, worked without issue.

Port selection includes two USB-C 3.2 ports capable of charging, video out, and 10 Gbps transfer speeds, alongside a 10 Gbps USB-A port and a headphone jack. The build mixes recycled plastics with aluminum on the lid and base, and the machine is light at about 2.8 pounds. It feels balanced and can be opened with one hand.

Battery life is where this laptop stands out. In my use, it lasted between 12 and 15 hours on typical workloads like Office apps and browsing, and I never ran into low-battery anxiety even after using it on and off for more than a day. The fan inside is rarely audible, and the system stays stable under sustained load.

Performance on everyday tasks is solid. Browsing, Office, and media playback ran smoothly, with 4K 60fps YouTube video playing back without trouble. For video editing, DaVinci Resolve has an ARM-optimized version, and I was able to stitch together 4K clips with smooth transitions, though heavier effects slowed things to a crawl.

Gaming is the biggest compromise. Modern titles like Red Dead Redemption 2 and No Man’s Sky wouldn’t run, but older games and a remastered Star Wars: Dark Forces played fine. Benchmarks landed it near older Ryzen 7 laptops, but game compatibility remains inconsistent because many titles still don’t support the Snapdragon ARM processor even with Microsoft’s compatibility efforts.

Linux is technically possible on Snapdragon laptops but requires extra work, so those wanting to dual-boot would be better off with Intel or AMD hardware.

I came away with the impression that the Omnibook 5 is best for casual work and media use. It’s not the machine for gamers or professionals needing specialized software, but for someone who wants a lightweight laptop with excellent battery life and a capable feature set at a low entry price, it has a clear place.

No, the FCC Did Not End ATSC 1.0 Broadcasts Today

The FCC’s Media Bureau put out a notice on September 2nd (DA 25-789) about the ongoing transition to ATSC 3.0, also known as NextGenTV. There really isn’t anything new in this order that wasn’t already in place from 2017 when ATSC 3.0 broadcasts first began.

One of the key points is around the requirement that full-power and Class A stations continue to provide their main channel in the current ATSC 1.0 format if they switch their primary signal over to ATSC 3.0. This is meant to protect viewers who don’t yet have compatible equipment.

The rules allow an application to be processed on an expedited basis if a station keeps at least 95 percent of its existing audience covered with an ATSC 1.0 simulcast. What the Bureau clarified is that it will continue to use detailed terrain-based coverage analysis to determine whether that 95 percent threshold is met.

For stations that can’t quite hit that 95 percent mark, the notice emphasizes that their applications won’t be ignored. The Bureau says it will still review them on a case-by-case basis, weighing factors such as whether viewers in the “loss area” are still served by another station carrying the same network, or whether the station offers mitigation like providing converter boxes.
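To make the two processing tracks concrete, here’s a toy sketch, my own illustration rather than any official FCC tool, of how the 95 percent simulcast coverage threshold divides applications between expedited and case-by-case review:

```python
# Toy illustration of the expedited-processing rule described above:
# at least 95% of the existing audience covered by an ATSC 1.0 simulcast
# qualifies for expedited review; anything below falls to case-by-case.
def processing_track(covered_viewers: int, total_viewers: int) -> str:
    """Classify a transition application by simulcast coverage."""
    if total_viewers <= 0:
        raise ValueError("total_viewers must be positive")
    coverage = covered_viewers / total_viewers
    return "expedited" if coverage >= 0.95 else "case-by-case"
```

In practice the Bureau determines those viewer counts with detailed terrain-based coverage analysis, not a simple headcount, so treat this purely as a way to see the threshold at work.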

The Bureau also highlighted some of the flexibility already built into the rules. Stations are only required to simulcast their main channel in ATSC 1.0, not additional sub-channels. The “substantially similar” programming requirement applies only to that main stream, which gives broadcasters room to experiment with new ATSC 3.0 features such as interactive services or higher-resolution video. Stations can also partner with more than one host station to meet the 95 percent coverage goal. Low-power and translator stations aren’t required to simulcast at all, though they can volunteer to host other stations’ signals.

It’s important to note that this notice doesn’t create new obligations or change the simulcast rules. Instead, it’s meant to give broadcasters more certainty about how the FCC staff will interpret and process applications. In other words, this is more about guidance and reassurance than a firm new policy.

Look for a draft order that will more specifically spell out the rules for the cutover – including whether or not DRM will be allowed. Stay tuned!

ATSC 3 Update – “High Noon”: A Secret Broadcaster Plan to Take Over the Public Airwaves

I’ve been following the ongoing debate over the encryption of over-the-air television signals for several years now. While most of that coverage has focused on the consumer experience, there’s also some pain in store for smaller independent broadcasters through the “High Noon” effort being imposed by the nation’s largest conglomerates.

I dive into that in my latest analysis piece.

“High Noon” is not some conspiracy theory – it’s the actual name of a plan about to be implemented by the nation’s largest broadcasters that requires every station to purchase an encryption certificate through a private security authority called the A3SA. That authority, of course, is owned and operated by the nation’s largest broadcasters and has the power to revoke these certificates at will – essentially being able to pull smaller stations off the air even if they have a valid FCC license.

These certificates are a requirement of the ATSC 3.0 standard even if the station doesn’t broadcast a DRM-encrypted signal. And if that’s not bad enough, the rules governing how all of this works are locked behind an NDA, so nobody can talk about it. And of course this private authority can change the rules anytime it wants.

And how can they pull a station off the air? The few tuners on the market that support DRM must also support this signature authority. If a tuner doesn’t detect the certificate, it won’t show the station to the viewer, citing security issues.

The backdrop here is a filing from Weigel Broadcasting, one of the larger independent broadcasters with stations nationwide and an over-the-air digital network reaching most U.S. households. Unlike the big conglomerates, Weigel relies on actual viewers tuning in for ad revenue, so they’ve resisted DRM from the start. They’ve also been vocal in their opposition on the FCC docket, pointing out that DRM-compliant tuners are significantly more expensive than current ATSC 1.0 gear.

In tests, Weigel engineers confirmed that TVs are denied access to a channel when presented with unsigned signals, putting the A3SA effectively in the role of gatekeeper instead of the FCC.

“High Noon” was supposed to roll out on June 30, but broadcasters delayed its implementation in March. The reasons aren’t public, and under the NDA, people in the know can’t say why. I think pressure from independent stations and public opposition may be playing a role. Still, once that “High Noon” switch is flipped, broadcasters could find themselves in a position where their ability to reach viewers depends less on FCC licensing and more on private agreements with a handful of corporations.

The justification offered is security—protection against hijacking and what their industry association says are “deepfakes” of a broadcast. But history shows these incidents are exceptionally rare. The only real example of a hijack was the Max Headroom incident in Chicago in 1987, when someone overpowered a microwave relay and briefly took over a broadcast. More recent disruptions have been the result of poor security practices, like leaving default passwords on emergency alert systems or mistakes made inside the broadcast center by technicians. Encryption and signing certificates wouldn’t have prevented those.

Meanwhile, the consumer side of ATSC 3.0 remains sluggish. DRM has made tuners more expensive, stunting adoption of what could otherwise be a much more consumer-friendly standard. Independent broadcasters argue that the only way forward is to drop DRM entirely and allow viewers to access the public airwaves without interference, which would bring down the cost of tuning devices substantially.

That’s where things stand now. The “High Noon” switch hasn’t been thrown yet, but the threat of it looms over the industry. For me, the question is whether the FCC will continue letting private groups usurp their authority, or if it will step in before viewers lose access to something they’ve always been entitled to receive.