The SanDisk Pro G40 portable SSD is marketed toward creative professionals and stands out thanks to its Thunderbolt & USB 4.0 compatibility, promising transfer rates of multiple gigabytes per second. It is also (mostly) backward compatible with slower USB hosts that lack Thunderbolt technology.
Because it’s a “professional” product, this drive comes in at a higher price point: about $174 for the 1TB version (compensated affiliate link).
The drive’s build quality is impressive, featuring a solid, rubberized metal design with a grippy bottom, common in other SanDisk drives. It is IP68 weatherproof, making it resistant to splashes, though users should avoid exposing it to harsh weather when plugged in.
It comes with a short Thunderbolt 3 USB-C cable that will also work on computers not equipped with Thunderbolt. Other cables, like USB-C to USB-A, will work as well but need to be purchased separately.
Performance tests on a Thunderbolt-equipped Windows computer showed it achieving over 2.4 gigabytes per second in read and write speeds, maintaining this without thermal drop-offs or cache issues. For optimal performance, it’s crucial to connect to the correct port, identified by the Thunderbolt or USB 4.0 icon, as using a standard USB port significantly reduces speeds.
The CrystalDiskMark benchmark further highlighted its strengths in sequential and random reads and writes, suggesting potential for PC gaming. Comparisons with other USB-C-based drives such as the SanDisk Extreme Pro and Samsung T9 showed the Pro G40’s superior performance in Thunderbolt mode. Speeds were comparable to competitors when not connected via Thunderbolt, however.
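For readers curious about what these benchmarks actually measure, here’s a minimal Python sketch of a sequential throughput test. It’s not a substitute for CrystalDiskMark (buffered I/O through the OS cache will inflate the read numbers, and real tools use direct I/O and queue depths), but it illustrates the basic write-then-read pattern. The function name and sizes are my own illustration.

```python
import os
import tempfile
import time

def sequential_throughput(path, size_mb=64, block_kb=1024):
    """Rough sequential write/read throughput in MB/s.

    Goes through the OS page cache (no direct I/O), so results
    will read high compared to CrystalDiskMark.
    """
    block = os.urandom(block_kb * 1024)
    blocks = size_mb * 1024 // block_kb

    # Sequential write, flushed to the device before stopping the clock
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    write_mbs = size_mb / (time.perf_counter() - start)

    # Sequential read of the same file
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_mbs = size_mb / (time.perf_counter() - start)
    return write_mbs, read_mbs

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        target = tmp.name
    try:
        w, r = sequential_throughput(target)
        print(f"write: {w:.0f} MB/s, read: {r:.0f} MB/s")
    finally:
        os.remove(target)
```

Run it against a file on the drive you want to test; pointing it at a system temp directory only measures your internal disk.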
The SanDisk Pro G40 SSD contains an off-the-shelf WD Black SN850 NVMe (compensated affiliate link) drive inside its enclosure.
Notably, the Pro G40 does not work with iPhones, which limits its utility despite its potential for high bitrate, lossless video formats. Attempts to use it with other phones also failed, possibly due to power draw issues, making it suitable primarily for computers.
For Windows users, achieving full performance requires enabling write caching in the drive’s settings. I outlined those steps at the 6:12 mark in the video review. This extra step is unnecessary for Mac users, where the drive works at full speed out of the box.
While the Pro G40 is compatible with game consoles, its high performance can’t be utilized, making it an impractical choice for that purpose. However, for PC gaming and professional creative work, it offers excellent performance akin to internal drives. Despite its limitations with mobile devices, the Pro G40 is a compelling option for those needing a fast, reliable external drive.
My quest for the perfect wireless subscription-free doorbell has been a long one. I’ve looked at Blink, Wyze, Google Nest and a few others, but the Eufy E340 is the one I think is the best of the bunch. You can see my full review here.
It currently is selling at its Prime Day sales price at $119 (compensated affiliate link) but is normally $179. An indoor chime costs an additional $39.
One of the standout features of the Eufy E340 is its dual-camera setup. It includes a front-facing camera and a second camera aimed downward to monitor packages left at the doorstep. This design is particularly useful for keeping an eye on deliveries, ensuring that packages are not missed or stolen.
The doorbell stores footage locally, boasting 8GB of internal storage. If more space is needed, users can opt for the chime accessory, which includes an SD card slot for additional storage. There is also compatibility with Eufy’s hub for broader integration with other security devices.
The doorbell’s installation is straightforward, and it includes a secure mounting mechanism to deter theft. Users can also angle the doorbell using an included bracket for optimal coverage. The device operates efficiently in various lighting conditions, featuring both infrared and color night vision modes. The infrared mode uses invisible light to illuminate the scene, while the color mode employs visible lights for a clearer picture at night. You can see examples of both modes in the video above.
Audio quality is another strong point. The doorbell includes a microphone for clear audio recordings and a speaker for two-way communication. This setup allows users to interact with visitors or delivery personnel directly through their smartphones, providing real-time communication without the need for a walkie-talkie style exchange.
Battery life is impressive, lasting several months under low to moderate usage. The device also supports direct power connections if existing doorbell wiring is available, which helps maintain the battery charge indefinitely.
The Eufy app, available on both Android and iOS, provides a comprehensive interface for managing the doorbell. Users can view live footage, review recorded events, and adjust settings directly from the app. The app’s AI offers reliable human detection while ignoring non-essential movement like trees blowing in the wind or passing animals. This reduces false alerts and enhances the overall user experience.
One of the advanced features is the “Delivery Guard,” which tracks packages left at the door and can also activate a deterrent alert when a package is detected and someone walks up to the door. Users can also receive alerts if a package remains unattended for too long. The familiar faces feature allows users to upload photos of known individuals, enabling the doorbell to recognize and log visits from these people.
All in, the Eufy E340 Dual Camera Wireless Doorbell is a well-rounded, high-performing device that excels in key areas such as video quality, audio clarity, battery life, and smart features. Its dual-camera design and robust app integration make it a versatile choice for anyone seeking a reliable, subscription-free doorbell solution. For those in search of a dependable and cost-effective security addition to their home, this doorbell is worth serious consideration.
Disclosure: I paid for the doorbell with my own funds.
While Amazon does regularly update their Echo devices, it’s often hard to discern one version from the next these days as most are more revisions versus something totally new. But the other day Amazon released their latest Echo Spot that looks and operates a little differently than the others. You can see more in my latest review.
Unlike the more versatile Echo Show, the Echo Spot is being marketed primarily as a smart alarm clock. This device lacks a camera and has a more limited display functionality, but at the same time it’s not displaying ads to the user as a means of subsidizing its cost.
The design is reminiscent of other Echo devices, with a cut-off sphere look. It comes with a low resolution display that is mostly dedicated to showing the time and weather conditions. Unlike the Echo Show, it cannot display video footage from security cameras or provide the same depth of visual information. However, it will provide basic visual responses to certain queries, such as showing the weather or the current music track playing.
While it is capable of performing many of the same tasks as other Echo devices, the limited display means it’s less suitable for tasks that require more visual information. It can handle multiple countdown timers, for example, but the small display makes it challenging to manage several at once.
Amazon offers some limited customization of the clock face, including choosing from a few different layouts for the clock display along with the base color it uses. The clock faces can be adjusted using the touch functionality of the display but it’s much easier to do it via the Alexa app.
Amazon also built in a night mode that dims the display and switches to a red color for the clock to avoid disrupting sleep. Night hours can be set within the app. The alarm clock functionality is also intuitive; you can set alarms by voice and snooze them with a simple tap on the top of the device. There are different alarm tones available, including some celebrity voices, which can add a fun element to waking up.
In terms of audio, the speaker on the Echo Spot is decent, offering a richer sound than the entry-level Echo Dot but not quite at the level of higher-end audio devices. It’s suitable for background music or podcasts, especially when getting ready in the morning.
The physical controls on the top include a mute button for the microphone and volume buttons. The Echo Spot’s microphones are quite sensitive and can pick up commands from across a room, making it a convenient addition to a bedroom or office.
Overall, the Amazon Echo Spot serves well as a smart alarm clock, offering basic smart assistant functionalities without the added complexity or privacy concerns of a camera. It integrates smoothly into Amazon’s ecosystem and provides a balance of features for those looking for a simple, voice-controlled device for their bedside table.
Japan has officially banned the use of floppy disks in its government operations. This move comes as part of a broader effort by the country’s Digital Transformation Minister, Taro Kono, who has declared war on outdated technology. According to a recent Ars Technica article, the Japanese government required obsolete media formats, including floppy disks, CD-ROMs, and even MiniDiscs, for many official filings.
In my latest video we take a look at the Japanese law and explore why floppy disks persist as a storage medium over a decade after the last one was made. We also have a fun interview with Tom Persky, owner of floppydisk.com, one of the last remaining floppy disk retailers.
In the United States, the use of floppy disks persisted in critical areas longer than one might expect. For example, until 2019, the US nuclear arsenal relied on 8-inch floppy disks, which were eventually replaced by secure digital storage solutions. Floppy disks also remain essential in certain private sector areas, particularly in aviation, where some Boeing 747s update their software using 3.5-inch floppy disks.
Despite their obsolescence, floppy disks are not entirely dead. They are still available for purchase online, often from niche suppliers like Tom Persky’s Floppydisk.com. Persky’s business, which started as a software development house, morphed into a disk duplication service in the 1990s. Now it thrives by catering to a dwindling yet persistent market for floppy disks. As retail shelf space for these disks has vanished, Persky’s large inventory and online presence have made him a go-to source for individuals and businesses needing floppy disks.
Floppydisk.com supplies both new old stock and repurposed disks. Persky’s stock comes from various sources, including large purchases from countries like South Africa, Argentina, and Brazil. He also provides a recycling service for used disks, repurposing those that pass reformatting tests and selling others for artistic or promotional uses.
The clientele for floppy disks is diverse. About 10% of Persky’s customers are computer hobbyists looking to revive old games or systems. He says the bulk of his business comes from industrial customers who rely on floppy disks to operate machines built decades ago. These machines, designed to last for decades, still use floppy disks to ingest data for automation.
One common concern about floppy disks is their longevity. Persky notes that disks manufactured during the peak production years of the 1980s and 1990s are generally reliable. In contrast, disks made towards the end of their manufacturing run may be less so. Interestingly, well-maintained floppy disks can sometimes be more dependable than other storage media like USB drives or CDs, which can suffer from issues like “disc rot” over time. I have a few CDs that I burned in the late ’90s that have rotted out, yet many of my 40-year-old Apple II disks still read perfectly.
Floppydisk.com also offers data transfer services, helping customers recover old data from floppy disks, such as early drafts of books, financial records, and treasured photographs.
While the future of floppy disks is finite, given that no new disks are being produced, Persky remains optimistic. He acknowledges that the business may not last forever, but is confident that the current inventory will suffice for the foreseeable future.
The G5 is priced at $149 for the 256GB storage version (compensated affiliate link) and $200 for the 512GB variant. Unlike some of the larger mini PCs with two storage bays, this model offers only one. But both the M.2 2242 storage and the Bluetooth/Wi-Fi card are replaceable and upgradeable.
In terms of ports, the G5 is equipped with two USB 3.0 ports, a USB Type-C for power only, a gigabit Ethernet port, and two HDMI outputs that support 4K at 60Hz. Although it has fewer ports than other mini PCs, it remains practical for basic tasks.
Performance-wise, the G5 handles web browsing and video playback smoothly, even at 4K. It managed to play back a 4K YouTube video without dropped frames after initial buffering. The device’s fan is noticeable during intensive tasks, yet it maintains a manageable noise level.
For office tasks, the G5 is more than adequate, running Microsoft Word at 4K without lag. However, it lacked a VESA mount plate in the box, limiting its mounting options. Despite this, its small footprint makes it a convenient desktop addition.
Gaming on the G5 proved impressive for its size. Running a remastered version of “Dark Forces” at 1080p maintained 60fps. GTA V at 720p at its lowest settings hovered around 40 fps, showcasing its capability with older, resource-intensive games. Emulating PS2 games via PCSX2 delivered a consistent 60fps running at the default settings and the games’ native resolution.
Benchmarking on 3DMark TimeSpy resulted in a score of 487, surpassing similar mini PCs with the N100 processor. While it may not match Ryzen or Intel Core Ultra machines, its performance remains commendable for the price. On the 3DMark stress test, it achieved a passing grade of 97.9%, indicating stable performance under load despite its smaller fan having to work harder.
The G5, like many of the mini PCs we’ve looked at recently, comes with what appears to be a legitimately licensed and activated copy of Windows 11 Pro. No malware was detected during testing, and it allowed local account creation without needing a Microsoft account. I suspect they are buying up bulk OEM Pro licenses and assigning them to these PCs.
Linux performance on the G5 was seamless, with all hardware components functioning properly on the latest version of Ubuntu. As we saw recently with the N100, the G5’s slightly faster N97 chip gives it great potential as a Plex server. It is of course limited by its internal storage capacity, necessitating USB drives for expansion.
Overall, the GMKTec Nucbox G5 is a versatile mini PC, suitable for office tasks, light gaming, and media serving. Its compact size and decent performance make it an excellent secondary computer for various uses. But I still prefer the N100 based G3 as it offers more expandability.
Disclosure: The G5 was provided to the channel free of charge for this review. No other compensation was received, nor did anyone review or approve the review before it was uploaded. All opinions are my own.
A YouTube executive made a notable admission about the relevance of subscriptions on the platform, confirming that subscribers won’t see everything uploaded by a subscribed channel under most circumstances. I talk about this in my latest video.
It all started with a post on X from Rene Ritchie, the “YouTube liaison,” who responded to a query about the algorithm and its impact on visibility. Ritchie replied to a question from Curtis Judd, a content creator focused on audio and visual production tools who frequently hears from subscribers that they never see his new uploads.
Ritchie’s response shed light on YouTube’s current stance on subscriptions. He emphasized that YouTube doesn’t push content from creators; instead, it pulls content based on viewer behavior. Ritchie says this is largely dictated by time constraints on the viewer.
He adds that subscribers can always access content through the subscription feed, but YouTube hasn’t updated that feed in years. It is often a chaotic, chronological fire hose that works very differently across the various viewing platforms YouTube supports. The feed lacks sorting and filtering tools for those with a large number of subscriptions.
Moreover, the YouTube homepage recommendation algorithm considers various factors, such as how recently a viewer subscribed and their current viewing habits. This means that if a viewer subscribed to a channel years ago and doesn’t religiously watch every single video that channel uploads, YouTube might deprioritize those older subscriptions. This can lead to a situation where the platform essentially decides the subscription status on behalf of the viewer, rather than allowing users to manage their subscriptions actively.
This system presents significant challenges especially for channels like mine that cover a wide range of tech content. Not every subscriber will be interested in every upload, which frequently demotes me to the bottom of the barrel for recommendations. The result is that a vast majority of my nearly 360,000 subscribers are never even given the choice to watch my content – they never see the thumbnail.
Data from my channel shows a marked decline in thumbnail impressions from the subscription feed over the years, despite a steadily growing subscriber base. This indicates that fewer people are using the subscription tab because it just isn’t useful to most users.
I’ve developed alternative ways to connect with my audience, such as email lists and a dedicated blog. These platforms allow me to share all my content regularly, giving the viewer the choice to watch or not. I’ve also set up a short URL, at https://lon.tv/latest that viewers can bookmark to always see my most recent uploads.
This brings us back to the subscription feed being a hot mess. Because it is mostly a fire hose of content, users with large feeds will only effectively see content posted around the time they log in. If a viewer’s favorite creator posts at a time opposite that viewer’s regular viewing hours, the post will get buried – especially if the viewer has a large subscription list.
The user interface varies significantly across devices too. On desktop, it’s a mix of live channels and a chronological list of videos from subscribed channels with very few fitting on screen at once. On a TV, there’s a semblance of organization with frequently watched channels appearing at the top but no way to control what channels get pinned to the top of that list. The mobile version offers filters like ‘Live,’ ‘videos only,’ and ‘Continue Watching’, but the overall experience is still difficult to manage – especially if the viewer is subscribed to channels that dump a whole bunch of content at once. The mobile interface also has a “most relevant” list of channels pinned to the top of the screen but lacks a means of deciding which channels go in that spot.
To navigate these limitations, I’ve resorted to using an RSS reader, a tool that some might find archaic but is surprisingly effective and still a backbone of digital content distribution. RSS allows me to categorize and follow my favorite channels along with the ability to filter those channels by topic. This method ensures I never miss content from creators I genuinely care about even if I don’t watch every single one of their videos. The best part is that consuming videos from the RSS feed helps improve my recommendations as I’m consuming content that genuinely interests me.
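If you want to try the RSS route yourself, every YouTube channel publishes an Atom feed at https://www.youtube.com/feeds/videos.xml?channel_id=CHANNEL_ID, which any RSS reader can subscribe to. Here’s a small Python sketch that pulls titles and links out of a feed with that shape; the sample XML below is a made-up stand-in so the code runs without hitting the network.

```python
import xml.etree.ElementTree as ET

# YouTube channel feeds are standard Atom documents served from:
#   https://www.youtube.com/feeds/videos.xml?channel_id=<CHANNEL_ID>
ATOM = "{http://www.w3.org/2005/Atom}"

def latest_videos(feed_xml):
    """Return (title, link) pairs for each <entry> in a channel feed."""
    root = ET.fromstring(feed_xml)
    videos = []
    for entry in root.findall(f"{ATOM}entry"):
        title = entry.findtext(f"{ATOM}title")
        link = entry.find(f"{ATOM}link").attrib["href"]
        videos.append((title, link))
    return videos

# Hypothetical sample mimicking the real feed's structure
SAMPLE = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Lon.TV</title>
  <entry>
    <title>Example review</title>
    <link rel="alternate" href="https://www.youtube.com/watch?v=abc123"/>
  </entry>
</feed>"""

print(latest_videos(SAMPLE))
# [('Example review', 'https://www.youtube.com/watch?v=abc123')]
```

In practice you’d fetch the feed URL for each channel you follow and feed the response body to the parser; most people will be better served just pasting the feed URL into an off-the-shelf RSS reader.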
Despite these efforts, the core issue remains. YouTube’s current system of managing subscriptions is flawed, allowing the subs tab feature to rot away in a subtle effort to push users to use the recommendations instead. While these recommendations can help discover new content, they should not supplant a well-functioning subscription system. Users deserve the ability to tailor their viewing experience, deciding which channels to prioritize and de-prioritize if they so choose.
So what would I fix? First, give the user the ability to maintain a list of favorites at the top of the sub feed (similar to the mobile and TV experience). We already have a “bell” to click for notifications so bell clicked channels would be an easy way to implement this feature.
Next, YouTube should “stack” videos in the sub feed from channels that dump a bunch of content all at once. A great example is my local news channels, which upload all of their news packages at the same time and fill my feed. This is apparently a feature YouTube is “experimenting” with – but like most sub feed experiments, these rarely come to fruition.
A “stretch goal” for me would also be to have some <gasp> algorithmic filtering options based on the video topic. This would be similar to how recommendations can be sorted by topic. This might help re-surface channels a viewer has missed based on what kind of content mood they are in.
But that’s it – YouTube really doesn’t need to do much to make the sub feed just a little bit better to attract more usage. But it’s likely they don’t want to do anything that takes viewers off the home page.
My video generated a huge response from viewers. Many agreed with my concerns and many others posted to say that the current system is working just fine for them. Curious, I put up a poll on my community tab (unscientific of course) to see how many subscriptions those who liked the current sub feed have on their list. The result? More than half, 54%, have fewer than 100! I have well over 1,000 that I’ve accumulated over nearly 20 years.
Improving the subscription tab would not only enhance user satisfaction but also help improve recommendations for content discovery. It’s a win-win for everyone, in my humble opinion. So why not do it?
I recently had the opportunity to review the HP LaserJet M209dw, a budget-friendly laser printer ideal for those who don’t print frequently and are looking for an economical option. The printer, priced at around $120 (compensated affiliate link), features a low cost per print, with replacement cartridges at $54 for 1,100 pages and $88 for 2,400 pages. Interestingly, HP also offers an Instant Ink subscription, although for this model, purchasing high-capacity cartridges outright proves more economical. You can see it in action in my latest review.
One of the standout features of laser printers like the M209dw is their reliability for infrequent printing – even if it’s left idle for months. Unlike inkjet printers that often require nozzle cleaning after prolonged inactivity, laser printers remain functional without the need for such maintenance. This makes them particularly suitable for environments where printing needs are sporadic.
In terms of performance, the M209dw boasts a print speed of 30 pages per minute, which is quite impressive for its price range. During my tests, the printer handled a variety of print jobs over a Wi-Fi network efficiently. Despite being a bit noisier than some other laser printers, the speed and quality of the output were commendable. The standard print mode delivers sharp text and decent B&W photos at 600 DPI, which is sufficient for most black-and-white printing needs.
Setting up the printer was straightforward, thanks to the HP smartphone app, which facilitated the wireless connection. Additionally, the printer can connect over USB or Ethernet. Once connected, it was effortlessly recognized by various devices, including Macs, Windows PCs, iPads, iPhones, Android phones, and even Chromebooks.
Another notable feature is the duplex printing capability, which allows automatic double-sided printing. This function works smoothly, albeit a bit slower than single-sided printing as it has to pull the paper back in to flip it over.
The M209dw’s paper handling capabilities include a 150-sheet tray and a 100-sheet output tray. While it can handle standard paper sizes efficiently, its design may not be ideal for printing on smaller items like index cards or envelopes, as there is no manual feeder for straight-through printing.
One important consideration for potential buyers is HP’s aggressive stance on generic cartridges. The company employs dynamic security measures to prevent the use of third-party cartridges, frequently updating firmware to enforce this policy. This could be a drawback for those who prefer using cheaper, non-HP cartridges. Additionally, the persistent promotion of the Instant Ink subscription inside the app, along with a hard-to-remove promotional sticker on the printer itself, might be an annoyance.
Despite these minor drawbacks, the HP LaserJet M209dw stands out as a cost-effective, reliable option for users with occasional printing needs. Its speed, print quality, and ease of setup make it a compelling choice in the budget laser printer category.
Disclosure: This item came in free of charge from the online retailer Flip. However, neither Flip nor HP reviewed or approved this review before it was uploaded, no other compensation was received, and all opinions are my own.
As I write this post we are in the middle of the “Steam Summer Sale,” where the popular gaming platform offers deep discounts on thousands of PC games. Like many gamers of a certain age, I find myself buying cheap games to add to my library but never getting around to actually playing them.
A recent analysis by PCGamesN highlights a staggering amount of unplayed games worth billions of dollars in users’ libraries. This got me curious about my own Steam library, and I decided to delve into this issue further. This is the subject of my latest video.
PCGamesN estimates that there are approximately $1.9 billion worth of unplayed games in publicly accessible Steam profiles. This figure only accounts for about 10% of the profiles in the Steam ID database, suggesting that the actual amount of unplayed games could be significantly higher. The variability in game prices and sales makes it difficult to pinpoint an exact value of these unplayed titles, but it’s safe to assume it’s at least several hundred million dollars.
Curious about my own collection, I discovered that I have around $2,000 worth of unplayed games in my Steam library, accounting for just over 50% of my overall library. Many of these games were acquired through bundles or sales, often at significantly reduced prices, so I think my actual cost is much lower. Despite having played only 48% of my games, I continue to add more to my library.
You can check out your own “pile of shame” by making your Steam profile public and searching for it on SteamIDfinder.com. You can also keep track of unplayed games inside of the Steam interface by using their filtering options as demoed in my video. You can turn your filtered search into a “dynamic collection” that will automatically update the list as you work your way through the unplayed games.
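For the programmatically inclined, Steam’s Web API has an IPlayerService/GetOwnedGames endpoint that reports each game’s total playtime in minutes (playtime_forever), which is enough to identify the unplayed pile yourself. Prices are not part of that response, so the price map in this Python sketch is a hypothetical illustration, as is the sample library data.

```python
def pile_of_shame(owned_games, prices):
    """Split a library into played/unplayed and total the unplayed value.

    owned_games mirrors the "games" list in a GetOwnedGames response;
    prices is a hypothetical appid -> price map (not provided by the API).
    """
    unplayed = [g for g in owned_games if g["playtime_forever"] == 0]
    value = sum(prices.get(g["appid"], 0) for g in unplayed)
    share = len(unplayed) / len(owned_games) if owned_games else 0.0
    return len(unplayed), share, value

# Made-up sample data shaped like the API's response["games"] list
library = [
    {"appid": 620, "name": "Portal 2", "playtime_forever": 740},
    {"appid": 570, "name": "Dota 2", "playtime_forever": 0},
    {"appid": 400, "name": "Portal", "playtime_forever": 0},
]
prices = {570: 0.0, 400: 9.99}  # hypothetical current prices

count, share, value = pile_of_shame(library, prices)
print(f"{count} unplayed ({share:.0%}), worth ${value:.2f}")
# 2 unplayed (67%), worth $9.99
```

In a real script you’d fetch the library with your Steam Web API key and a public profile ID, then feed the returned game list into the function.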
One downside of these digital libraries is the lack of true ownership. I can’t sell these unplayed games like I could a CD- or cartridge-based game. What’s worse, any issue with the account can result in losing access to all purchased content – after all, you’re merely purchasing a revocable license to play the game.
One of the staples of my mid-life crisis set is my beloved Apple IIgs, a computer I purchased second hand from a guy in town when I was 13 years old. In a recent livestream, I pulled the IIgs off its perch to replace its power supply and get it back into working condition.
Thankfully there are some great resources out there to rectify these power supply problems and get the old hardware working again. I opted to send mine into Reactive Micro and take advantage of their power supply rebuild service. I quickly received my power supply back with new, modern guts that should keep this old computer working for decades to come.
The Apple IIgs looked and behaved like a Mac but was a late-’80s upgrade to the Apple II line with enhanced graphics and sound. In addition to running its own enhanced 16-bit software, it was also fully backwards compatible with the 8-bit software for the earlier Apple II models.
I pushed the IIgs to the limit when I was 13 with all sorts of upgrades like adding a telephone-based modem to host a BBS, and even installing an emulator board to turn it into an IBM-XT compatible PC!
But I still wanted to do more to upgrade my IIgs back in the day. For the entire duration of it being my “daily driver,” it lacked a hard disk drive and I had to run software off of its 3.5″ and 5.25″ floppy disk drives.
It also only had a single megabyte of RAM. While that was more than enough for the older Apple II software, the more advanced GS applications that ran in the Mac-like GS/OS operating system needed more memory. I was completely locked out of the last and greatest GS/OS System 6.0 release in 1992 as I needed 1.25MB of RAM vs. the single meg on mine.
Back in 2007 I pulled the IIgs out of my Mom’s basement and embarked on a project to bring it up to the level I always wanted it to be at while also archiving all of my floppy disks.
I managed to find an upgraded TransWarp GS accelerator board for a great price on eBay that brought the machine up to a whopping 12 MHz from its original 2.8 MHz. I also added a 4-megabyte RAM upgrade board.
Around this time hobbyists began developing some amazing new hardware for the Apple II. I picked up the “CFFA 3000” that emulates a IIgs hard drive on Compact Flash cards & USB drives along with the “Uthernet II” ethernet card that connected my IIgs to my local network and the Internet!
If you have an old IIgs kicking around, you’ll find Reactive Micro to be a great source for upgrade hardware. If you’re looking for software check out “What is the Apple IIgs,” an awesome archive of most of the software available for the IIgs. They also have some pre-compiled hard disk images on the home page that will boot up in emulators along with the real hardware.
My YouTube channel is about to celebrate its 12th anniversary, and I wanted to share some insights and experiences on where things go from here. Despite many old-time YouTubers like me quitting the platform, I’m here to stay, even though the landscape has changed significantly. It’s getting tougher for tech creators, but the work is still fun and rewarding. See more in my latest video.
I first want to extend my thanks to all of you who support my work. Whether you watch my videos, donate through various platforms, or participate in live streams, your support is invaluable. The YouTube algorithm makes it harder to reach subscribers, so I appreciate those who actively seek out my content.
Consumer habits have shifted drastically over the last decade or so. Products like Blu-ray players, music players, and even computers are purchased less frequently as the convenience of streaming services and multifunctional devices like smartphones dominates. This trend affects my channel’s content and viewership, as consumers are looking to purchase fewer individual devices. The competition for reviews is fierce, especially for products consumers do want to buy, like smartphones, making it challenging to attract a broad audience and the revenue streams that come with it.
The tech industry also faces struggles trying to broaden the gadget marketplace. For instance, despite selling 20 million Quest headsets, Meta finds that users don’t engage with them regularly. Now manufacturers are focused on AI, but the hardware is just not there to do any kind of meaningful work.
Looking ahead, I plan to continue producing content while adapting to these changes. So far this year I gained 8,700 new subscribers, bringing the total close to 360,000. Despite a decline in viewership post-pandemic, my revenue remains lower but relatively stable, thanks in part to diverse income sources like YouTube, Amazon, sponsorships, and affiliate marketing.
I’m also experimenting with platforms like Flip for shorter product reviews. Those videos are landing on my extras channel, rebranded as Lon’s Gadget Picks, as I continue to try and find a good model for that secondary outlet.
I’m also considering launching a new channel focused on video production, a niche that doesn’t perform well on my main channel. Live streaming is another area I plan to expand, leveraging platforms like Amazon for additional content distribution.
To manage the overflow of tech gadgets, I’m selling items through my store and live auctions & giveaways on Whatnot (affiliate link). Additionally, my email lists offer weekly and daily updates, keeping subscribers informed about new content and store additions.
Twelve years on YouTube have been a remarkable experience, full of learning and adaptation. If you’re curious, this video of a Lazer Tag device from 2012 is what I consider my first official Lon.TV video.
I’m committed to keeping this going full time as long as it’s sustainable. Even if a course change is needed, rest assured I’ll still be sharing my content on this platform for many years to come.
I recently had the opportunity to test out the new Lenovo Yoga Slim 7x, an ARM-based Windows laptop featuring the Snapdragon X Elite processor. You can check it out in my latest review.
The Yoga Slim 7x is priced at around $1,200 (compensated affiliate link). My review loaner came equipped with 16GB of LPDDR5x RAM and a 512GB NVMe SSD. There’s also a ThinkPad variant available at a slightly higher cost, offering similar performance.
The core distinction between this laptop and its Intel or AMD counterparts lies in its ARM architecture. While Windows has supported ARM processors for years, it often came at the expense of performance and compatibility. However, the latest Snapdragon chips have significantly closed this performance gap, delivering a much-improved experience, albeit with lingering compatibility issues. Not all software runs seamlessly, and the success of running different applications can be unpredictable.
One of the major advantages of this ARM-based machine is its battery life. The Slim 7x houses a 70-watt-hour battery, comparable to those found in Apple’s MacBook Pros. In my tests, it performed admirably, offering battery life on par with ARM-based MacBooks, making it a solid choice for users prioritizing battery life.
One of its standout features is the 14.5-inch OLED display with a 3K resolution (2944×1840). This screen supports Dolby Vision and boasts a peak brightness of 1,000 nits, with a typical brightness of 500 nits. The display also supports a 90Hz refresh rate, which would be great for gaming if game compatibility weren’t so poor. But in day-to-day use the 90Hz refresh does make the computer feel faster and more responsive.
The laptop’s build quality is impressive, featuring an all-aluminum design that weighs in at 2.82 pounds (1.28 kg). The keyboard is well-designed, with good key travel and backlighting, making typing a comfortable experience. The large trackpad is responsive, although I preferred turning off the tap-to-click function for better accuracy.
In terms of connectivity, the Yoga Slim 7x has three USB 4.0 ports, all capable of 40Gbps data transfer and supporting display output and power delivery. Unlike the HP Omnibook X laptop I reviewed earlier, this Lenovo device handled a Thunderbolt-connected external hard drive reliably, although the performance was slightly below typical Thunderbolt speeds.
For those considering external GPUs, the current state of ARM drivers does not support this functionality, although future updates and driver compatibility may change this. The 1080p webcam is adequate for video calls, though not exceptional, and the speakers are sufficient for conference calls and casual media consumption.
The laptop’s performance in web browsing and Microsoft Office applications is excellent. The ARM optimized Brave web browser, for instance, ran smoothly, and media playback was flawless, including 60fps YouTube videos. The device also supports Wi-Fi 7, ensuring compatibility with the latest wireless standards.
For more demanding tasks like video editing, the ARM-optimized version of DaVinci Resolve performs well, handling a simple 4K 60fps project without issue and with performance similar to Intel and AMD-based machines. However, this level of use does reduce battery life more significantly.
Gaming on ARM-based laptops remains problematic due to compatibility issues. Titles like No Man’s Sky, Red Dead Redemption 2, and Doom Eternal either failed to run or encountered significant glitches when they did load up.
While Linux is not yet compatible with the Snapdragon X Elite processors, Qualcomm has promised future support.
The Lenovo Yoga Slim 7x shows considerable promise, but its ARM architecture still faces significant compatibility hurdles. It’s suitable for users primarily working with Office 365 and web-based applications, providing excellent performance and battery life. However, those requiring a broader range of software compatibility may still find Intel or AMD-based laptops to be a safer bet as compatibility is just not guaranteed.
Installing Plex on Linux is easier than you might think, even on a low-cost mini PC like the GMKtec G3 with its Intel N100 processor (compensated affiliate link). The goal of this tutorial is to set up the Plex Media Server without diving into complex command lines, making it accessible even for those not well-versed in Linux.
I chose Ubuntu 24.04, which is known for its user-friendly setup. I installed Ubuntu on an M.2 SATA SSD inside the G3, allowing me to dual boot between Windows and Linux. My Windows installation lives on the NVMe drive in the G3’s other M.2 slot.
For this example I have an external USB SSD attached with two movies and a season of a television show for demonstration purposes. I suggest formatting the drive in the exFAT format which will simplify access permissions.
The installation process, guided by a thorough online tutorial, involves using a USB drive to boot and install the OS. Once set up, the desktop environment is ready for use, with an app center to facilitate installing additional software like Plex.
To install Plex, I accessed the app center, searched for Plex Media Server, and installed it with a click. After logging into my Plex account and naming the server, it was time to add media libraries. The process involves pointing Plex to the appropriate folders on the external drive where my media is stored. This setup is straightforward, similar to what one would experience on Windows.
Hardware transcoding is a notable feature that works efficiently on Linux. Unlike Windows, Linux supports hardware HDR-to-SDR tone mapping, significantly improving performance when transcoding large HDR 4K Blu-ray files into much smaller streams for remote viewing.
Testing with a 4K HDR movie and a TV show episode simultaneously, the mini PC handled the transcoding with ease, utilizing less than 35% of system resources. By contrast, the Windows version of the Plex Media Server ground to a halt when the 4K movie began transcoding because its hardware transcoder does not accelerate the tone mapping process. On Windows, only Nvidia GPUs are currently supported for hardware tone mapping.
Updating the system and the Plex server is managed through the app center, ensuring the software remains current. I also detailed in the video how to back up the installation by navigating to the Plex data stored in /var/lib/plexmediaserver/Library/Application Support/Plex Media Server/.
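For those who want to script that backup, here’s a minimal sketch. The Plex data path is the one from the video; if your install places its data elsewhere, adjust accordingly, and stop the server first so the database isn’t archived mid-write. To keep it safe to dry-run, this demo points at a stand-in directory rather than the real one:

```shell
# Stand-in for the real Plex data directory. For an actual backup, set this to
# "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server"
# and stop the server first (sudo systemctl stop plexmediaserver).
PLEX_DATA="/tmp/plex-demo/Plex Media Server"
mkdir -p "$PLEX_DATA"
echo "demo" > "$PLEX_DATA/Preferences.xml"

# Archive the whole data directory, preserving its top-level folder name
BACKUP="/tmp/plex-backup.tar.gz"
tar -czf "$BACKUP" -C "$(dirname "$PLEX_DATA")" "$(basename "$PLEX_DATA")"
tar -tzf "$BACKUP"   # list the archive contents to verify
```

Restoring is just the reverse: stop the server, extract the archive back to the same location, restore ownership to the Plex service account, and start the server again.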
Personally, I’ve found Docker to be the best way to manage Plex on Linux, as it makes the installation easier to back up and migrate. But Docker does bring more installation complexity. In the future we might take unRAID, which integrates Docker in a very user-friendly way, out for a spin. Stay tuned!
Disclosure: This was a paid sponsorship by Plex. However they did not review or approve this video before it was uploaded and all opinions are my own.
A recent FCC rule proposal has sparked significant debate between broadcasters and cable companies over retransmission fees. This proposed ruling, initially intended to require customer rebates when local channels are pulled from a lineup, has evolved into a contentious issue with potential implications for the ATSC 3.0 transition and the desire of broadcasters to encrypt their signals. You can learn all about it in my latest video.
The scenario contemplated by the proposed ruling is becoming more and more common as broadcasters continue to raise their rates and cable companies are pushing back and pulling local channels from lineups. Consumers, who continue to pay their cable bills despite losing access to these channels, are left footing the bill while providers potentially profit.
An executive order from the Biden Administration aimed at addressing such consumer issues has led to this and other similar actions across many industries. A recent high profile example involves the FTC’s recent actions against Adobe for hidden fees and restrictive cancellation policies.
Cable companies, predictably, oppose the FCC’s proposed rule. Verizon and other industry players argue that the rule could harm consumers by giving broadcasters additional leverage in negotiations. The National Cable & Telecommunications Association (NCTA) claims that calculating the cost of carrying networks is complex, despite providers like Comcast itemizing these fees on customer bills.
Dish Network’s response stands out, proposing reforms to the retransmission consent process while also opposing the FCC’s proposal. Dish highlights that many broadcasters demand carriage of additional, non-broadcast channels they own as part of the agreement to carry local affiliates. Dish also suggests allowing cable providers to import out-of-market signals as a leverage tactic and advocates for a la carte pricing to save consumers money by letting them choose their own lineup.
The Dish Network filing offers a window into what is usually hidden from consumer and government oversight. The carriage and retransmission agreements between broadcasters and cable distributors are always done in private without any involvement or oversight from consumers or government regulators.
Broadcasters counter that pay-TV providers never pass along savings to consumers. They argue that government intervention is unnecessary, as private negotiations should dictate terms. Dish rebuts by pointing out the dramatic increase in retransmission fees, which have surged by 2,600% since 2009, far outpacing inflation and economic growth.
A critical aspect not discussed in this debate is consumers’ ability to receive free over-the-air TV using antennas. Broadcasters are complicating this with the new ATSC 3.0 standard by encrypting signals, which necessitates specialized, licensed equipment. This move seems aimed at pushing consumers towards paid cable subscriptions.
Efforts are underway to oppose this encryption. A significant number of consumers have filed comments with the FCC and signed my petition urging the FCC to ban encrypting free over the air television.
The new ARM-based Copilot+ PCs (compensated affiliate link) have generated a lot of buzz, but the reality of their exclusive AI features is far less impressive than marketed. HP loaned me their Omnibook X 14 for my recent review and I put Microsoft’s heavily marketed AI features to the test. The verdict? Meh. See more in this video.
Starting with the Paint application, the “Co-create” feature allows users to enhance their drawings with AI, with images generated on-device by the new Snapdragon X Elite processor. The feature will generate artwork in a number of different styles based on the user’s original drawing, but it also requires a text prompt.
Because the image generation happens on device, the Snapdragon isn’t capable of generating the types of beautiful images found in cloud-based solutions like OpenAI’s DALL-E. Still, the images generate quickly and without the complex UI of some of the open source on-device solutions I’ve played with. In short, it’s a gee-whiz feature that sorta works but is not very useful.
My big gripe is that although this AI does its work on-device, it still requires an Internet connection to execute. The reason? Microsoft’s servers review each request to make sure users aren’t doing something the company finds inappropriate.
In the Photos app, the AI enhancement feature can add fun background elements to portraits but struggles with more complex tasks. Like the Co-create feature, it requires a prompt (and server approval), but it does its work on an existing photo. The results are lackluster at best: images without people often get pretty badly mangled, and when people are present the AI will only add background effects, as safety policies don’t allow any manipulation of images with human faces.
Live Captions, another Copilot+ feature, offers real-time translation of video and audio content. This feature stands out as genuinely useful, accurately translating spoken language during video playback and calls. However, it only translates into English and does not support two-way translation just yet.
Copilot+ adds a few more webcam Studio Effects like eye contact adjustment, making it appear as though the user is looking at the camera even when they’re not. This subtle feature, along with improved background blur and creative filters, enhances video call experiences but remains a minor improvement on a feature already found in recent Intel and AMD-based devices.
Notably absent from these devices is what was supposed to be the flagship AI feature called Recall. This feature takes screenshots of the user’s activity along with associated documents and applications open at the time and allows the user to search through their history using plain language prompts. If, for example, a user was planning a trip and lost track of a website they had visited a simple text query could pull it out of the usage history.
Recall is a great task for the limited capacity of the on-board NPU but it raised a number of privacy and security concerns that forced Microsoft to pause the feature’s release. Without Recall the AI functionality Microsoft built their Copilot+ PC marketing campaign around falls way short in this reviewer’s opinion.
Overall, while the AI features on Copilot+ PCs are interesting, they are not compelling enough to justify choosing these devices over traditional Intel or AMD-based machines. The promise of superior battery life and performance improvements will be the real test for these ARM machines. You can check out my review of the HP Omnibook X here to see how well it does in those key areas.
I recently had the chance to test out the new HP Omnibook X14, one of the first PCs marketed under Microsoft’s Copilot+ line. This model is powered by the Snapdragon X Elite processor, an ARM-based chip similar to those found in smartphones and MacBooks. Historically, Windows on ARM has struggled with performance and compatibility issues, but this new chip aims to bridge that gap.
The Omnibook X14 is priced at around $1,149 (compensated affiliate link), placing it in the mid-range of the laptop market. It boasts several AI features, although these seem underwhelming and not a significant reason to choose this model over Intel or AMD alternatives. I demoed those features in my previous video. My focus for this review is on its performance as a standard laptop.
In terms of hardware, it comes with 16 GB of LPDDR5x RAM and a 512 GB NVMe solid-state drive. The 14-inch display runs at a 2.2k resolution and is a touch panel, although it doesn’t support full 360-degree rotation. The build quality is solid, with an aluminum body and a weight just under 3 pounds. However, the keyboard tends to lift with the display, a minor inconvenience not seen in more premium laptops.
The Omnibook features a decent 1440p webcam with a manual shutter, suitable for video calls. The speakers are adequate for conference calls but not impressive for music. The keyboard is comfortable, with good key spacing and travel, and the trackpad works well once tap-to-click is disabled. It supports facial recognition for quick logins but lacks a fingerprint reader. In terms of ports, it includes a 10 gigabit USB-C port and a faster 40 gigabit USB 4.0 port, although compatibility with Thunderbolt devices was hit-or-miss in my testing on the USB 4.0 port.
Performance-wise, the Omnibook X14 handles productivity tasks well. Running Microsoft Word and browsing with the Brave browser was smooth and responsive, comparable to mid-range Intel or AMD laptops that cost about the same. This is a major improvement over prior ARM Windows PCs I’ve tested, where performance was well below comparably priced PCs.
Battery life is the standout feature here, with reports suggesting up to 24 hours on a single charge. In my experience, it easily handled a full day of work without needing a recharge, aligning it with the longevity seen in MacBook Air models. I have yet to hit a battery warning, even when leaving it unplugged throughout the day.
For more demanding tasks, such as video editing, the laptop performed admirably with DaVinci Resolve when using the ARM-optimized version of the software. This marks a significant improvement over previous ARM-based machines, although the availability of optimized applications like this one remains a limiting factor.
Windows is doing a much better job of emulating Intel and AMD x86/x64 code on ARM processors – applications run without much fuss provided they’re relatively simple. But, as before, gaming remains a weak point. Popular titles like No Man’s Sky and Red Dead Redemption 2 failed to run properly due to compatibility issues. Doom Eternal did boot up but experienced significant glitches and inconsistent performance. All three games run at surprisingly good framerates on similarly priced Intel and AMD hardware.
Benchmark tests, such as 3DMark Wildlife, suggest there is potential for great performance once games are optimized for the chipset. But, like the video editing example above, developers have to make some effort to port their games over to the new platform to realize the performance potential. The Windows emulation layer simply doesn’t cut it.
I also attempted to run Linux on the Omnibook but was unsuccessful. The ARM version of Ubuntu did not fully boot, though it reached the GRUB screen. The Omnibook does use a standard UEFI BIOS, so it should be possible to get Linux up and running in the very near future.
The HP Omnibook X and the other Snapdragon X Elite PCs released this week represent a step forward for Windows running on ARM processors. Previous iterations sacrificed both performance and compatibility for battery life, but these new machines only struggle with compatibility.
For now, if your needs are basic productivity applications and super long battery life, the Omnibook X is a solid choice. However, if gaming or specialized software is a priority, you’ll want to stick with more traditional hardware unless there’s a specific ARM port of the application. By comparison, Intel and AMD battery life is much better than it was a few years ago (although not as good as the new Copilot+ line), and the latest iteration of x64 processors also have NPU hardware acceleration for AI applications.
Microsoft and its partners have started shipping Copilot+ PCs equipped with the new Qualcomm Snapdragon X Elite ARM processors. Microsoft and Qualcomm claim that these new chips are finally comparable to the ARM-based Apple silicon processors found in Macs. This is promising since, to date, the best-performing ARM-based Windows PCs in my experience are Macs!
I attended a Lenovo press event in New York City yesterday and got some hands-on time with two of their new devices: the Yoga Slim 7x and the new ThinkPad T14s Gen 6. The Yoga is pictured below:
In addition to the battery longevity benefits ARM processors have always brought to Windows, these new PCs promise to add some limited on-device generative AI functionality – most of it centered around image generation.
One demo involved a new feature being added to Microsoft Paint that allows the user to draw a rudimentary image and then have the on-board AI generate a much nicer looking version in a number of different styles.
The user still has to enter a text prompt although the AI will take into account the placement of objects in the original image. The photo above doesn’t show that happening, but I did see it correctly place the tree and sun in subsequent image generations. The Windows image viewer also brings similar generative features to images and photos.
Copilot+ PCs will get additional OS-level webcam controls that allow for adding real-time filters and a few other neat effects not found on other PCs.
According to a Microsoft blog post, additional on-device AI features are available from third-party developers, including some generative text capabilities. I did find it odd that Microsoft did not include any generative text features at the OS level like Google and Apple recently announced in their operating systems.
I also saw a short demo of the now infamous Microsoft Recall feature that takes snapshots of user activity and provides the ability to quickly go back to documents, applications and websites with a simple plain English query. Clearly this was going to be the centerpiece of the new embedded AI features, but security concerns forced Microsoft to hold off on its release for now.
Without Recall, the AI features are a bit underwhelming and currently limited to these new Snapdragon X Elite PCs. Intel and AMD both have embedded NPUs on their new processors, so over time Copilot+ features will likely make their way across the Windows ecosystem, which will be necessary for widespread developer adoption.
While these AI features will all execute on device, user queries do apparently get sent to Microsoft to prevent inappropriate use. When I get these machines in for review I plan to explore exactly what type of monitoring will be going on with them.
From a performance perspective the new Snapdragon X Elite feels like a nice step up from the ARM PCs I’ve looked at previously. While the prior attempts delivered great battery life, performance was lackluster especially for applications that were not specifically written for the ARM architecture. This will be another area we’ll explore in my upcoming reviews.
You can see a lineup of Copilot+ PCs at Best Buy (compensated affiliate link). Most are selling at or above $1,000. Both HP and Lenovo are sending me loaner units to play with. Stay tuned!
AWARN, an organization dedicated to standardizing television emergency alerts, has been instrumental in developing parts of the ATSC 3.0 standard. Their goal is to ensure that emergency alerts are consistent nationwide, allowing people to receive critical information in times of crisis. Improved emergency alerts have been one of the key selling points the industry is making in favor of adoption.
Like everything related to the ATSC 3.0 rollout, not much progress has been made in actually getting these alerts to work. While the industry worked quickly to encrypt their signals to protect revenues, everything else appears to be falling by the wayside. This is the subject of my latest video.
John Lawson, AWARN’s executive director, told me that both broadcasters and the FCC need to provide some leadership to get this superior alert technology ready for the transition:
“Several major broadcast companies highlighted advanced alerting as the key benefit of NextGen TV when they filed comments requesting that the Commission approve voluntary transmission in ATSC 3.0. Chairman Pai thanked me personally for the role of AWARN in getting him to three votes for approval. But Sinclair and Capitol Broadcasting are really the only two broadcast companies making investments in advanced alerting since then. This inertia is exacerbated by a lack of leadership on the issue from the FCC.”
-Statement from John Lawson
Currently, emergency alerts are not being transmitted via ATSC 3.0 in most if not all markets, as demonstrated by WNY Weather on YouTube. This could pose a significant risk during emergencies, when cellular networks are often the first to fail.
In a recent FCC Filing, AWARN showed examples of widespread cell tower outages during hurricanes in Florida and Louisiana but very few TV stations getting knocked off the air.
ATSC 3.0 promises enhanced alert features like geo-targeting to prevent “over alerting,” rich media content, device wakeup capabilities and more, which are crucial for effective emergency communication. These features can provide detailed information, such as evacuation routes and shelter locations, directly to affected individuals. Despite this potential, the lack of a standardized approach means these capabilities remain underutilized.
AWARN’s presentation to the FCC included practical suggestions, such as the use of battery-powered receivers for low-income households that might not have access to other forms of media. These receivers could ensure that everyone receives emergency alerts, regardless of their financial situation. They also pointed out that current set-top boxes like the ADTH and Zinwell devices could support these alerts, though no broadcasters are transmitting them yet.
The promise of ATSC 3.0 in improving emergency alerts remains unfulfilled due to a combination of industry priorities and a lack of interest by regulators for any part of the ATSC 3.0 rollout. The technology is available, but without a coordinated effort, its life-saving potential will not be realized.
After hearing from viewers about Retrobat, I decided to explore this one-click installer for retro game emulators. Retrobat supports a vast array of systems and offers a simple installation process, making it easy to organize and manage games with just a game controller. You can see it in action in my latest video.
One appealing feature is its portability; by installing it on an external hard drive, I can carry my configurations, save games, and save states between different computers seamlessly.
I started by downloading Retrobat from its website and proceeded with the installation, opting to place it on an external drive for portability. The installation was straightforward, involving a typical Windows setup process. Once installed, the software created essential folders like BIOS and ROMs on my drive. I began by adding some Sega Genesis games, as they do not require BIOS files to run. After copying the ROM files to the appropriate folder, I launched Retrobat.
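For reference, the folder structure on the drive ends up looking roughly like this. The commands below just recreate it with placeholder files for illustration — “megadrive” is the roms subfolder Retrobat uses for Genesis/Mega Drive titles, while the BIOS filename is purely a made-up example (Retrobat tells you the exact files each system needs):

```shell
# Illustrative recreation of the folders Retrobat sets up on the external drive
RB="/tmp/retrobat-demo"
mkdir -p "$RB/bios" "$RB/roms/megadrive" "$RB/roms/3do"

# Genesis/Mega Drive games drop straight into their roms subfolder -- no BIOS needed
touch "$RB/roms/megadrive/example-game.md"

# Systems like the 3DO also need BIOS files placed in the bios folder
# (hypothetical filename; check Retrobat's instructions for the real ones)
touch "$RB/bios/example-3do-bios.bin"

find "$RB" -type f
```

Once the ROM (and any required BIOS) files are in place, Retrobat picks them up on the next launch with no further setup.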
The initial boot of Retrobat was smooth, and my games appeared in the menu without any additional configuration. The interface even applied a CRT-like curvature to the display, which can be customized or disabled based on preference. Using the scraper feature, I quickly matched metadata and box art to my games. Game manuals were also added to the interface thanks to the Screenscraper database.
For systems requiring BIOS files, like the 3DO, Retrobat provided clear instructions on obtaining and placing these files in the correct directory. Once the BIOS was added, games from that system ran without issue.
Retrobat also manages controller profiles, so in almost every instance no up-front configuration is required. Even hotkeys like save states tend to work the same no matter which emulator Retrobat summons to play a game.
The best part is that when I moved my USB SSD to another computer, everything picked up right where I left off. All of the metadata, interface preferences, and even save states carried over seamlessly.
Retrobat simplifies the emulation experience on Windows PCs, offering an easy-to-use interface and extensive customization options. Its portability makes it an excellent choice for those who want to enjoy retro gaming across multiple devices without repeatedly configuring settings.
For a long time, Google’s Pixel phones, including the flagship models, lacked the ability to connect directly to an external display via their USB-C ports. Users had to rely on Chromecast for screen mirroring.
That all changed this week with a new “feature drop” for the new Pixel 8, 8a, and 8 Pro phones which can now output video directly through a USB-C to HDMI or DisplayPort adapter. You can see it in action in my latest video.
The setup process is straightforward: connect a USB-C to HDMI adapter to the phone, then link it to a display. I also found docking stations and USB-C hubs with video output work well too. Once connected, the phone displays a message indicating that it’s ready to mirror to an external display.
In practical use, I found it to work well but with some limitations. When using Google Photos, for example, the Pixel does not adjust the external display to the proper aspect ratio resulting in photos and videos not filling the external screen.
I also tested latency using a Sega Genesis emulator. The experience was decent, with some minor input lag typical of Bluetooth controllers connected to Android devices. Like the Google Photos example there are aspect ratio issues that result in a much smaller play area.
In a follow-up YouTube Short, I answered a viewer’s question about apps that support controlling the external display independently, using the video camera app Filmic Pro. Filmic Pro supports a “clean” output over HDMI while still having control overlays, monitors, etc. on the phone display. These features worked fine on the Pixel 8a I tested.
There are rumors that Google might introduce a desktop mode similar to Samsung’s DeX, which offers a more desktop-like experience when connected to an external display. Some beta versions reportedly include this feature, though it is not yet available in the current release. As it stands, the feature is purely mirroring with no additional desktop interface.
Overall, this update marks a positive step for Pixel phones, particularly for Pixel 8 series users. The ability to mirror the phone’s display directly to an external monitor via a USB-C to HDMI adapter adds versatility to the devices, especially useful for presentations, video playback, and casual gaming. However, this feature is currently limited to the Pixel 8, 8a, and 8 Pro, with no indications that it will be rolled out to older models.
Disclosure: the Google Pixel 8a phone featured in this video was provided free of charge by Google. No other compensation was received and no one has reviewed or approved this content before it was uploaded.
In my latest video, I take a look at HP’s Chromebook Plus 14. It is basic computing transportation but it’s decent basic computing transportation.
The laptop is priced at $529 (compensated affiliate link) and comes equipped with features that distinguish it from standard Chromebooks, including AI writing tools and advanced webcam controls. I covered those features in my prior Chromebook Plus videos.
A notable addition to Chromebook Plus is a one-year subscription to Google’s Gemini Advanced AI service, which typically costs $20 per month. This subscription includes two terabytes of cloud storage that works across any devices connected to the user’s Google account. This Chromebook will receive updates through June 2033, and should receive many new Chromebook Plus software features as they are developed.
Under the hood, the HP Chromebook Plus 14 is powered by an Intel i3-N305 processor, part of the Alder Lake lineup, which is known for its balance of performance and power efficiency. Paired with 8GB of RAM and 128GB of UFS storage, this configuration provided good performance for typical Chromebook tasks such as web browsing, word processing, and media consumption. The 14-inch display, while not suited for professional creative work due to its limited color gamut, offers sharp and readable text with a resolution of 1080p.
The device also includes a 1080p webcam, featuring a manual shutter for privacy and OS-level controls for background blurring and lighting adjustments. While the speakers provide adequate sound for conference calls, they may not satisfy audiophiles seeking high-quality music playback. The build quality, predominantly plastic, does not feel cheap and maintains a balance between durability and weight. It weighs 3.2 pounds (1.45 kg).
Connectivity options are good, with two full-service USB-C ports supporting display output, data transfer and power input, alongside a headphone/microphone jack and a USB-A port.
During my tests, the Chromebook Plus 14 managed tasks efficiently without significant issues. However, I recommend using web browsers for streaming services like Netflix and Disney Plus to ensure optimal resolution, as the Android apps for these services may not support full display resolution on Chromebooks.
Benchmark tests reinforced the Chromebook’s capabilities, with the device scoring well in web-based performance assessments. It also handled Android games and game streaming services like GeForce Now effectively, though it may struggle with titles designed for ARM processors. I was unable to get Genshin Impact to install, for example.
For those interested in running Linux applications, the Chromebook Plus 14 supports a variety of Linux apps, including LibreOffice, which operates smoothly on the device.
All in all, the “Plus” in Chromebook Plus does not add a price premium, but it is a good indication of a better-performing Chromebook. The performance on this HP is excellent, and its free year of cloud storage makes it a decent value for those looking for a no-frills laptop.
Disclosure: The HP Chromebook was provided on loan. No compensation was received for this review nor did anyone review or approve this before it was uploaded.