NVMe Six Pack: Beelink ME Mini Server / NAS

I recently got a look at a compact mini PC from Beelink called the ME Mini, and what makes it stand out is its ability to hold six NVMe drives internally. This device is built with network-attached storage in mind, and while I’m demoing it here with Unraid, it also supports other NAS operating systems and Linux distributions. It even ships with a licensed copy of Windows if you want to go that route.

You can see it in action in my latest review.

Inside, it runs on an Intel N150 processor—definitely on the lower end—but well-suited for light server tasks and Docker containers. You can find it on Amazon, or directly on Beelink’s website with a few more configuration options (compensated affiliate links).

My review unit included a Crucial-branded NVMe drive pre-installed in slot 4. All the bundled storage options appear to use Crucial, which I’ve been using myself for years.

The drives insert vertically and make contact with a heat pad that connects to a large central heatsink. That design does a noticeably better job at keeping drives cool than other compact NAS units I’ve tested recently. The slots are mostly single-lane (x1) PCIe interfaces, with slot 4 being the fastest thanks to its two-lane (x2) connection. Even so, the Crucial PCIe 4.0 SSD maxed out around 1.3 GB/s in that slot. The rest are slower, but in most NAS applications the bottleneck will be the network, not the drive speeds.

This unit includes two 2.5Gb Ethernet ports, which gave me around 200–250 MB/s throughput over the network during my tests. It’s unlikely you’ll saturate even the slowest drive slot with this kind of networking. Internally, the device has 12GB of soldered Crucial RAM. That’s not expandable, but for NAS and home server purposes, it’s enough. There’s also an Intel AX101 Wi-Fi 6 card if you’d rather go wireless.

Ports include two USB 3.2 Gen 2 ports (one USB-A, one USB-C), HDMI, USB 2.0, and a power jack—no external power brick here, just a built-in 45W supply. The casing is plastic but feels solid and clean, especially for a device that may sit out in the open. Video output supports 4K60, and I tested it with Ubuntu and Windows 11 Pro, both of which ran without issues. The hardware was properly recognized under Linux, and the preinstalled Windows license activated without a problem.

To test Unraid, I simply took the drives out of a GMKtec NAS I had been using and inserted them into this one. Everything came up immediately, including my external USB drive array. The only hiccup came from the USB-C port not playing nicely with my drive array; switching to the USB-A port resolved it, but I did lose my parity drive in the process. That seems more like a controller compatibility issue than a fatal flaw, though it’s something to be aware of.

I’m now considering moving entirely to solid-state storage, especially since this device gives me two more NVMe slots than the GMKtec box did. With Unraid’s parity setup, five slots can be used for storage and one for parity, giving me up to 20TB of usable space if I install 4TB drives across the board. I’ve only got about 9TB of data right now, so it’s feasible. 4TB NVMe storage is pretty pricey at the moment, so I’ll probably piece it together with smaller drives.

Power consumption is low—about 18–20 watts idle with five NVMe drives installed and a couple of Docker containers running. Under load, like when writing large files or playing back a Plex stream with hardware-accelerated 4K HDR tone mapping, it edged up to around 26–30 watts. Hardware transcoding works just fine in Unraid as long as you remember to add /dev/dri to your container configuration. I detail that in the video.
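If you’re wondering what that looks like in practice, here’s a minimal compose-style sketch of the same idea. In Unraid’s template UI it’s usually an added Device entry (or --device=/dev/dri under Extra Parameters), and the image name here is just a placeholder for whichever Plex container you run.

```yaml
# Sketch only: the compose-style equivalent of adding /dev/dri to a
# container. In Unraid's template UI this is usually an added Device
# entry (or "--device=/dev/dri" under Extra Parameters).
services:
  plex:
    image: lscr.io/linuxserver/plex   # assumption: use whichever Plex image you already run
    devices:
      - /dev/dri:/dev/dri             # pass the Intel iGPU through for Quick Sync transcoding
```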

Temperatures on the drives were impressive. A WD cache drive that previously idled at 69°C in the GMKtec unit now hovers around 50–51°C in this one. Under load, those numbers go up a bit, but they’re still dramatically better than before. It’s a testament to the improved passive cooling inside this unit. The fan is also whisper-quiet—much less noticeable than my spinning external drives.

One downside is thermal throttling under extended CPU load. A 3DMark Time Spy stress test resulted in a fail grade, with performance dropping around 16%. That shouldn’t impact most NAS workloads, but I wouldn’t use this for anything that demands sustained CPU performance.

Overall, this mini PC has proven to be a capable, efficient little box for self-hosting in tight spaces. I’ve got some reconfiguring to do now—time to dig through my parts bin and see which higher-capacity NVMe drives I can consolidate onto this unit. It feels like there’s real potential to go all solid-state here and simplify the setup.

PeerTube: The YouTube Alternative Nobody’s Talking About

In my latest video, I share my insights on a lesser-known yet intriguing option in the realm of video sharing platforms: PeerTube. This open-source application offers a unique approach to video hosting and sharing, diverging from the centralized control typical of major platforms like YouTube and instead opting for a “federated” approach like Mastodon.

What does “federated” mean? Each “instance” of PeerTube runs on a self-hosted server spun up by an individual or group, much like an ordinary web server. What’s different here is that PeerTube instances can talk to each other, giving a user on one instance access to content across many other instances.

In this image you can see my personal subscription feed where the top three videos are uploaded to my server, but I’m also pulling in a channel called “Veronica Explains” that resides on TILvids.com:

So even though these videos reside on different servers, federation lets users enjoy an experience similar to that of a centralized platform. Playback data even gets sent back to TILvids.com.

As a viewer, the experience on PeerTube is quite similar to that of YouTube. The interface is user-friendly, and videos are chunked for efficient streaming. A notable difference is PeerTube’s peer-to-peer bandwidth sharing, which reduces server load by having viewers who are watching the same video share chunks of data among themselves. This not only enhances efficiency but also keeps hosting costs manageable.

While there’s no direct monetization through ads at the moment, creators can link to their support pages, offering an avenue for viewer contributions. The platform also supports plugins, potentially opening doors to various customization and monetization options in the future.

Setting up a PeerTube instance is surprisingly straightforward, especially with tools like Docker. This ease of deployment means that anyone with basic technical knowledge can start their own video sharing platform. The administrative interface of PeerTube is robust, offering a range of configuration options from appearance settings to user management and video transcoding settings.
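To give a feel for what that deployment involves, here’s a trimmed-down sketch loosely based on the project’s published docker-compose example. The real file also configures secrets, SMTP, and a reverse proxy, so treat the image tags, variable names, and paths below as assumptions to verify against the current PeerTube docs.

```yaml
# Trimmed-down sketch of a PeerTube stack (not a complete production config).
services:
  postgres:
    image: postgres:13-alpine
    environment:
      POSTGRES_USER: peertube
      POSTGRES_PASSWORD: changeme            # assumption: pick your own credentials
      POSTGRES_DB: peertube
    volumes:
      - ./db:/var/lib/postgresql/data        # persistent database storage

  redis:
    image: redis:6-alpine
    volumes:
      - ./redis:/data

  peertube:
    image: chocobozzz/peertube:production-bookworm   # assumption: check the current recommended tag
    depends_on: [postgres, redis]
    ports:
      - "9000:9000"
    environment:
      PEERTUBE_WEBSERVER_HOSTNAME: peertube.example.com   # assumption: your public hostname
      PEERTUBE_DB_HOSTNAME: postgres
      PEERTUBE_DB_USERNAME: peertube
      PEERTUBE_DB_PASSWORD: changeme
    volumes:
      - ./data:/data                         # videos, thumbnails, and other uploads live here
      - ./config:/config
```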

PeerTube’s potential extends beyond just an alternative social media platform. It can be an excellent solution for corporate intranets or educational institutions needing a private, controlled environment for video sharing. The platform’s adaptability makes it suitable for a variety of uses, from hosting corporate training videos to creating a community-driven video sharing space.

Behind PeerTube is Framasoft, a French nonprofit dedicated to decentralizing the Internet. They are not just focused on video sharing but are developing a suite of tools to replicate the functionality of popular internet applications, all with a focus on privacy and user control.

In my exploration of PeerTube, I’ve found it to be more than just a YouTube alternative. It’s a statement about the direction of the internet, a throwback to the days when the web was a patchwork of individual sites and communities, each with its own identity. PeerTube brings back that sense of individual ownership and control, blended with modern technology and the interconnectedness of today’s platform-centric Internet.

Will it replace YouTube? Of course not. But what it does do is offer an alternative and an example of how a better Internet might look.

Running Plex in a Docker Container on Synology is Super Easy

Over the last couple of months I’ve been playing around with a bunch of self-hosted projects using Docker containers on my Synology NAS. In my most recent sponsored video for Plex, we take a look at spinning up a Plex server inside a container using Synology’s new Container Manager on DSM 7.2.

One might wonder, why use Docker when you can simply install Plex from the Synology package center? The answer lies in the flexibility and advantages Docker offers. Docker containers provide backup and migration opportunities that are more straightforward than other methods. They also offer a level of isolation, enhancing security. In the case of Synology specifically, the Docker versions tend to get updated more frequently, ensuring you always have the latest features.

Before diving in, ensure your Synology NAS is compatible with Docker. Synology’s website has a list of compatible devices that work with their Container Manager. If you are a Plex Pass holder and want to enable hardware transcoding, you’ll also need to make sure your Synology NAS has an Intel processor that supports Quick Sync video encoding. You can learn more about video transcoding in another video I made on that topic.

My video will take you step by step through the installation process by using a Docker Compose file to configure the container. If you’d like to see the one I’m using, you can download it here.
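For a rough idea of the shape of such a file (this is not the exact one from the video), here’s a sketch using the official plexinc/pms-docker image. The /volume1 paths, timezone, and claim token are placeholders you’d swap for your own.

```yaml
# Sketch of a typical Plex compose file for a Synology NAS.
services:
  plex:
    image: plexinc/pms-docker:latest        # official Plex image
    container_name: plex
    network_mode: host                      # simplest way to keep discovery and remote access working
    environment:
      TZ: America/New_York                  # assumption: set your own timezone
      PLEX_CLAIM: claim-xxxxxxxx            # placeholder: grab a fresh token from plex.tv/claim
    volumes:
      - /volume1/docker/plex/config:/config       # Plex database and settings
      - /volume1/docker/plex/transcode:/transcode # scratch space for transcodes
      - /volume1/media:/data                      # assumption: wherever your media shares live
    devices:
      - /dev/dri:/dev/dri                   # optional: Intel Quick Sync for hardware transcoding
    restart: unless-stopped
```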

Setting up Plex on Synology NAS using Docker was one of the more straightforward Docker projects I’ve undertaken. The process is efficient, and the benefits, especially in terms of backup and migration, make it worth considering for your next install.

New Synology How To: Using Docker Containers with the new Synology Container Manager

In my latest video we veer off into the nerdy weeds with a detailed step-by-step tutorial about how to spin up and manage complex Docker applications using the new Synology Container Manager that can be found in DSM 7.2.

As I mentioned in my previous video about my self hosted projects, there are hundreds of amazing open source applications out there that offer similar functionality to popular cloud apps. I received so many questions and comments from that video about how I get them running via Docker on a Synology NAS, so that’s where this video comes in.

Because the Docker containers run in an isolated environment, they’re a little more secure than just running applications on the NAS directly. They’re also very easy to back up and move to another server if needed. Just copy the folder over to the new machine, rebuild the containers with a mouse click, and migration is done!

In the video I demonstrate installing Wallabag, an open source “read later” application similar to Pocket and Instapaper. The way it works is that Wallabag will download an archive of a provided URL, transform the web page into a readable format with just the content, and make it available for offline reading via a web browser. The Wallabag app for Android and iOS can sync the Wallabag container’s data with a phone or tablet.

Wallabag runs on the NAS in a container and its data is stored locally there as well. Using Tailscale I’m able to connect back to the application from anywhere in the world securely without having to open up any ports on my router.

I chose Wallabag for this demonstration because it’s an example of a project that consists of multiple Docker containers working in concert with each other. In this case there’s the main Wallabag application in one container, a MySQL database server in another, and a third container running a Redis caching server.

In the past it was possible to get a project like this working but it had to be done outside Synology’s Docker app using the command line or another tool. Container Manager now makes it possible to build and run applications like this without having to use anything else.

In the tutorial I detail the steps of finding and editing Wallabag’s Docker Compose file and building the application as a “project” inside of Container Manager. One of the important things in this process is pointing the containers to a directory on the NAS for storing data. Containers are considered expendable with each update or build, so user data has to be mapped to a persistent storage location on the NAS. After troubleshooting a few minor error codes I was able to get the Wallabag project built and operating relatively quickly and reliably on the NAS.
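As a rough illustration of what that project file looks like (not the exact compose file from the video), here’s a trimmed sketch of the three services with their data mapped to the usual /volume1/docker layout on a Synology NAS. The environment variable names follow the wallabag image’s documentation and should be double-checked, since the full example sets several more database options.

```yaml
# Rough sketch of the three-container Wallabag project.
services:
  wallabag:
    image: wallabag/wallabag
    ports:
      - "8082:80"                                   # assumption: pick any free port on the NAS
    environment:
      SYMFONY__ENV__DATABASE_DRIVER: pdo_mysql
      SYMFONY__ENV__DATABASE_HOST: db               # points at the database container below
      SYMFONY__ENV__DATABASE_PASSWORD: changeme
      SYMFONY__ENV__DOMAIN_NAME: "http://nas.local:8082"   # assumption: your NAS address
      SYMFONY__ENV__REDIS_HOST: redis
    volumes:
      - /volume1/docker/wallabag/images:/var/www/wallabag/web/assets/images

  db:
    image: mariadb                                  # MySQL-compatible database
    environment:
      MYSQL_ROOT_PASSWORD: changeme
    volumes:
      - /volume1/docker/wallabag/db:/var/lib/mysql  # persistent database storage

  redis:
    image: redis:alpine
```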

While all of this might seem a bit daunting vs. finding an app and hitting the install button, containerized applications are in many ways the new standard for running open source software like this. There is some up-front complexity, but having what are essentially portable versions of very robust server applications saves far more time down the road. Should something ever happen to my NAS I just need to restore the backup files to a new location, click the build button, and I’m back exactly where I left off.

Let me know what you think in the video’s comments! Also be sure to share some of the containers you’ve found to be most useful.

Disclosure: Synology is an occasional sponsor here on the channel and they provided me with the NAS hardware used in the review free of charge. However, they did not sponsor this video, nor did they provide any input or approval prior to publishing.

My Latest Self Hosted Synology Projects

My latest video involves some of the home networking projects I’ve been working on recently with my Synology NAS devices.

One of the projects I’ve been working on involves setting up a private network using Tailscale, a great (and free) personal VPN solution that allows you to connect remote devices together without having to expose ports on your router. I covered the basics of Tailscale in a previous video.

I’ve set up Tailscale on my primary NAS at home and on another Synology NAS at my mother’s house. Using Synology’s Hyper Backup software, I’ve been able to back up about 3 terabytes of data from my house to hers. This has provided me with a secure and efficient way to store a large amount of data off-site. Now that the initial 3TB is loaded, subsequent backups will be much smaller as just the changes will be sent over.

My mom is on Frontier’s 500 megabit symmetrical plan, and the data rates have mostly been as advertised during this very long transfer.

Another project I’ve been working on involves Docker, which runs on Synology’s Plus series devices. Docker containers make it easy to host sophisticated self-hosted web apps with just a few clicks. I’ve been using Docker to host a few applications, including Pingvin, a self-hosted alternative to WeTransfer. This allows me to upload and share large files without having to rely on third-party services.
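For anyone curious, a minimal compose sketch for Pingvin Share looks something like this. The image name, port, and data path are assumptions based on the project’s README, so verify them before deploying.

```yaml
# Minimal sketch for Pingvin Share (details are assumptions; check the project's README).
services:
  pingvin-share:
    image: stonith404/pingvin-share
    ports:
      - "3000:3000"
    volumes:
      - /volume1/docker/pingvin/data:/opt/app/backend/data   # assumption: persistent upload storage
    restart: unless-stopped
```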

To ensure the security of my home network, I’ve been using Cloudflare’s Zero Trust Tunnel. This service allows me to expose certain services to the public internet without exposing my home IP address. It’s a safer alternative to opening up a port and provides an additional layer of security.
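The connector itself can also run as a container on the NAS. Here’s a minimal sketch, with the tunnel token as a placeholder you’d copy from the Zero Trust dashboard (the hostname-to-service routing is configured there as well).

```yaml
# Sketch of running the Cloudflare tunnel connector as a container.
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run --token YOUR_TUNNEL_TOKEN   # placeholder token from the dashboard
    restart: unless-stopped
```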

I’ve also been experimenting with PeerTube, an open-source application that allows you to create your own self-hosted version of YouTube. I’ve been able to host videos on my own server, which has given me a lot of control over my content. The software also uses a peer-to-peer system to distribute videos, which helps reduce bandwidth usage.

These projects have given me a deeper understanding of the potential of home networking for those lucky enough to have fast fiber optic connections. They’ve allowed me to explore new technologies, improve the security of my network, and gain more control over my data.

I’m excited to continue expanding my “home lab” and sharing my experiences with you. I believe that these projects can provide valuable insights for anyone interested in home networking, and I encourage you to explore these technologies for yourself!