The Turing Pi 2 - A New Generation of Power-Efficient Cluster Computing

I’m unfortunately not the first person to come out with a review of the Turing Pi 2; not even the second or third! Jeff Geerling, Techno Tim, and LearnLinuxTV on YouTube have all gotten their hands on the board as well, and they’ve each done fantastic deep dives into the hardware and what it can do; Jeff’s video (16:40) even offers some comparisons with other hardware.

So while I want to touch on the hardware, I don’t want to dive into everything that the other reviewers have already covered. I’d like to help you understand, from my perspective, why the Turing Pi 2 might be one of the most exciting pieces of technology in the past few years, and why it might be my favorite piece of technology in 2022.

[Image: Turing Pi 2]

What is the Turing Pi 2?

The Turing Pi 2 is a Mini-ITX system board which allows up to four compute modules to be connected. Today, the board supports the Raspberry Pi Compute Module 4 as well as three NVIDIA Jetson CoM (Computer-on-Module) models. In the future, other compute modules may be compatible, but that future isn’t quite today. Some of the board’s features are shared across all of the nodes (RTC, Ethernet), while other connectors and components are tied to specific node slots.

To utilize the board to its fullest potential, you will want to fully populate all four node slots!

| Slot 1        | Slot 2    | Slot 3              | Slot 4                        |
| ------------- | --------- | ------------------- | ----------------------------- |
| mini-PCIe     | mini-PCIe | 2x SATA III (6Gbps) | 4x USB 3.0 ports              |
| SIM-card slot |           |                     | - 2x on rear IO               |
| GPIO 40-pin   |           |                     | - 2x on front-panel connector |
| HDMI          |           |                     |                               |
[Image: Turing Pi 2]

The Turing Pi 2 has dual Gigabit Ethernet NICs and a Realtek switch chip which supports Layer 2+ capabilities [1] as well as LACP [2]. With gigabit speeds on the NICs as well as to each compute module, every node can utilize full gigabit speed, a major improvement over the Turing Pi 1 and the Compute Module 3/3+, which were limited to 10/100Mbps full-duplex network speeds.

The board has an IP-enabled baseboard management controller [3], similar to the out-of-band management found on enterprise-grade servers. This will allow users to manage network settings, control power per node, access serial consoles, and flash nodes.

[1], [2]: While the pre-production unit does not have this functionality enabled, it is actively being worked on and will hopefully ship enabled on production boards.

[3]: The baseboard management controller is still a work-in-progress on the pre-production unit, but it is actively in development and should ship with the functionality and features listed above.

Why does the Turing Pi 2 matter?

While the Turing Pi 1 could run seven Raspberry Pi Compute Module 3+ units compared to the Turing Pi 2’s four nodes, the 1 GiB memory ceiling and slightly slower CPU made it feel more like a proof-of-concept than a real product. That’s not a knock against the Turing Pi 1 either; it’s absolutely a knock against the Compute Module 3+ and the limited memory and compute of each SoC module.

The Turing Pi 2 is perhaps the first compute module cluster board that can support truly powerful compute nodes. With a minimal power draw (under 60W under load) and a small physical footprint, you can fit a total of 16 CPU cores and 32 GiB of memory into a Mini-ITX form factor system. For the first time, you can truly build a cluster-in-a-box with nothing more than an Ethernet cable and a single power cord. Additionally, as arm64/aarch64 continues to push its way into mainstream systems, this cluster board makes it easy to add into any new or existing environment.
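To give a rough sense of how little is involved in bringing such a cluster-in-a-box online, here’s a minimal K3s bootstrap sketch. It assumes each node is already flashed with a Linux OS and reachable over SSH; the server IP address and token are placeholders.

```bash
# On the first node (the K3s server / control plane):
curl -sfL https://get.k3s.io | sh -

# Print the join token on the server node:
sudo cat /var/lib/rancher/k3s/server/node-token

# On each of the remaining three nodes, join as agents.
# 192.168.1.101 and <token-from-server> are placeholders for your
# server's address and the token printed above.
curl -sfL https://get.k3s.io | \
    K3S_URL=https://192.168.1.101:6443 K3S_TOKEN=<token-from-server> sh -

# Back on the server, confirm all four nodes are Ready:
sudo k3s kubectl get nodes -o wide
```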

As someone who absolutely believes in the future of the ARM64 ISA, this cluster board checks nearly every box I’ve wanted for years. Even Apple is migrating to its custom M1 silicon, which is built on the ARM instruction set architecture (ISA)! With Raspberry Pi CM4 units on this board working alongside existing Intel or AMD-based systems (amd64 architecture), you can natively build multi-architecture containers, or run just about anything you could ever want across the two sets of systems.
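As one hedged illustration of what a native multi-architecture build can look like, here’s a sketch using docker buildx with native builders registered over SSH on an amd64 machine and on a CM4 node; the hostnames, registry, and image name are all hypothetical.

```bash
# Register one native builder per architecture over SSH.
# "amd64-host" and "turingpi-node1" are placeholder hostnames.
docker buildx create --name multiarch ssh://user@amd64-host
docker buildx create --name multiarch --append ssh://user@turingpi-node1

# Build natively on each architecture and push a single multi-arch manifest.
# registry.example.com/myapp is a hypothetical image reference.
docker buildx build \
  --builder multiarch \
  --platform linux/amd64,linux/arm64 \
  --tag registry.example.com/myapp:latest \
  --push .
```

Because each node in the builder handles the platform it supports natively, neither side of the build has to fall back to QEMU emulation.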

The Turing Pi 2 matters because it’s the first tangible proof that ARM64 is not just for Apple or the enterprise (see: AWS Graviton); the ARM64 ISA has a real place in the homelab outside of individual Raspberry Pi nodes, and a real place in small and large businesses looking for small form factor (SFF) clustering (see: Chick-fil-A Edge Computing). I can absolutely see the Turing Pi 2 being fantastic for various segments of the Live Events and Entertainment industries, temporary Covid-19 testing sites in need of local compute, portable technical labs, or for rapidly deploying highly available and resilient systems in many small businesses. Being able to deploy a cluster of two or three of these boards in a significantly smaller footprint than nearly anything else on the market today, with minimal cabling and no moving parts, is an extraordinarily exciting concept.

Another major benefit of the Turing Pi 2 is that it is absolutely built for hybrid-cloud workloads right out of the box! I’ve been hosting services on the Turing Pi 2 from my homelab with Rancher K3s quite successfully for several weeks as of writing this. Some of the services that have been running are:

Internally, my network is reasonably normal. However, I host several of these services directly on the internet. At a high level, this is what my external network setup looks like:

[Diagram: Turing Pi 2 external network setup]

While the networking looks a little odd (and is worthy of another discussion entirely), here’s what matters:

```
➜  time curl https://whiteboard.danmanners.com -sI
HTTP/2 200
accept-ranges: bytes
content-type: text/html
date: Fri, 28 Jan 2022 04:26:17 GMT
etag: "61da4226-1d7b"
last-modified: Sun, 09 Jan 2022 02:02:14 GMT
server: nginx/1.21.5
content-length: 7547

curl https://whiteboard.danmanners.com -sI 0.02s user 0.02s system 10% cpu 0.304 total
```

The Turing Pi 2 nodes are the only nodes in my homelab running the Excalidraw (whiteboard.danmanners.com) software, and over the internet the latency is as low as around 20ms. That’s kind of awesome. Even with a reasonably complex network architecture and many hops, there does not appear to be any noticeable latency when accessing the services hosted on the Turing Pi 2 compute nodes. The nodes are snappy, responsive, and meet my needs perfectly.
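If you want to see where that time goes, curl can break a request down into DNS, connect, TLS, first-byte, and total times; this is a generic sketch against my whiteboard endpoint rather than anything specific to the Turing Pi 2.

```bash
# Break a single HTTPS request into its component timings (seconds).
curl -so /dev/null https://whiteboard.danmanners.com -w \
  'dns: %{time_namelookup}\nconnect: %{time_connect}\ntls: %{time_appconnect}\nttfb: %{time_starttransfer}\ntotal: %{time_total}\n'
```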

[Image: Turing Pi 2]

I’ve gone through and evaluated a mini-PCIe to NVMe (M.2/NGFF) adapter with a Samsung 980 NVMe SSD, and while performance is not what I would normally expect from a Samsung 980, I don’t believe that to be a limitation of the Turing Pi at this point; it’s plenty fast enough to act as persistent storage for building arm64 containers natively! In conjunction with Tekton CI/CD or buildah, you can even run multi-architecture builds natively on arm64 and amd64 nodes and push the final manifest up to a given container registry by leveraging nodeSelectors in your build pipeline (see the sketch below). While that is not in and of itself a feature of the Turing Pi 2, I’ve never had an easier time provisioning and managing a multi-node K3s cluster with ARM64 nodes. A single Ethernet cable and a single power cord make it quite fast to get all four nodes online.
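To make the nodeSelector idea concrete, here is a minimal, hypothetical sketch that pins a build pod to arm64 nodes using the standard kubernetes.io/arch label; an equivalent pod selecting amd64 would handle the other half of the build. The pod name, image, and command are placeholders, and a real Tekton pipeline would carry the volumes, registry credentials, and source checkout that this sketch omits.

```bash
# Pin a (hypothetical) build pod to arm64 nodes; swap the label value to
# amd64 for the other architecture. This omits the volumes, credentials,
# and source checkout a real build would need.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: build-arm64                  # placeholder name
spec:
  nodeSelector:
    kubernetes.io/arch: arm64        # standard, well-known node label
  restartPolicy: Never
  containers:
    - name: build
      image: quay.io/buildah/stable  # buildah's published image
      command: ["buildah", "build", "--tag", "registry.example.com/myapp:arm64", "."]
EOF
```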

What’s next?

While the baseboard firmware today has room for improvement, I trust that the Turing Pi software development team will continue to improve upon it. I think that the Turing Pi 2 hardware in its current state is very close to perfect, and I absolutely believe that this will be a wonderful product by the time it lands in the hands of enthusiasts, fans, and Kickstarter backers.

The Turing Pi 2 team will be launching their Kickstarter in the near future, and I cannot wait to purchase a second unit. While I truly believe that the Turing Pi 1 was more of a niche product, I think that, pending the availability of Raspberry Pi Compute Module 4 units during the global everything shortage, the Turing Pi 2 could be an absolute home run of a product for tech enthusiasts, anyone learning about Kubernetes, anyone with an interest in ARM64-based systems, and anyone wanting a low-power, efficient learning environment. Once it becomes available, I’m not sure there will be much else I can recommend that offers so much bang for your buck.

As far as pricing goes, that is yet to be determined as of the writing of this blog post. However, we have some known information and can make some reasonably safe assumptions:

All in, that puts an estimated low price for a fully built system around $550 USD, and a high estimate around $850 USD. While that may seem like a lot of money for a cluster of Raspberry Pi (or NVIDIA Jetson) nodes, once you start comparing it to building a cluster of other systems and boards you’ll find yourself in the same price range. The biggest challenge for any consumer these days is going to be actually finding Raspberry Pi Compute Module 4 units at MSRP. With the global technology market being where it is today and scalping/price gouging being as bad as it is, I’d highly recommend buying Compute Modules whenever and wherever you see them available if they’re at MSRP or close to it.

Final Notes & Disclosures

Turing Pi did not financially compensate me for this post; this is 100% because I love this board and what I think it means for the future of low-power cluster computing. They did, however, send me the pre-production board shown here as well as an NVIDIA Jetson Nano for evaluation, both at no cost.

If you’re looking to join the official Turing Pi Discord server, want to find Turing Pi on Twitter, or want to visit their website, click on the logos below!

If something I wrote isn’t clear, feel free to ask me a question or ask me to update it! You can ping me at daniel.a.manners@gmail.com, or tweet me @damnanners.