Head in the clouds, boots on the ground.
Self-hosted infrastructure is the first step toward voluntary apotheosis.
–Unknown
When people think of The Cloud(tm), they think of ubiquitous computing. Whatever you need, whenever you need it, it’s there from the convenience of your mobile, from search engines to storage to chat. However, as the latest Amazon and Cloudflare outages have demonstrated, all it takes is a single glitch to knock out half the Internet as we know it.
This is, as they say, utter bollocks. Much of the modern world spent a perfectly good day, one that could have gone toward procrastinating, shitposting, and occasionally doing something productive, bereft of Slack, Twitter, Autodesk, Roku, and phone service through Vonage. While thinking about this fragile state of affairs in the shower this morning, I realized that, for the somewhat technically inclined and their respective cohorts, there are ways to mitigate the risks of letting other people run the stuff you need every day. Let us consider the humble single board computer: a computing device the size of two decks of cards at most, or a €1 coin at the very least. While this probably won’t help you keep earning a paycheque, it will help you worry less about the next time Amazon decides to fall on its face.
Something I never really understood was the phenomenon of people recreating the data center in miniature with stacks of Raspberry Pis shaped like a scaled-down telecom rack. There’s nothing wrong with that – it’s as valid a way of building stuff out as anything else. However… do you really have to go this route? Single board computers are small enough that they can be placed anywhere and everywhere in such a manner that they’re practically invisible. At any time you could be surrounded by a thin fog of computing power, doing things you care about, completely out of sight and out of mind, independent of AWS, Azure, or any other provider’s health. That fog could be as large or as small as you need, expanded only as needs dictate, cheap to upgrade or replace, and configured to automatically update and upgrade itself to minimize management. Some visionaries imagine a world in which any random thing you lay eyes upon may have enough inexpensive smarts built in to crunch numbers – why not take a step in that direction?
A relatively powerful wireless router running OpenWRT, for maximum customizability and stability, might be a good place to start. Speaking only as someone who’s used it for a couple of years, OpenWRT is largely “set it and forget it,” with an up-front investment of time measured in an afternoon. Additionally, using it as your home wireless access point in no way compromises getting stuff done every day. If nothing else, it might be more efficient than the crappy wireless access point-cum-modem that your ISP makes you use.
Now onto the flesh and bones of your grand design – where are you going to put however many miniature machines you need? Think back to how you used to hide your contraband. The point of this exercise isn’t to have lots of blinky lights all over the place (nerd cred aside); the point is to have almost ubiquitous computing power that goes without notice unless the power goes out (in which case you’re up a creek, no matter what). Consider the possibility of having one or two SBCs in a hollowed-out book, either hand-made or store-bought (“book safes”), with holes drilled through the normally not-visible page side of the book to run power lines to a discreet power outlet. Think about using a kitschy ceramic skull on your desk containing a Raspberry Pi Zero W, a miniature USB hub, and a couple of flash drives. How about a stick PC stuck into the same coffee cup you keep your pens and pencils in?
Maybe a time will come when you need to think bigger. Let’s say that you want to spread your processing power out a bit so it’s not all in the same place. Sure, you could put a machine or two at a friend’s house, your parents’ place, or what have you… but why not go a little further afield? Consider a RasPi with a USB cellular modem, a pre-paid SIM card, and SSH over Tor (to commit the odd bit of system administration) hanging out on the back of your desk at the office (remember those?) or stashed behind the counter of a friendly coffee shop.
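If you’re curious what “the odd bit of system administration” looks like in practice, here’s a minimal sketch that tunnels an SSH session through Tor’s usual local SOCKS port from Python. The .onion address, username, and key path are placeholders, and it assumes the far-flung RasPi already exposes SSH as a hidden service.

```python
# Minimal sketch: SSH to a hidden service through Tor's SOCKS proxy.
# Requires: pip install PySocks paramiko
import os
import socks      # PySocks
import paramiko

ONION_ADDR = "exampleexampleexample.onion"  # hypothetical hidden service
SSH_PORT = 22

# Open a TCP connection routed through Tor (rdns=True lets Tor resolve the .onion).
sock = socks.socksocket()
sock.set_proxy(socks.SOCKS5, "127.0.0.1", 9050, rdns=True)
sock.connect((ONION_ADDR, SSH_PORT))

# Hand the already-connected socket to Paramiko and poke at the box.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    ONION_ADDR,
    username="pi",  # placeholder account
    key_filename=os.path.expanduser("~/.ssh/id_ed25519"),
    sock=sock,
)
_, stdout, _ = client.exec_command("uptime && df -h /")
print(stdout.read().decode())
client.close()
```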
Which moves us right along to the question: what do you actually run on a personal cluster? Normally, people build personal clusters to experiment with containerization technologies like Docker (for encapsulating applications in such a way that they “think” they’re all by their lonesome on a server) and Kubernetes or the cut-down k3s (for doing most of the sysadmin work of juggling those containers). Usually a web panel of some kind is used to manipulate the cluster. This is quite handy due to the myriad of self-hosted applications which happen to be Dockerized. The appeal of such an architecture is that the user can ask for a containerized application with a couple of taps on their mobile’s screen; Kubernetes then looks at its cluster, figures out which node has enough memory and disk space available for the application and its dependencies, and starts it without further user intervention. Unfortunately, this comes at the expense of having to do just about everything in a Dockerized or Kubernetes-ized way. In a containerized environment things don’t like to play nicely with traditionally installed stuff, or at least not without a lot of head scratching, swearing, and tinkering.
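To make the hand-waving a little more concrete, here’s a rough sketch of what those “couple of taps” translate to under the hood, using the official Kubernetes Python client. The app name and container image are made-up stand-ins; any Dockerized self-hosted service would do, and k3s speaks the same API.

```python
# Rough sketch: ask the Kubernetes/k3s API to run a containerized app
# somewhere in the cluster and let the scheduler pick a node with room.
# Requires: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config; k3s can export one of these

# Hypothetical example app -- substitute whatever you actually self-host.
container = client.V1Container(
    name="wiki",
    image="docker.io/library/nginx:stable",  # stand-in image
    ports=[client.V1ContainerPort(container_port=80)],
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="wiki"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "wiki"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "wiki"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
print("Deployment submitted; the scheduler finds a node that can fit it.")
```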
We can think much bigger, though. Say you’re setting up a virtual space for your family. Or your affinity group. Or a non-profit organization. There are some excellent all-in-one systems out there, like Yunohost and Sandstorm, which offer supported applications galore that can be yours for a couple of mouse clicks (Sandstorm is only available for the x86-64 platform right now, though there’s nothing that says you can’t add the odd NUC or VPS to your exocortex).
How about easy-to-use storage? It’s always good to have someplace to keep your data as well as back it up (you DO make backups, don’t you?). You could do a lot worse than a handful of 256 GB or 512 GB flash drives plugged into your fog and tastefully scattered around the house. To access them you can let whatever applications you’re running do their thing, or you can stand up a copy of MinIO on each server (also inside of Docker containers) which, as far as anything you care about is concerned, is just Amazon’s S3 with a funny hostname.
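To give you an idea of just how funny that hostname is allowed to be, here’s a small sketch using boto3 pointed at a hypothetical MinIO box somewhere in your fog. The address and credentials are placeholders for your own setup.

```python
# MinIO really is "S3 with a funny hostname" as far as your code is concerned:
# point boto3 at the node's endpoint and the usual S3 calls just work.
# Requires: pip install boto3
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://192.168.100.12:9000",  # placeholder MinIO node
    aws_access_key_id="CHANGE_ME",
    aws_secret_access_key="CHANGE_ME_TOO",
)

# Make a bucket for backups and push a tarball into it.
s3.create_bucket(Bucket="backups")
s3.upload_file("home-backup.tar.gz", "backups", "home-backup.tar.gz")

# List what's there, just to prove it worked.
for obj in s3.list_objects_v2(Bucket="backups").get("Contents", []):
    print(obj["Key"], obj["Size"])
```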
Pontification about where and what safely behind us, the question that now arises is: how do we turn all this stuff into a fog? If you have machines all over the place, and some of them aren’t at home (which means you can’t necessarily poke holes in any intervening firewalls), how can the different parts of the cluster talk to each other? Unsurprisingly, such a solution already exists in the form of Nebula, which Slack invented to do exactly what we need. There’s a little up-front configuration that has to be done, but once the certificates are generated and Nebula is installed on every system you don’t have to mess with it anymore unless there are additional services that you want to expose. It helps to think of it as a cut-down VPN which requires much less fighting and swearing but gives you much more “it just works(tm)” than a lot of things. Sandstorm on a VPS? MinIO running on something the size of a deck of cards at your friendly local gaming shop? Nebula can make them look like they’re right next to one another, no muss, no fuss. Sure, you could use a Tor hidden service to accomplish the same thing (and you really should set up one or two, if only so you can log in remotely), but Nebula is a much better solution in this regard.
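Once the overlay is up, your software neither knows nor cares where the hardware actually lives; every node is just another address. A trivial sketch, with entirely hypothetical overlay addresses and ports, to illustrate the point:

```python
# Each entry is a node on the (hypothetical) Nebula overlay network; whether
# it sits in the next room or behind a coffee shop counter is invisible here.
import socket

FOG = {
    "sandstorm-vps": ("192.168.100.10", 443),
    "minio-gameshop": ("192.168.100.12", 9000),
    "raspi-office": ("192.168.100.20", 22),
}

for name, (addr, port) in FOG.items():
    try:
        with socket.create_connection((addr, port), timeout=5):
            print(f"{name:16} {addr}:{port} reachable")
    except OSError as err:
        print(f"{name:16} {addr}:{port} unreachable ({err})")
```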
Setting up this kind of infrastructure might take anywhere from a couple of days to a couple of weeks, depending on your level of skill, ability to travel, availability of equipment, and the relative accessibility of where you want to stash your stuff. Of course, not everybody needs such a thing. Some people are more than happy to keep using Google applications or Microsoft 365, or what have you. While some may disagree with or distrust these services (with good reason), ultimately people use what they use because it works for them. A certain amount of determination is required to de-FAANGify one’s life, and not everyone has that need or use case. Still, it is my hope that a few people out there give it serious consideration.
By day the hacker known as the Doctor does information security for a large software-as-a-service company (whom he does NOT speak for), with a background in penetration testing, reverse engineering, and systems architecture. By night the Doctor doffs his disguise, revealing his true nature as a transhumanist cyborg (at last measurement, comprised of approximately 30% hardware and software augmentations), technomancer (originally trained in the chaos sorta-tradition), cheerful nihilist, lovable eccentric, and open source programmer. His primary organic terminal has been observed presenting at a number of hacker conventions over the years on a variety of topics. Other semi-autonomous agents of his selfhood may or may not have manifested at other fora. The Doctor is a recognized ambassador of the nation of Magonia, with all of the rights and privileges thereof.