Head in the clouds, boots on the ground.

Self-hosted infrastructure is the first step toward voluntary apotheosis.

–Unknown

When people think of The Cloud(tm), they think of ubiquitous computing.  Whatever you need, whenever you need it, it’s there from the convenience of your mobile, from search engines to storage to chat.  However, as the latest Amazon and Cloudflare outages have demonstrated, all it takes is a single glitch to knock out half the Internet as we know it.

This is, as they say, utter bollocks.  Much of the modern world spent a perfectly good day – one that could have been spent procrastinating, shitposting, and occasionally doing something productive – bereft of Slack, Twitter, Autodesk, Roku, and phone service through Vonage.  While thinking about this fragile state of affairs in the shower this morning, I realized that, for the somewhat technically inclined and their respective cohorts, there are ways to mitigate the risks of letting other people run the stuff you need every day.  Let us consider the humble single board computer: computing devices the size of two decks of cards at most, a €1 coin at the very least.  While this probably won’t help you keep earning a paycheque, it would help you worry less about the next time Amazon decides to fall on its face.

Something I never really understood was the phenomenon of people recreating the data center in miniature with stacks of Raspberry Pis shaped like a scaled down telecom rack.  There’s nothing wrong with that – it’s as valid a way of building stuff out as anything else.  However… do you really have to go this route?  Single board computers are small enough that they can be placed anywhere and everywhere in such a manner that they’re practically invisible.  At any time you could be surrounded by a thin fog of computing power, doing things you care about, completely out of sight and out of mind, independent of AWS, Azure, or any other provider’s health.  That fog could be as large or as small as you need, expanded only as needs dictate, cheap to upgrade or replace, and configured to automatically update and upgrade itself to minimize management.  Some visionaries imagine a world in which any random thing you lay eyes upon may have enough inexpensive smarts built in to crunch numbers – why not take a step in that direction?

A relatively powerful wireless router running OpenWRT, for maximum customizability and stability, might be a good place to start.  Speaking only as someone who’s used it for a couple of years, OpenWRT is largely “set it and forget it,” with an up-front investment of time measured in an afternoon.  Additionally, using it as your home wireless access point in no way compromises getting stuff done every day.  If nothing else, it might be more efficient than the crappy wireless access point-cum-modem that your ISP makes you use.
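The “configured to automatically update itself” part can be sketched with a single cron entry on the router.  This is a hypothetical fragment, not a turnkey recipe: the package commands are standard OpenWRT `opkg`, but the schedule is made up, and unattended upgrades of base packages on OpenWRT can occasionally misbehave, so some people prefer to only refresh the lists automatically and review upgrades by hand.

```
# Hypothetical entry for /etc/crontabs/root on an OpenWRT router
# (busybox crond reads this file).  Every Sunday at 04:00, refresh
# the package lists and upgrade anything that reports as upgradable.
0 4 * * 0 opkg update && opkg list-upgradable | cut -f 1 -d ' ' | xargs -r -n 1 opkg upgrade
```

After editing the file, restart the cron service (`/etc/init.d/cron restart`) so the new entry is picked up.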

Now onto the flesh and bones of your grand design – where are you going to put however many miniature machines you need?  Think back to how you used to hide your contraband.  The point of this exercise isn’t to have lots of blinky lights all over the place (nerd cred aside), the point is to have almost ubiquitous computing power that goes without notice unless the power goes out (in which case you’re up a creek, no matter what).  Consider the possibility of having one or two SBCs in a hollowed-out book, either hand-made or store-bought (“book safes”), with holes drilled through the normally not-visible page side of the book to run power lines to a discreet power outlet.  Think about using a kitschy ceramic skull on your desk containing a Raspberry Pi Zero W, a miniature USB hub, and a couple of flash drives.  How about a stick PC stuck into the same coffee cup you keep your pens and pencils in?

Maybe a time will come when you need to think bigger.  Let’s say that you want to spread your processing power out a bit so it’s not all in the same place.  Sure, you could put a machine or two at a friend’s house, your parents’ place, or what have you… but why not think a little bigger?  Consider a RasPi with a USB cellular modem, a pre-paid SIM card, and SSH over Tor (to commit the odd bit of system administration) hanging out on the back of your desk at the office (remember those?) or stashed behind the counter of a friendly coffee shop.
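To sketch the “SSH over Tor” part: a Tor onion service gives the stashed box an address that’s reachable from anywhere, with no port forwarding on whatever network the machine happens to be borrowing.  A minimal torrc fragment might look like the following (the directory path is a made-up example in a Debian-ish layout; the two directives themselves are standard Tor configuration):

```
# Hypothetical torrc snippet: expose the local SSH daemon as an onion service.
HiddenServiceDir /var/lib/tor/ssh_onion/
HiddenServicePort 22 127.0.0.1:22
```

After restarting Tor, the `hostname` file in that directory contains the generated .onion address; from your end, `torsocks ssh` (or an equivalent ProxyCommand) against that address gets you a shell.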

Which moves us right along to the question: what do you actually run on a personal cluster?  Normally, people build personal clusters to experiment with containerization technologies like Docker (for encapsulating applications in such a way that they “think” they’re all by their lonesome on a server) and Kubernetes or the cut-down k3s (for doing most of the sysadmin work of juggling those containers).  Usually a web panel of some kind is used to manipulate the cluster.  This is quite handy due to the myriad self-hosted applications which happen to be Dockerized.  The appeal of such an architecture is that the user can, with a couple of taps on their mobile’s screen, specify that a containerized application should be started; Kubernetes then looks at its cluster, figures out which node has enough memory and disk space available to install the application and its dependencies, and does so without further user intervention.  Unfortunately, this comes at the expense of having to do just about everything in a Dockerized or Kubernetes-ized way.  Containerized things don’t like to play nicely with traditionally installed stuff, or at least not without a lot of head scratching, swearing, and tinkering.
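To make that concrete, here is a minimal sketch of what asking the cluster for one such application looks like in Kubernetes/k3s terms.  The manifest structure is the standard Deployment format; the application chosen, the image tag, and the resource numbers are purely illustrative:

```yaml
# Hypothetical manifest: run one small self-hosted app on the fog.
# (FreshRSS is used here only as an example of a Dockerized app.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: freshrss
spec:
  replicas: 1
  selector:
    matchLabels:
      app: freshrss
  template:
    metadata:
      labels:
        app: freshrss
    spec:
      containers:
      - name: freshrss
        image: freshrss/freshrss:latest
        ports:
        - containerPort: 80
        resources:
          requests:          # this is what lets the scheduler pick a
            memory: "128Mi"  # node with enough room to spare
            cpu: "250m"
```

Applying it with `kubectl apply -f` hands the rest to the scheduler, which finds a node with the requested memory and CPU available and starts the container there.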

We can think much bigger, though.  Say we’re setting up a virtual space for your family.  Or your affinity group.  Or a non-profit organization.  There are some excellent all-in-one systems out there, like Yunohost and Sandstorm, which offer supported applications galore, yours for a couple of mouse clicks (Sandstorm is only available for the x86-64 platform right now, though there’s nothing that says you can’t add the odd NUC or VPS to your exocortex).

How about easy to use storage?  It’s always good to have someplace to keep your data as well as back it up (you DO make backups, don’t you?).  You could do a lot worse than a handful of 256 GB or 512 GB flash drives plugged into your fog and tastefully scattered around the house.  To access them you can let whatever applications you’re running do their thing, or you can stand up a copy of MinIO on each server (also inside of Docker containers) which, as far as anything you care about is concerned, is just Amazon’s S3 with a funny hostname.
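As a sketch of standing up one of those MinIO nodes: the image and flags below are MinIO’s standard ones (verify against the current MinIO docs), while the credentials and the flash-drive mount point are made up for illustration.

```shell
# Hypothetical: MinIO in a container, backed by a flash drive
# mounted at /mnt/usb0 on this node of the fog.
docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -v /mnt/usb0:/data \
  -e MINIO_ROOT_USER=fogadmin \
  -e MINIO_ROOT_PASSWORD=change-me-please \
  minio/minio server /data --console-address ":9001"
```

Anything that speaks the S3 API can then be pointed at port 9000 of that host as its endpoint, with the web console on 9001.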

With pontification about the where and the what safely behind us, the question that now arises is: how do we turn all this stuff into a fog?  If you have machines all over the place, and some of them aren’t at home (which means that we can’t necessarily poke holes in any intervening firewalls), how can the different parts of the cluster talk to each other?  Unsurprisingly, such a solution already exists in the form of Nebula, which Slack invented to do exactly what we need.  There’s a little up-front configuration that has to be done, but once the certificates are generated and Nebula is installed on every system, you don’t have to mess with it anymore unless there are additional services that you want to expose.  It helps to think of it as a cut-down VPN which requires much less fighting and swearing but gives you much more “it just works(tm)” than a lot of things.  Sandstorm on a VPS?  MinIO running on a deck-of-cards-sized machine at your friendly local gaming shop?  Nebula can make them look like they’re right next to one another, no muss, no fuss.  Sure, you could use a Tor hidden service to accomplish the same thing (and you really should set up one or two, if only so you can log in remotely), but Nebula is a much better solution in this regard.
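That up-front configuration amounts to minting certificates and dropping a config file on each node.  A rough sketch with Nebula’s own tooling follows; the CA name, hostnames, and the 192.168.100.0/24 overlay range are invented for illustration, and the config file contents are left to Nebula’s documentation:

```shell
# One-time: create a certificate authority for your fog.
nebula-cert ca -name "home-fog"
# Sign a certificate per node, assigning each an overlay address.
nebula-cert sign -name "lighthouse" -ip "192.168.100.1/24"
nebula-cert sign -name "skull-pi" -ip "192.168.100.2/24"
# Copy ca.crt plus each host's .crt/.key to that host, point every
# node's config.yml at the lighthouse's public address, then run:
# nebula -config /etc/nebula/config.yml
```

One node with a publicly reachable address acts as the “lighthouse” the others use to find each other; everything else can sit behind NAT or a cellular modem.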

Setting up this kind of infrastructure might take anywhere from a couple of days to a couple of weeks, depending on your level of skill, ability to travel, availability of equipment, and relative accessibility of where you want to stash your stuff.  Of course, not everybody needs such a thing.  Some people are more than happy to keep using Google applications or Microsoft 365, or what have you.  While some may disagree with or distrust these services (with good reason), ultimately people use what they use because it works for them.  A certain amount of determination is required to de-FAANGify one’s life, and not everyone has that need or use case.  Still, it is my hope that a few people out there give it serious consideration.

By day the hacker known as the Doctor does information security for a large software-as-a-service company (whom he does NOT speak for), with a background in penetration testing, reverse engineering, and systems architecture. By night the Doctor doffs his disguise, revealing his true nature as a transhumanist cyborg (at last measurement, comprised of approximately 30% hardware and software augmentations), technomancer (originally trained in the chaos sorta-tradition), cheerful nihilist, lovable eccentric, and open source programmer. His primary organic terminal has been observed presenting at a number of hacker conventions over the years on a variety of topics. Other semi-autonomous agents of his selfhood may or may not have manifested at other fora.  The Doctor is a recognized ambassador of the nation of Magonia, with all of the rights and privileges thereof.

The post Head in the clouds, boots on the ground. appeared first on Mondo 2000.

Who Is Stellar Blade For?

Stellar Blade's planet Earth lies ravaged and abandoned; skyscrapers puncture its murky sky like monuments of a fallen civilization. Humankind, driven from the planet by monstrous entities known as the Naytiba, now struggles for survival on distant space stations. However, one spark of hope remains: Eve, wielding the iconic Stellar Blade, descends upon the ruined Earth, determined to recover what's been taken.

Does Stellar Blade Remind You of Other Games?

Stellar Blade's post-apocalyptic setting, where remnants of civilization struggle for survival amidst monstrous threats, may recall The Last of Us' dark and atmospheric world-building. (In case you haven't played it - and I don't think there are many gamers left who haven't by now - you really should pick up The Last of Us.) Both games feature an Earth that has become hostile to humanity's efforts at survival, with abandoned skyscrapers contributing to the feelings of desolation and despair that run through both games.

Exploration awaits in Stellar Blade, as Eve navigates treacherous terrain with the help of her trusty rope swing.

Stellar Blade adds its own special twist by weaving elements of space travel and the cosmic entities known as the Naytiba into its sci-fi post-apocalyptic narrative, creating an engaging experience for players. Eve's arrival, Stellar Blade in hand, sets in motion an extraordinary journey focused on recovering what has been lost against impossible odds. The combat is very different between the two games (and we will talk about it soon).

Post-Apocalyptic Survival and Cosmic Exploration

Stellar Blade carves out its own identity in the action-adventure genre by mixing elements of post-apocalyptic survival with cosmic exploration, building a compelling story around Eve's quest for redemption. Is the story engaging? It is engaging enough, but quite linear, lacking any depth or mystery - pretty commercial, if you ask me. (Check out other games with similar combat in our article about exclusive PS4 and PS5 games.) The Last of Us is a clear masterpiece, whereas Stellar Blade is nothing of the kind - just a slightly above-average single-player game that has superlative elements but also shortcomings in many areas.

Immerse yourself in the sensory experience of Stellar Blade with the PS5's DualSense controller's haptic feedback technology.

Making an Informed Decision

So the question stands: is Stellar Blade worth spending space-bucks on? The answer will depend entirely on what appeals to you in a videogame experience.

Stellar Blade for Combat Connoisseurs:

Stellar Blade offers an engaging combat system, rewarding skill and mastery through parries, evasions, and the Beta/Burst gauges used to unleash devastating attacks - providing an immensely enjoyable gameplay experience. Without a doubt, the main reason to buy Stellar Blade is to experience its challenging, Sekiro-like combat, the diversity of enemies, and the tough bosses. The setting is totally different, of course, but the defense-driven approach of the award-winning Sekiro: Shadows Die Twice is instantly recognizable.

Experience the adrenaline-fueled action of Stellar Blade as Eve confronts a formidable foe amidst stunning visual effects.

Stellar Blade for Exploration Enthusiasts:

Stellar Blade offers an intriguing post-apocalyptic world just begging to be explored, perfect for anyone seeking adventure. Vast landscapes filled with crumbling cities and treacherous terrain offer ample thrills and spills. For the truly daring spirit, this game may provide more than enough excitement!

Stellar Blade for Story Seekers:

Stellar Blade may leave some gamers wanting more, especially if narrative innovation is your top priority, because this is a very linear tale lacking depth, character development, and more. While its worldbuilding is exceptional, predictable plot points and shallow characters may cause story-focused players to disengage quickly, so they are better served looking elsewhere.

Behind-the-scenes glimpse of the development process of Stellar Blade, showcasing the dedication of Shift Up and Sony Interactive Entertainment.

Conclusion

The story develops as Eve attempts to connect with what remains of human civilization while searching for four hypercores guarded by powerful bosses, each unlocking another piece of the puzzle: the secrets behind the apocalyptic event, the exodus to space, and the origins of the Naytiba. Combat is the main dish when it comes to Stellar Blade, and even if derivative, it is recognizably similar to that of Sekiro: Shadows Die Twice. After all, if you are going to copy someone, choose that someone carefully. Although somewhat predictable in plot development and execution, the narrative doesn't lack charm, and Eve is quite an attractive protagonist. Exploring forgotten settlements and discovering the stories of past inhabitants adds emotional depth, and the crumbling buildings and abandoned street corners make the world feel lived-in rather than merely staged under tragedy's heavy weight.

Preview: Ama’s Lullaby (PC – Steam) ~ Hacking The Point-And-Click Genre

By: NekoJonez

Itch.io | Steam

Back in 2017, a developer from France contacted me about their new point-and-click sci-fi game in the works called Ama’s Lullaby. But it’s more than a point-and-click game; it’s also a hacking game. Now, this developer works on this game in his free time after his day job and with a small budget. Sometimes these passion projects die due to lack of time, money, motivation and/or just interest. But it looks like Ama’s Lullaby isn’t going to be one of those projects. Earlier this year, a demo of the game was released. I asked the developer if he was interested in streaming this demo with us, and he was. Here is a link to part 1 & part 2. Sadly, due to overheating of Klamath’s computer, it had to be cut into two parts and the ending was quite abrupt. Now, that stream was almost a month ago, and I still wanted to write an article about this game. So, what do I think of the demo? Am I still as impressed as when I saw it during the livestream, or is my opinion going to change now that I’m not backseating and am playing it myself? Let’s find out in this article.

Hacking The Point-And-Click Genre

The story of this demo is quite simple. Ama enters the police station and gets new tasks to aid the space colony she is in. Overall, the story is told more naturally compared to other games. Mostly, we get an opening where the main story of the game is teased, but not in this game. During interactions with the others, we get little glimpses into the world and story. Now, this is a tricky thing to pull off, since either you have to force the player to interact with everybody or risk that some players miss potentially important information. On the other hand, info dumping on the player isn’t always the best solution.

Now, in this space colony, there is an AI that makes a lot of decisions. It turns out that Ama and her dad have created that AI and the software to interact with it. She is one of the ambassadors of the human race. But it doesn’t take too long before strange things start to happen, and you notice that not everything is what you think it is.

The dialogue in this game appears above the characters’ heads. When a line is in italics, you know it’s a thought. Not only that, simple sound effects play to give the dialogue some additional punch and to quickly differentiate between thoughts and spoken lines. Currently, there are plans to fully voice act this game, but if those plans fall through, I’d recommend the developer use different sound effects for different emotions.

Now, the game cold opens with an old-school terminal as its main menu. This might be a bit jarring for new players who aren’t used to working with the command line. Personally, as somebody who knows how a command line works, I really love this touch, since this interface is also present in a lot of the game’s puzzles. It fits the atmosphere and style of the game like a glove. To be honest, I think that with some minor polishing, it would be perfect.

There are a few things I would change. First, I’d get rid of the case-sensitive commands. The main reason is that a lot of people have the default keybinding for the Steam overlay, which is… Shift+Tab. Since I love using autocomplete, it got pretty frustrating when I was holding my Shift key, tabbed to autocomplete, and my Steam overlay popped up.

A second thing I’d change is to allow the user to enlarge the font of terminal. The reason for that is because it doesn’t really scale pretty well with people who are using larger monitors.

Now, since this game is still in development and this is just the demo… I can totally excuse that some features aren’t present, like pressing the up arrow to get the last command, or the help feature not always working correctly in all menus. For example, if you are in the options menu and use “QUALITY HELP”, you get information; but if you first enter “QUALITY” to see the options you can input and then “QUALITY HELP”… it bugs out and doesn’t give you any help at all. Another small bug I noticed is that, for some reason, the Enter key on my numpad didn’t submit the command but always selected the whole text. But hey, during the stream the developer said that some of these things are on the list to get fixed for the full game.

Cyberpunk Sci-fi

I was impressed with the visuals when we were playing this game on stream. While I haven’t played the Blade Runner games yet, I have seen a lot of people talk about them and I know their visual style. This game really mimics that style extremely well. You really feel like you are in a sci-fi world whose technology is older than our own.

Also, something I really love in this demo is that everything is one big space. You don’t really have “screens” in this game, like in a Broken Sword game, for example. No, the camera swings and follows Ama as if she were in a movie. This sells the illusion of the area even more. While I’d sometimes have loved to see the details the developer put into every scene more up close, the more zoomed-out look gives you a better overview of the scene. It almost feels like you are watching Ama through security cameras or a drone camera, in a way.

The biggest thing I want to point out in terms of the visuals is Ama herself. The game goes for a dark and dimly lit environment, and with a main character who’s wearing black clothes, it’s extremely easy to lose Ama in the scenery. It wouldn’t surprise me if they gave the main character in Blade Runner a brown coat for that very reason, so you can quickly spot the protagonist without breaking the visual style of the game. But overall, this is almost a nitpick, since I didn’t lose Ama in the scene very often. It mostly happened when I was replaying parts of the demo while writing this article.

Now, I want to talk about the command line. The tutorial in this game on how a command line works is actually well done. I love how it doesn’t hold the player’s hand or try to force them to input the right thing. It really lets you experiment and learn how it works. All the while, a small guide on how things work is displayed at the top of your screen.

This whole command line mechanic is a breath of fresh air. It’s impressive how true to reality the command line is. While it takes some creative liberties here and there to fit into the game world, overall it might as well be a real command line interface running inside the game.

In this demo, you have a few tasks to complete. Most of these tasks involve fixing various things. One task is highly dependent on the command line. This was quite easy for me since, like I said, I know how to use a command line. Visually, it’s a bit tricky during the tutorials in the network view, since it’s not really clear how you can scroll up or down while in that view. Using the mouse mostly scrolls around the network map. I think an easier way to scroll up and down in the terminal could be useful there. Also, when you have to input a command that’s longer than the terminal screen, I’d wrap it onto a second line, since that’s how real terminals work. Or scroll the whole line, instead of letting the username stay put.

Final thoughts and future wishes

Overall, the demo is quite short. If you don’t know what you are doing and explore everything, it will take you at most two hours to complete. But if you know what to do, you can finish it in 10 minutes. Yet, the impression I got from the stream hasn’t changed. This game has quite a lot of potential, but it needs some polish here and there.

There are some minor things, like some objects not being solid and Ama being able to run through them, but there are also more major issues. The elevator bug the developer Marc mentioned during the stream happened to me: Ama didn’t go up with the elevator and got stuck. I think it was related to another bug I encountered where the head of IT got stuck in an animation loop. Somehow it was as if Ama were near him while she was walking in other parts of the station. I don’t know what exactly triggered it; I have replayed the demo three times to try and get it back into that bugged state, but I was unable to find the cause or replicate it.

Currently, there is only one way to save the game: at the several terminals scattered throughout this demo. You only have one save slot, and there is no manual saving, so remember that. You can also only load from the main menu.

Reviewing a demo is always tricky to do, especially if the game is still in development, since you never know for sure what the final game is going to look like. Yet, this demo is extremely promising. The puzzles were a lot of fun, and after playing the demo, I had the same feeling Klamath had at the end of the stream: I want to play more of this, and more games like it.

I could start talking about how the sound effects are amazing but there isn’t enough music yet. On the one hand, the lack of music really sells the atmosphere of the game; on the other hand, the music during the terminal sections is really enjoyable. But I’m sure that in the full game we shall see more music.

Just as I’m convinced that when the full game releases and the players find bugs, they will get fixed. While I was talking with Marc during the stream, I really felt his passion for creating this game and how he wants to make it the best experience it can be for his players. So, if you are interested in this game after reading this article in any way, shape, or form, I highly recommend that you give it a chance: play the demo for yourself and give the developer feedback via his Discord or any of his other official channels.

I can’t wait to see and play the final game. Various things were revealed and talked about during the stream, and I have to say, it was an amazing experience and conversation. I was already interested in this game when it was on Kickstarter, but now that I have played the demo, I think we are onto a winner here. This game puts an interesting twist on the point-and-click genre and will appeal to anyone who enjoys adventure games with a sci-fi influence, or just enjoys more unique puzzle games.

I want to thank Marc for reaching out to me and talking about his unique project. You can be sure that when the full version releases… Klamath and I will play through it and most likely stream it. And I’ll write a more in-depth article on the final product, since I might not have gone very in-depth in this article; I want to hold off on my final opinions until the game is fully released.

If you have read my article, played the demo and/or watched our stream, I’m curious, what did you think about this game? Feel free to talk about it in the comments. Am I overhyping the game or overlooking flaws? Or is there something you’d love to see in the full game?

And with that said, I have said everything about the game I want to say for now. I want to thank you for reading this article and I hope you enjoyed reading it as much as I enjoyed writing it. I hope to be able to welcome you in another article but until then, have a great rest of your day and take care.

Head in the clouds, boots on the ground.

Self-hosted infrastructure is the first step toward voluntary apotheosis.

–Unknown

When people think of The Cloud(tm), they think of ubiquitous computing. Whatever you need, whenever you need it’s there from the convenience of your mobile, from search engines to storage to chat.  However, as the latest Amazon and Cloudflare outages have demonstrated all it takes is a single glitch to knock out half the Internet as we know it. 

This is, as they say, utter bollocks.  Much of the modern world spent a perfectly good day that could have been spent procrastinating, shitposting, and occasionally doing something productive bereft of Slack, Twitter, Autodesk, Roku, and phone service through Vonage.  While thinking about this fragile state of affairs in the shower this morning I realized that, for the somewhat technically inclined and their respective cohorts there are ways to mitigate the risks of letting other people run stuff you need every day.  Let us consider the humble single board computer, computing devices the size of two decks of cards at most, a €1 coin at the very least.  While this probably won’t help you keep earning a paycheque it would help you worry less about the next time Amazon decides to fall on its face.

Something I never really understood was the phenomenon of people recreating the data center in miniature with stacks of Raspberry Pis shaped like a scaled down telecom rack.  There’s nothing wrong with that – it’s as valid a way of building stuff out as anything else.  However… do you really have to go this route?  Single board computers are small enough that they can be placed anywhere and everywhere in such a manner that they’re practically invisible.  At any time you could be surrounded by a thin fog of computing power, doing things you care about, completely out of sight and out of mind, independent of AWS, Azure, or any other provider’s health.  That fog could be as large or as small as you need, expanded only as needs dictate, cheap to upgrade or replace, and configured to automatically update and upgrade itself to minimize management.  Some visionaries imagine a world in which any random thing you lay eyes upon may have enough inexpensive smarts built in to crunch numbers – why not take a step in that direction?

Starting with a relatively powerful wireless router running OpenWRT for maximum customizability and stability might be a good place to start.  Speaking only as someone who’s used it for a couple of years, OpenWRT stuff is largely “set it and forget it,” with only an up-front investment of time measured in an afternoon.  Additionally, using it as your home wireless access point in no way compromises getting stuff done every day.  If nothing else, it might be more efficient than the crappy wireless access point-cum-modem that your ISP makes you use.

Now onto the flesh and bones of your grand design – where are you going to put however many miniature machines you need?  Think back to how you used to hide your contraband.  The point of this exercise isn’t to have lots of blinky lights all over the place (nerd cred aside), the point is to have almost ubiquitous computing power that goes without notice unless the power goes out (in which case you’re up a creek, no matter what).  Consider the possibility of having one or two SBCs in a hollowed out book, either hand-made or store bought (“book safes”) with holes drilled through the normally not-visible page side of the book to run power lines to a discrete power outlet.  Think about using a kitschy ceramic skull on your desk containing a Raspberry Pi 0w, miniature USB hub, and a couple of flash drives.  How about a stick PC stuck into the same coffee cup you keep your pens and pencils in?

Maybe a time will come when you need to think bigger.  Let’s say that you want to spread your processing power out a bit so it’s not all in the same place.  Sure, you could put a machine or two at a friend’s house, your parents’ place, or what have you.. but why not think a little bigger?  Consider a RasPi with a USB cellular modem, a pre-paid SIM card, and SSH over Tor (to commit the odd bit of system administration) hanging out on the back of your desk at the office (remember those?) or stashed behind the counter of a friendly coffee shop.

Which moves us right along to the question, what do you actually run on a personal cluster?  Normally, people build personal clusters to experiment with containerization technologies like Docker (for encapsulating applications in such a way that they “think” they’re all by their lonesome on a server) and Kubernetes or the cut-down k3s (for doing most of the sysadmin work of juggling those containers).  Usually a web panel of some kind is used to manipulate the cluster.  This is quite handy due to the myriad of self-hosted applications which happen to be Dockerized.  The reason for such a software architecture is that the user can specify that a containerized application should be started with a couple of taps on their mobile’s screen and Kubernetes looks at its cluster, figures out which node has enough memory and disk space available to install the application and its dependencies, and does so without further user intervention.  Unfortunately, this comes at the expense of having to do just about everything in a Dockerized or Kubernetes-ized way.  In a containerized environment things don’t like to play nicely with traditionally installed stuff, or at least not without a lot of head scratching, swearing, and tinkering.

We can think much bigger, though.  Say we’re setting up a virtual space for your family.  Or your affinity group.  Or a non-profit organization.  There are some excellent all-in-one systems out there like Yunohost and Sandstorm which offer supported applications galore (Sandstorm is only available for the x86-64 platform right now, though there’s nothing that says that you can’t add the odd NUC or VPS to your exocortex) which can be yours for a couple of mouse clicks.

How about easy-to-use storage?  It’s always good to have someplace to keep your data as well as back it up (you DO make backups, don’t you?).  You could do a lot worse than a handful of 256 GB or 512 GB flash drives plugged into your fog and tastefully scattered around the house.  To access them you can let whatever applications you’re running do their thing, or you can stand up a copy of MinIO on each server (also inside of Docker containers) which, as far as anything you care about is concerned, is just Amazon’s S3 with a funny hostname.
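A sketch of what standing up one of those MinIO nodes might look like – the credentials, mount path, and port below are placeholders, not recommendations:

```shell
# Generate a helper script that runs MinIO in Docker, serving a flash
# drive mounted at /mnt/usb0 as S3-compatible storage on port 9000.
cat > start-minio.sh <<'EOF'
#!/bin/sh
docker run -d --name minio \
  -p 9000:9000 \
  -v /mnt/usb0:/data \
  -e MINIO_ROOT_USER=changeme \
  -e MINIO_ROOT_PASSWORD=changeme-too \
  minio/minio server /data
EOF
chmod +x start-minio.sh

# Anything that speaks S3 can then point at http://that-host:9000
# instead of s3.amazonaws.com.
```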

Pontification about where and what safely behind us, the question that now arises is: how do we turn all this stuff into a fog?  If you have machines all over the place, and some of them aren’t at home (which means that we can’t necessarily poke holes in any intervening firewalls), how can the different parts of the cluster talk to each other?  Unsurprisingly, such a solution already exists in the form of Nebula, which Slack invented to do exactly what we need.  There’s a little up-front configuration that has to be done, but once the certificates are generated and Nebula is installed on every system, you don’t have to mess with it anymore unless there are additional services that you want to expose.  It helps to think of it as a cut-down VPN which requires much less fighting and swearing but gives you much more “it just works(tm)” than a lot of things.  Sandstorm on a VPS?  MinIO running on a deck of cards at your friendly local gaming shop?  Nebula can make them look like they’re right next to one another, no muss, no fuss.  Sure, you could use a Tor hidden service to accomplish the same thing (and you really should set up one or two, if only so you can log in remotely) but Nebula is a much better solution in this regard.
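That up-front configuration boils down to minting certificates and dropping a config file on each box.  A rough sketch – the network name, hostnames, and addresses are made up, and Nebula’s own example config documents the full set of options:

```shell
# Mint a certificate authority, then a certificate for one node on
# the overlay network (run where the nebula-cert binary lives):
#   nebula-cert ca -name "my-fog"
#   nebula-cert sign -name "skull-pi" -ip "192.168.100.2/24"

# A minimal per-node config.  203.0.113.9 stands in for the public
# address of whichever node acts as the lighthouse.
cat > nebula-config.yml <<'EOF'
pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/skull-pi.crt
  key: /etc/nebula/skull-pi.key
static_host_map:
  "192.168.100.1": ["203.0.113.9:4242"]
lighthouse:
  am_lighthouse: false
  hosts:
    - "192.168.100.1"
listen:
  host: 0.0.0.0
  port: 4242
tun:
  dev: nebula1
firewall:
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: 22
      proto: tcp
      host: any
EOF
```

Copy the certs and config to each machine, start the nebula daemon, and every node sees every other node at its 192.168.100.x overlay address, firewalls be damned.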

Setting up this kind of infrastructure might take anywhere from a couple of days to a couple of weeks, depending on your level of skill, ability to travel, availability of equipment, and relative accessibility of where you want to stash your stuff.  Of course, not everybody needs such a thing.  Some people are more than happy to keep using Google applications, Microsoft 365, or what have you.  While some may disagree with or distrust these services (with good reason), ultimately people use what they use because it works for them.  A certain amount of determination is required to de-FAANGify one’s life, and not everyone has that need or use case.  Still, it is my hope that a few people out there give the idea serious consideration.

By day the hacker known as the Doctor does information security for a large software-as-a-service company (whom he does NOT speak for), with a background in penetration testing, reverse engineering, and systems architecture. By night the Doctor doffs his disguise, revealing his true nature as a transhumanist cyborg (at last measurement, comprised of approximately 30% hardware and software augmentations), technomancer (originally trained in the chaos sorta-tradition), cheerful nihilist, lovable eccentric, and open source programmer. His primary organic terminal has been observed presenting at a number of hacker conventions over the years on a variety of topics. Other semi-autonomous agents of his selfhood may or may not have manifested at other fora.  The Doctor is a recognized ambassador of the nation of Magonia, with all of the rights and privileges thereof.

The post Head in the clouds, boots on the ground. appeared first on Mondo 2000.



