Katowice, Poland – August 8, 2024 – Anshar Publishing, an indie boutique publisher renowned for its dedication to unique and innovative games, is thrilled to announce the addition of Sigilfarer to its portfolio. This exciting roguelite adventure, where every decision rests on the roll of your dice, is set to captivate players with its dynamic dice-building system and sprawling, procedurally generated world.
Sigilfarer reimagines the deck-building experience, allowing players to forge their own fate through a customizable dice system. In this game, your party is represented by a set of dice, with each face shaped by the equipment you choose.
Every sword, shield, and piece of armor becomes a die face, creating a unique blend of attacks, buffs, and strategic possibilities. Navigate through ever-changing dungeons, face powerful foes, and uncover the secrets of the sigils in a world teeming with lore and mystery.
Sigilfarer is scheduled for release in the first half of 2025. Stay tuned for more updates and prepare to embark on an epic journey where every roll of the dice shapes your destiny. For more information, please visit the Anshar Publishing Steam page or contact us at press@ansharpublishing.com.
Hospital staff and community members held a protest in front of Carney Hospital in Boston on August 5 after Steward announced it will close the hospital. "Ralph" refers to Steward's CEO, Ralph de la Torre, who owns a yacht. (credit: Getty | Suzanne Kreiter)
As the more than 30 hospitals in the Steward Health Care System scrounged for cash to cover supplies, shuttered pediatric and neonatal units, closed maternity wards, laid off hundreds of health care workers, and put patients in danger, the system paid out at least $250 million to its CEO and his companies, according to a report by The Wall Street Journal.
The newly revealed financial details bring yet more scrutiny to Steward CEO Ralph de la Torre, a Harvard University-trained cardiac surgeon who, in 2020, took over majority ownership of Steward from the private equity firm Cerberus. De la Torre and his companies were reportedly paid at least $250 million since that takeover. In May, Steward, which has hospitals in eight states, filed for Chapter 11 bankruptcy.
Critics—including members of the Senate Committee on Health, Education, Labor, and Pensions (HELP)—allege that de la Torre stripped the system's hospitals of assets, siphoned payments from them, and loaded them with debt, all while reaping huge payouts that made him obscenely wealthy.
With highly competitive time-to-market and time-to-volume windows, IC suppliers need to be able to release new products to production (NPI) in a timely manner with competitive manufacturing metrics. Manufacturing yield, test time, and quality are important metrics for an NPI-to-manufacturing safe launch. A powerful yield management system is crucial to achieving the goal metrics. In this paper, recommended yield management system selection criteria, a data integration methodology, and innovative ways of using the selected yield management system to improve safe-launch efficiency are introduced. Three examples of using a cloud yield tool to expedite yield learning, test time reduction (TTR), and quality enhancement are presented.
Itch.io – Steam
Back in 2017, a developer from France contacted me about their new point-and-click sci-fi game in the works called Ama’s Lullaby. But it’s more than a point-and-click game; it’s also a hacking game. Now, this developer works on this game in his free time after his day job and with a small budget. Sometimes these passion projects die due to lack of time, money, motivation and/or just interest. But it looks like Ama’s Lullaby isn’t going to be one of those projects. Earlier this year, a demo of the game was released. I asked the developer if he was interested in streaming this demo with us, and he was. Here is a link to part 1 & part 2. Sadly, due to Klamath’s computer overheating, it had to be cut into two parts and the ending was quite abrupt. Now, that stream was almost a month ago, and I still wanted to write an article about this game. So, what do I think of the demo? Am I still as impressed as when I saw it during the livestream, or is my opinion going to change now that I’m not backseating and playing it myself? Let’s find out in this article.
Hacking The Point-And-Click Genre
The story of this demo is quite simple. Ama enters the police station and gets new tasks to aid the space colony she is in. Overall, the story is told more naturally compared to other games. In most games, we get an opening where the main story is teased, but not in this one. During interactions with the others, we get little glimpses into the world and story. Now, this is a tricky thing to pull off, since you either have to force the player to interact with everybody or risk that some players miss potentially important information. On the other hand, info dumping on the player isn’t always the best solution either.
Now, in this space colony, there is an AI that makes a lot of decisions. It turns out that Ama and her dad created that AI and the software to interact with it. She is one of the ambassadors of the human race. But it doesn’t take too long before strange things start to happen, and you notice that not everything is what you think it is.
The dialogues in this game appear above the characters’ heads. When the text is in italics, you know it’s a thought. Not only that, simple sound effects accompany the dialogue to give it some extra punch and to quickly differentiate between thoughts and spoken lines. Currently, there are plans to fully voice act this game, but if those plans fall through, I’d recommend the developer use different sound effects for different emotions.
Now, the game cold opens with an old-school terminal as a main menu. This might be a bit jarring for new players who aren’t used to working with the command line. Personally, as somebody who knows how a command line works, I really love this touch, since this interface is also present in a lot of puzzles in the game. It fits the atmosphere and style of the game like a glove. To be honest, I think that with some minor polishing, it would be perfect.
There are a few things I would change. First, I’d get rid of the case-sensitive commands. The main reason is that a lot of people have the default keybinding for the Steam overlay, which is… Shift+Tab. Since I love using autocomplete, it got pretty frustrating when I was holding my Shift key, pressed Tab to autocomplete, and my Steam overlay popped up.
A second thing I’d change is allowing the user to enlarge the font of the terminal. The reason is that it doesn’t scale very well for people who are using larger monitors.
Now, since this game is still in development and this is just the demo… I can totally excuse that some features are missing. Like pressing the up arrow to get the last command, or the help feature not always working correctly in all menus. For example, if you are in the options menu and use “QUALITY HELP”, you get information, but if you first write “QUALITY” to see the options you can input and then “QUALITY HELP”… It bugs out and doesn’t give you help at all. Another small bug I noticed is that, for some reason, the Enter key on my numpad didn’t submit the command but always selected the whole text. But hey, during the stream the developer said that some of these things are on the list to get fixed for the full game.
Cyberpunk Sci-fi
I was impressed with the visuals of the game when we were playing it on stream. While I haven’t played the Blade Runner games yet, I have seen a lot of people talk about them and I know their visual style. This game mimics that style extremely well. You really feel like you are in a sci-fi world built on technology that’s older than our own.
Also, something I really love in this demo is that everything is one big space. You don’t really have “screens” in this game, like in a Broken Sword game for example. No, the camera swings and follows Ama as if she were in a movie. This sells the illusion of the area even more. While I’d sometimes have loved to see the details the developer put in every scene more up close, the more zoomed-out look gives you a better overview of the scene. It almost feels like you are watching Ama through security cameras or a drone camera.
The biggest thing that I want to point out in terms of the visuals is Ama herself. The game goes for dark and dimly lit environments, and with a main character who’s wearing black clothes, it’s extremely easy to lose Ama in the scenery. It wouldn’t surprise me if they gave the main character in Blade Runner a brown coat for that reason, so you can spot the main character more quickly without breaking the visual style of the game. But, overall, this is almost a nitpick, since it didn’t happen a lot that I lost Ama in the scene. It mostly happened when I was replaying parts of the demo while writing this article.
Now, I want to talk about the command line. The tutorial in this game on how a command line works is actually well done. I love how it doesn’t hold the player’s hand and try to force them to input the right thing. It really lets you experiment with it and learn how it works. All the while, a small guide on how things work is displayed at the top of your screen.
This whole command line mechanic is a breath of fresh air. It’s impressive how true to reality the whole command line is. While it takes some creative liberties here and there to make it fit into the game world, overall, it might as well be a real command-line interface running inside the game.
In this demo, you have a few tasks to complete. Most of these tasks involve fixing various things. One task is highly dependent on the command line. This was quite easy for me since, like I said, I know how to use a command line. Visually, it’s a bit tricky during the tutorials in the network view, since it’s not really clear how you can scroll up or down in the terminal there. Using the mouse mostly scrolls around the network map. I think an easier way to scroll up and down in the terminal would be useful there. Also, when you have to input a command that’s longer than the terminal screen, I’d wrap it onto a second line, since that’s how it works in real life. Or scroll the whole line, and don’t let the username prompt stay in place.
Final thoughts and future wishes
Overall, the demo is quite short. If you don’t know what you are doing and explore everything, it will take you roughly two hours to complete. But if you know what to do, you can finish it in 10 minutes. Yet, the impression I got from the stream hasn’t changed. This game has quite a lot of potential, but it needs some polish here and there.
There are some minor things, like some objects not being solid and Ama being able to run through them, but there are also more major issues. The elevator bug the developer Marc mentioned during the stream happened to me: Ama didn’t go up with the elevator and she was stuck. I think it was related to another bug I encountered where the head of IT got stuck in an animation loop. Somehow it was as if Ama was near him while she was walking in other parts of the station. I don’t know what exactly triggered that, and I have replayed the demo three times to try and get it back into that bugged state, but I was unable to find the cause and unable to replicate it.
Currently, there is only one way to save the game: at the several terminals scattered through this demo. You only have one save slot, and there is no manual saving outside of those terminals. So, remember that. You can also only load from the main menu.
Reviewing a demo is always tricky to do, especially when the game is still in development, since you never know for sure what the final game is going to look like. Yet, this demo is extremely promising. The puzzles were a lot of fun, and after playing the demo, I had the same feeling that Klamath had at the end of the stream: I want to play more of this, and more games like it.
I could start talking about how the sound effects are amazing but there isn’t enough music yet. On the one hand, the lack of music really sells the atmosphere of the game; on the other hand, the music during the terminal sections is really enjoyable. But I’m sure that in the full game we shall see more music.
Just like I’m convinced that when the full game releases and the players find bugs, they will get fixed. While I was talking with Marc during the stream, I really felt the passion for creating this game and how he wants to make it the best experience it can be for his players. So, if you are interested in this game after reading this article in any way, shape or form, I highly recommend that you give it a chance, play the demo for yourself and give the developer feedback via his Discord or any other of his official channels.
I can’t wait to see and play the final game. Various things got revealed and talked about during the stream and I have to say, it was an amazing experience and conversation. I was already interested in seeing this game when it was on Kickstarter, but now that I have played the demo, I think we are onto a winner here. This game will put an interesting twist on the point-and-click genre and will be interesting to anyone who enjoys adventure games with a sci-fi influence or just enjoys more unique puzzle games.
I want to thank Marc for reaching out to me and talking about his unique project. You can be sure that when the full version releases… Klamath and I will play through it and most likely stream it. And I’ll write a more in-depth article on the final product. I might not have gone very in-depth in this article, but I want to hold off on my final opinions until the game is fully released.
If you have read my article, played the demo and/or watched our stream, I’m curious, what did you think about this game? Feel free to talk about it in the comments. Am I overhyping the game or overlooking flaws? Or is there something you’d love to see in the full game?
And with that said, I have said everything about the game I want to say for now. I want to thank you for reading this article and I hope you enjoyed reading it as much as I enjoyed writing it. I hope to be able to welcome you in another article but until then, have a great rest of your day and take care.
Steam store page – Wikipedia – Official website
So, when I’m writing this, it is 2024. I turned 31 years old back in February. I have loved playing video games and surfing the internet since I was a young lad. Besides that, I also have a fascination for anything that has to do with dreams and their meanings. And then a game called Hypnospace Outlaw turns up on my radar. A game that promises to bring back the early years of the internet that I remember. Not only that, we are going to have to moderate the internet with a new technology that allows people to surf the internet while they are dreaming. We play as an unnamed enforcer to keep the internet safe, and on top of that, we can create our own pages and mod this game easily. But before we start spending time on that, let’s find out if the base game is actually good and whether it’s worth playing or something we should skip. Also, feel free to leave your thoughts and/or opinions on this article and/or the game in the comment section down below. Besides, dear enforcer and MerchantSoft, this isn’t harassment, this is a fair review/critique of the game. Removing this from HypnoOS isn’t the solution.
Dreaming Up Nostalgic Investigations
In this game, you play as an unnamed enforcer for MerchantSoft, a company that developed a headband that allows users to surf the web in their dreams. Your goal is to clean up the HypnoSpace for everybody. You start in late 1999, where your first case is assigned. After that, you are left to your own devices, and you can explore the internet by yourself. And let me tell you, there is a lot of internet to explore.
The story of this game is fascinating. You get to dive into and explore various pages on the internet about all sorts of things, set long before social media was a thing, when everybody had a website for their own creations. The HypnoSpace has several zones, each with their own theme. If you remember AOL, you will know what I’m talking about.
If you want to get the most out of this game, I highly advise you to take your time with it. Don’t rush it at all. This game is sadly rather short if you only follow the main story. It’s only 6 hours long, and shorter if you know what you are doing. I mean, the speedruns are only around 11 minutes. The strength of this game is the depth it has. This game has three main chapters, and there are clear triggers that separate them.
The deeper you dig and the more you read up, the more interesting lore gets revealed. I actually started a second playthrough to try and find the things I missed. And honestly, this game is one that gets ruined by playing it with a guide in any sort or form. Do not play this game with a guide; it’s a lot less rewarding in your first or second playthrough. The wonder of getting lost in all of these pages is just so nostalgic.
Now, while I was playing, I was wondering if it would appeal to the younger players out there. I’m somewhat on the fence about that. While it tackles a lot of subjects that are still somewhat relevant, I honestly think that it’ll mostly click with those who grew up with the internet of the ’90s to early ’00s. With that said, I think it still might click with younger people, but know that the internet was very different back then.
Point-And-Click Detective
This game is a point-and-click adventure game in every sense of the word. You get a case, and you have to explore the internet to see if anyone broke the rules or not.
Each infraction you find will reward you with HypnoCoin. You can use these coins to buy various things in the Hypnospace, ranging from stickers, wallpapers, themes and applications to so much more. But be careful: it’s quite possible that some of these downloads are infected with malware. And back then, malware was a lot more visual and less aimed at serving you a lot of ads or stealing your information.
The controls of this game are quite easy. You mostly click with your mouse and sometimes input things in the search bar. If you know how to do basic things with a computer, you’ll very quickly find your way around this game as well. While I sometimes struggled with opening apps, I didn’t have too much trouble with the controls. Thankfully, there are some options to tweak the controls to your liking, like disabling double-clicking to open apps. But I’m a Windows user, and double-clicking to open apps is just hardwired into my brain.
Visually, this game really looks like you are playing with the old internet. When I noticed that there was a mod that changed the OS into Windows 95, oh boy, I was sold. There are various themes for the OS in this game, and they go from amazing to silly. There is even a fast food theme. Now, if you read that this game was mostly created by a team of 5 people, it’s even more impressive. Not only that, one of the main designers of Dropsy is part of the team.
The creativity of this game never ceased to amaze me. Let me continue on the trend of the visuals and say that the little details in how the webpages look are just so realistic. The little typos here and there, the rabbit holes you can jump down, the crazy visuals on various pages… Even the “help me, I can’t remove this” and “Test 1 2 3”… It made me crack up and remember my early days, when I used to write webpages in plain HTML with barely any coding knowledge as a young teen.
While I knew that wiggling the mouse sped up the loading of the webpages, I just never really did it. I just enjoyed the webpages loading slowly and having that experience from my teenage years again, before Facebook or any other big social media started to take over. Yes, even before MySpace. While I only experienced the late “pre-social media internet”, I do have amazing memories of it.
On top of that, you have the amazing wallpapers and sticker packs you can buy and play around with. With these, you can really make your desktop your own. But something that really triggered memories for me were the viruses you can encounter. As a young teen, I was a lot less careful about what I downloaded, and seeing the visual mess some viruses create in this game brought back some nasty memories.
Memories like how one time, I got a very nasty variant of the SASSER worm, and each time I installed something new, my computer would lock up and crash. Yes, even when you tried to re-install Windows, it locked up and crashed the installer. After a lot of digging, I found that it was caused by a program that started on boot, and I had to unscrew my hard drive, connect it to somebody else’s computer and remove the start-up file from there. I also had a piece of malware that looked like the ButtsDisease virus in this game, which started to change all the text on a webpage to another word. Oh man, those were the days.
So, during your investigations you can encounter various things, like people breaking the rules, and you have to report those. You mostly need to focus on one of 5 categories: copyright infringement, harassment, illegal downloads/malware, extra illegal commerce and illegal activity. Each law gets several infractions, and you do have to look for them. At one moment, I really had to take notes. Taking notes for this game is really helpful; there’s even a notes app in HypnoOS.
Sticking in your brain
Now, something I have to commend the developers for is that they also took accessibility into account. Something else I have to commend them for is the amount of content in this game, even when the main story is extremely short. I already talked about the visuals and how much I love them, but the music in this game is something else.
Some of the music tracks are really stuck in my mind, and I wouldn’t be surprised if, when I ever write another article in my favorite game music series, some of them pop up in it. Some tracks are real earworms. The music for some of the parody products in this game is so good that I wish they were real.
The music in this game is a mixture of various styles, and I find some of them catchier than others, but it’s really impressive how many styles there are in this game. If you know that this game has over 4 hours of music in it, that’s an amazing feat.
There is even a whole suite, released by one of the main developers of this game, where you can create your own pages, music and mods. It works only on Windows, and you can read more about it on the itch.io page of Jay Tolen here. There were even various community events where your stuff could appear as an Easter egg in the main game. These tools are now part of the main game and are in your installation folder.
Speaking about this, modding this game is extremely easy. There is even a built-in mod browser, and it’s a piece of cake to download and install mods. If you use the in-game mod menu, you don’t have to reboot the game for most mods to take effect. Just go to the main menu, choose the mods button and install the mods you want. Now, there are a lot more mods out there than just what you can find in the in-game mod browser, so check them out here.
The game has an autosave, but it doesn’t really show when the game gets saved. There are three save slots, so if you want to replay the game, you can pick another save slot. Now, if there is one mod I highly advise, it is the expanded endgame cases mod. This mod expands the game quite naturally and adds a lot of fun and additional challenge. But don’t read the description when you haven’t finished the game, since it contains quite a lot of spoilers.
This game can be quite tricky. Sometimes the solution isn’t the easiest to find. It’s even possible you won’t find the solution to every puzzle out there. Now, there is a built-in hint system in this game. It’s somewhat hidden to avoid breaking immersion, but for a small HypnoCoin fee, you can get a hint to progress. I really love this system, since I’d rather have you get a crowbar to pry yourself unstuck than grab a guide where it’s very easy to see other things and spoil the whole experience. The fun of this genre depends highly on solving the puzzles with what’s given to you. If you want to get a hint, just search for “hint”.
Overall, I have been extremely positive about this game, and I have to say that it is extremely well-made. I rarely found any moments where I thought, “this isn’t right.” But does that mean that this game doesn’t have any negatives? Well, sadly enough, there are a few things I didn’t like about my experience that I want to talk about.
First of all, I wish the default text-to-speech voice wasn’t the default language of your system if that language isn’t English. I’m from Belgium, and my system’s text-to-speech voice reads English extremely weirdly. Thankfully, I had the English sound pack installed on my computer, so after I went into the BIOS settings, I was able to quickly change it to the English one, and it sounds a lot more natural and better.
Secondly, this is an issue with point-and-click games in general, but the replay value just isn’t there. Once you have explored everything, you have seen everything. There are various mini-games, but those are quickly beaten. While I personally don’t really see this as a negative, since not every game needs high replay value and sometimes playing it once and having the whole experience engulf you is the idea… I want to mention it in case somebody is looking for replayable games.
Third, you can find more infractions than what’s required to close a case. While I can understand that the game doesn’t tell you how many other things are out there for immersion reasons, as somebody who wanted to experience everything, I was sometimes a bit annoyed that I couldn’t make sure I had found everything. If only there was an option you could toggle to see a completion percentage or something of that nature. Because of this, it’s possible to lock yourself out of achievements or content in this game.
Yes, this game has achievements and some of them are extremely tricky to get. It took me a lot of researching and exploring in HypnoSpace to find all the material. Thankfully, taking notes really helped me to find it all.
And the final thing is that the final chapters of this game feel a bit rushed and undercooked. One of the final cases is a breeze to solve if you have written notes during your playthrough, and it feels like there is content cut out of the game. The ending comes a bit out of nowhere, and if you didn’t explore everything or didn’t register certain things, the ending won’t make sense to you and it will lose its impact. Thankfully, the mod I shared earlier resolves this to a degree.
Those are all the negatives I can say about this game, in my honest opinion. When this game clicks with you, it clicks really well and doesn’t let go at all. But I’ll leave my final thoughts until after the summary of this review. So, I think it’s high time for that, since I have touched upon everything I wanted to in this review.
Summary
The bad:
-Text-to-speech should use English by default
-It’s possible to miss content or lock yourself out of it.
-The game is rather short.
-Rushed ending.
The good:
+ Amazing nostalgic trip
+ Amazing music
+ Fantastic writing
+ Easy to use mod tools
+ Great puzzles
+ Great controls
+ …
Final thoughts:
Hypnospace Outlaw is an amazing nostalgic point-and-click adventure trip through the late ’90s internet. This game might not be for everyone, but when it clicks… Oh boy, does it really click. Now, this is also a game you shouldn’t rush. The charm of this game is in all the little details and references that are hidden in the pages and the world building of this game.
While the game is rather on the short side for point-and-click games, I don’t see that as a big problem, to be honest. The journey that this game took me on was worth a lot more to me than having a long game. I think it would have lost its charm if this game kept going and going.
While I personally have more memories of the internet era that came right after this one, the developers are already working on the sequel to this game, called Dreamsettler. I honestly can’t wait to play that one, since the quality of this game is just top notch. The music is catchy, the visuals are amazing, and it all comes together in an amazing nostalgic trip that makes you want to play more.
There are some minor blemishes on this game, but you can work with them. Like I said before, when this game clicks, it really does click extremely well. I’d compare my experience with games like There Is No Game or Superliminal: amazing small titles that leave a lasting impact on those who play them. All of these games are passion projects that turned out amazing and get a recommendation from me.
If you enjoy playing unique point-and-click games and/or if you have nostalgia for the old ’90s internet, I highly recommend that you give this game a try. While this game is on multiple platforms, I highly recommend that you play the PC version, since it has mod support that gives you even more toys to play with and expands the game even more.
I had a blast with this game, and it’s a breath of fresh air for me. I’m angry at myself that I rushed my playthrough, but now I have installed several mods and I’m so going to replay this game after I have published this article. I also want to earn every achievement in this game, since I really want to see everything. I’m also extremely hyped for the sequel and I can’t wait to start playing that, since it is going to be an even bigger nostalgic trip for me than this game. And with the amazing set of developers behind this game, I think we’ll get another gem on our hands.
And with that said, I think it’s high time to wrap this article up. I want to thank you so much for reading and I hope you enjoyed reading it as much as I enjoyed writing it. I hope to be able to welcome you in another article, but until then, have a great rest of your day and take care.
Nintendo.co.uk microsite – Wikipedia page
Next year, I’ll have been blogging for 15 years. I have taken a look at quite a lot of games. Now, if you go back to the start of this blog, you might notice that I only started in May 2013. The three years before that, I wrote a personal life blog in my native language. I have since deleted that for personal reasons and started blogging in English in 2013. On my Dutch blog, I wrote an article about Another Code – Two Memories, but I haven’t written one for my English blog. Yet, I mentioned it in 2014 in a top 25 list of my favorite DS games of all time. I wrote an article on the Wii sequel, Another Code: R – A Journey Into Lost Memories, in 2013. While my old articles aren’t up to my personal standards anymore, I still leave them up to see the growth I have gone through over the years. Now, these two titles became classics in my eyes. When Cing went under, I didn’t hold out hope of these games ever seeing a sequel or a remake. But we got a big surprise this year. Suddenly, both games were coming to the Nintendo Switch and, not only that, they were remade from the ground up. Did these two games grow like I did in my writing, or are they something better left in the past? Well, that’s what I’m going to discover with you in this article. Feel free to leave a comment in the comment section with your thoughts and/or opinions on the game and/or the content of the article, but now, let’s dive right in.
Editorial note: shameless self-promotion: if you want to see me and my buddy Klamath playing through this title… We started streaming it. So, more opinions can be found in the streams. Here is a link to the playlist.
The Remembering Of A Remake
In this game, we follow the adventures of Ashley Mizuki Robins. In the first part of the game, Ashley gets a letter from her presumed-dead father asking her to come to Blood Edward Island to meet him, on the day right before her 14th birthday. On that journey, she meets a ghost named D, who has lost his memories.
In the second part of the game, we fast-forward two years. Ashley takes a camping trip to a lake. When she arrives at Lake Juliet, she gets flashbacks from when she was very little. Not only that, she meets a young boy whose father wanted to build a holiday resort at that lake but was blamed for its pollution.
Since this game is a point-and-click game and is quite dependent on its story, I’m not going to talk more about the story than the two small blurbs above. In terms of the story, this game tells a very heartfelt tale with very nice life lessons. The writing in this game is extremely well done. The build-up towards the ending of the story is very natural and stays true to the themes of the game. The biggest themes in this game are memories and history. Overall, this game is quite relaxing, and the story is never really in a rush to move forward.
New in this version is that there is voice acting. While not the whole game is voice acted, most of it is, and the non-voice-acted scenes have little grunts and vocalizations to indicate the emotions of what’s being told. I have to say that the voice acting in this game is fantastic. I wish the voice actors of this game had more of an online presence, since I had a hard time finding other works by them. The fact that these voice actors didn’t really promote that they worked on this game on their socials is a shame.
The voice acting in this game brings so much charm to it. For this article, I replayed parts of the original DS and Wii games, and I kept hearing those characters talk in the voices of the remakes. They fit the characters like a glove, which is a hard thing to do, since with voiceless characters everybody has their own voice for them in their head, and that doesn’t always match up with the official voice acting.
Now, in terms of differences between the original games and this remake… There are quite a lot of things. On the Cing wiki, there is a long list of changes. But I would highly advise you not to read that before you have finished the game, since it contains a lot of spoilers. I can say this without spoiling anything: the list of changes on the game article page has no real spoilers. If you haven’t played the originals, you won’t really notice a lot of the changes, especially because most of the changes were made to improve the flow of the game and the story. Other changes were made because some puzzles used the special features of the Nintendo DS or the Nintendo Wii in unique ways.
Arc System Works worked together with several members of the original development team, and I have to say that it really feels like this is the definitive way to experience these stories. Both stories now flow into each other, and it feels more like one big story. If you didn’t know better, you could think it’s just one huge game with two major chapters. They have done an amazing job of translating the story into a modern era without destroying the original messages and atmosphere of the story.
Fuzzy memories make imperfections
In terms of visuals, this game goes for a cel-shaded look. This makes the remake of the original DS game look more in line with the Wii title. The original DS game was played as a top-down puzzle game, with some moments where you could see a 2D scene that you could explore.
Visually, this game is quite detailed and looks amazing. Yet, I have noticed some rough models here and there. A book here, a window there. Some of them really stick out like a sore thumb. Now, I might be very critical of these things, since I review games as a hobby. But let me tell you this as well: overall, this game looks amazing. Timeless, even. There are only a handful of objects that could use some touching up.
I have the same opinion on the animations. Overall, the animations are fantastic. Seeing the first game in 3D was breathtaking. It brought the game to life in such a different way, and I’m all for it. There were a few stiff animations, but if you aren’t looking for them, I can guarantee you that you won’t notice most of them. I especially love the comic-book-style cutscenes where the speaking characters each go inside their own panel next to each other. The animations in these cutscenes add some charm to this game and make its more relaxing nature shine even brighter.
The controls of this game are excellent. Sometimes the motion control puzzles are a little bit wonky, but overall they work perfectly. The only thing I really don’t like is how, at the press of a button, you can see the orientation of Ashley. Now, what do I dislike about this? Well, it has a sort of built-in walkthrough attached to it. It’s something that’s too easily accessible, and I have pressed the button too many times.
Something I’m mixed about is how the additional lore spots are now somewhat easier to find. In the original DS game, you could find special cartridges with additional story lore on them. In this game, the hiding spots are marked on your map. So, if you have missed one, you can quickly see on your map which room you need to look in. Now, some of them are hidden in very tricky places. During the stream, I saw Klamath walk past two of them several times. If you want all the additional lore, you will have to keep your eyes peeled.
If you have played any point-and-click adventure game, you’ll know what to expect here. Personally, I compare this game quite a lot to Broken Sword 3, but without the platforming. You can explore the environment, and you have to solve various puzzles. Something unique is that you can also take pictures. And let me tell you, keep every mechanic the game teaches you in mind. The fact you can take pictures is something that is going to be quite helpful during the solving of the puzzles.
The only complaint I have is that solving some puzzles involves a bit too much menu work. I especially remember one puzzle in the first part of the game where you have to weigh coins. Instead of all five being on the table, you have to take them from your inventory each and every time. And the annoying part is that the last two you used move to the last spots in your inventory. There are a handful of puzzles where some quality-of-life improvements would be very welcome.
Relaxing with puzzles
There are some amazing new features in this game as well. One of my favorite things is that you can access a big board where all the relationships between the characters are mapped out. Not only that, when you open the profile, you can read a small note about them. If you click on Ashley’s profile, you will read a small hint on what to do next. So, if you put this game down for a while, you can catch yourself up quite quickly.
Also, something I adore is the attention to detail in this game. For example, in one of the puzzles, Ashley digs into a box of building blocks. After she finds what she was looking for, you will notice a small building she built next to the box with the blocks she took out. There are various other moments like this, and it adds to the charm and realism of this game quite a lot.
The more relaxing nature of this game comes through not only in the visuals and gameplay, but also in the music. The music in this game is a rather calming and relaxing soundtrack. The main motif is piano throughout the whole soundtrack; other major instruments are violin and acoustic guitar. The soundtrack fits this game like a glove. Now, it is tense when it needs to be, but it never steps out of its lane. It keeps being that relaxing soundtrack that brings this game more to life, and I have no complaints about it.
The biggest strength of this game is the charm of it all. The writing, the music, the sound effects, the puzzles… It all flows together so well. While the game is only roughly 15 hours long if you know what you are doing, it’s a very enjoyable time to play through. In this remake, the game also auto-saves now, and outside of cutscenes, you can save at any time in 15 different save slots.
Currently, I’m over halfway through the second part of the game, and I have been enjoying it quite a lot. While the game has its minor shortcomings, like some rough object models and some annoying menuing during puzzles… I’m falling in love with these titles all over again. If you were to ask me whether the remakes or the originals are better, I’d have to say both. Both versions still have their charm, but if you want to experience both these titles, I’d really advise going for the Switch version, since it brings both titles together in a much better way.
I mostly have minor complaints about these remakes. Like how silly it is that you can only have ten pictures saved, and deleting them is a bit too finicky. But overall, the issues I have with this game are mostly minor. Maybe a bit more time in the oven or a polishing patch would bring this game to perfection.
A lot of other reviewers are giving this game lower marks since it’s slower paced or a remake of a rather obscure duology. I personally disagree with those lower scores. These two games deserve another chance in the limelight, since they are quite amazing games. I personally don’t mind the slower-paced gameplay, since it’s refreshing to be able to wind down with a slower game. On top of that, the care the developers put into remaking this game and bringing it to modern audiences, while not changing too much and alienating fans of the original, is such a fine line to walk… And they never fell off that line, in my opinion.
I can totally understand that this game isn’t everybody’s cup of tea. But the complaints that this game is linear and doesn’t have a lot of replay value, I find ridiculous. I mean, does every game need to have a lot of replay value and let you explore a wide open world? No, it’s okay to play a game where you need to go from point A to B. It’s okay that the story loses some of its charm because you know how it’s going to end. It’s how that experience impacts you that matters.
The reason why I’m so happy to see remakes of these DS and Wii titles is because we now have remakes of amazing titles like this one and Ghost Trick, for example. Now, because these two games have been remade, I’m holding out hope that Cing’s other titles, like the amazing Hotel Dusk and its sequels, will be remade as well. And if they are, I hope the same team works on them, since the love and care they put into remaking these two titles is amazing.
I remember Klamath’s reaction when I suggested this game for streaming. He was worried that it was going to have low numbers and not a lot of interest. But, after our first stream, he started calling this game a hidden gem. I mean, if this game can have that kind of an impact on somebody who loves point-and-click games, and if we had a very high number of viewers watching our streams, it must mean something.
This game has a lot of impact, and I hope that others who enjoy puzzle, adventure and/or point-and-click games give it a chance. It’s something different, especially since it’s slower paced, but if you let it take you by the hand and walk along with it, you won’t regret the powerful journey you are going on. It’s a journey that will stick with you, and sometimes a memory will pop back into your head and you’ll remember the fun and relaxing times you had with this game. While the game isn’t perfect, the positives far outweigh the negatives, and it’s one of those games where going along with the ride is the most important thing. And that ride is one of the best point-and-click games I have ever played.
And with that said, I have said everything I wanted to say about this game for now. I want to thank you so much for reading this article and I hope you enjoyed reading it as much as I enjoyed writing it. I’m curious to hear what you thought about this game and/or the content of this article. So, feel free to leave a comment in the comment section down below. I also hope to welcome you in another article, but until then have a great rest of your day and take care.
Activision has published a 25-page white paper exploring the impact of skill-based matchmaking (SBMM) on its multiplayer lobbies, determining that SBMM is better for all players.
As spotted by indie game developer and consultant Rami Ismail, the report – which can be read in full on Activision's official website – outlines an "amazing A/B test" where Activision "secretly progressively turned off SBMM and monitored retention… and turns out everyone hated it, with more quitting, less playing, and more negative blowouts".
Activision announced plans to launch the series of white papers back in April, and has already considered the impact connections and Time to Match have on online play.
A galactic worm gobbles stars. A plasma whale slides across the sun‘s surface. And an eerie dragon dances with an aurora. It’s not the plot to a fantasy novel, it’s our incredible universe captured in stunning detail.
The Royal Observatory Greenwich has announced the shortlisted images for the 2024 Astronomy Photographer of the Year. The finalists were selected from more than 3,500 images submitted from professional and amateur photographers from 58 countries. The winner will be announced September 12 and an exhibition of the top images will be on display in London at the National Maritime Museum starting September 14.
When you learn about the moon in school, you’re generally taught that its gravity is insufficient to capture and retain any significant atmosphere. The moon is nonetheless surrounded by a thin, ephemeral halo of gasses—an exosphere.
This surprising fact was first discovered using instruments carried by astronauts who visited the moon with the Apollo program. The moon’s weak gravity means that the exosphere’s constituent atoms are constantly draining away into space—and, as such, its continuous presence means that the supply of these atoms is being constantly replenished.
A new study published in Science Advances on August 2 looks at exactly how this replenishment happens. It examines a group of elements whose presence in the lunar atmosphere might come as a surprise to anyone who’s studied chemistry: alkali metals.
Alkali metals form the first group of the periodic table, and include lithium, sodium, potassium, rubidium, and caesium (along with francium, which is never found in macroscopic quantities because it’s so radioactive). Why is their presence a surprise? On Earth, they’re famous for their reactivity, as evidenced by the classic high school demonstration of what a piece of sodium does when it encounters water. On the moon, however, things are very different.
As Prof. Nicole Nie, lead author of the paper, tells Popular Science, “In lunar soils and rocks, alkali metals are bound in minerals, forming stable chemical bonds with oxygen and other elements. But when they are released from the surface, they usually become neutral atoms. There is no liquid water or substantial atmosphere [on the moon], so these metals can remain in their elemental form—[and] because the number of atoms in the lunar atmosphere is so small, the atoms can travel a long distance freely without colliding with one another.”
This does, however, raise the question of how the atoms are released from the surface in the first place. The paper seeks to answer this question—and, specifically, the relative contributions of three processes known collectively as “space weathering.” The uniting factor in these three processes is that they involve something striking the lunar surface and knocking the alkali metal elements out of the mineral compounds in which they’re bound. (These processes also release other elements, but the volatility of alkali metals makes them particularly easy to liberate.)
The first of these processes is micrometeorite impacts, where tiny pieces of space debris rain down with sufficient force to vaporize a small piece of the lunar surface and launch its component atoms into orbit. The second is ion sputtering, where charged particles driven by the solar wind strike the lunar surface. And finally there’s photon-stimulated desorption, where it’s high-energy photons from the sun that knock the alkali metals loose.
As the paper notes, while each process has been well-characterized, previous research has “not conclusively disentangled their [relative] contributions” to the lunar atmosphere. To go about doing this, Nie and her team went right back to the source of the question: the Apollo program. The various crewed missions to the moon in the late 1960s and early ‘70s brought back a total of 382 kg of lunar soil samples, and decades later, these samples are still revealing their secrets to researchers. Nie’s study involved examining 10 samples from five different Apollo missions, including several from Apollo 11, the first crewed moon landing.
The team used these samples to look at the relative proportions of different isotopes of potassium and rubidium in the soil. (Sodium and cesium only have one stable isotope each, while lithium is less volatile than its heavier cousins.) As Nie explains to Popular Science, “Lighter isotopes of an element are preferentially released during these processes, leaving the lunar soils with relatively heavier isotopic compositions. For elements that are affected by space weathering, we would expect lunar soils to show heavy isotopic compositions, compared to deeper rocks that are not affected by the process.”
The different space weathering processes produce different ratios of isotopes, and the team’s results indicate that micrometeorite impacts make the largest contribution to the lunar atmosphere, “likely contributing more than 65% of atmospheric [potassium] atoms, with ion sputtering accounting for the rest.”
This provides a valuable insight into how the moon’s atmosphere has evolved over billions of years—while its composition may well vary over shorter timescales, these results suggest that in the long run, micrometeorite impacts play the dominant role in the constant replenishment of the atmosphere. The study also points to how similar research might be carried out on other objects similar to the moon, like Phobos, one of Mars’s two satellites.
Say I have three entities: Player, Spikes, and Zombie. All of them are just rectangles and they can collide with each other. All of them have the BoxCollision component.
So, the BoxCollision system would look something like this:
function detectCollisions () {
  // for each entity with box collision
  // check if they collide
  // then do something
}
The issue is, the sole purpose of the BoxCollision component is to detect collision, and that's it. Where should I put the game rules, such as "if the Player collided with Spikes, diminish its health" or "if the Zombie collided with Spikes, instantly kill the Zombie"?
I came up with the idea that each Entity should have its own onCollision function.
Programming languages such as JavaScript and F# have higher-order functions, so I can easily pass functions around. So when assembling my Player entity, I could do something like:
function onPlayerCollision (player) {
  return function (entity) {
    if (entity.tag === 'Zombie') {
      player.getComponent('Health').hp -= 1
    } else if (entity.tag === 'Spikes') {
      player.getComponent('Health').hp -= 5
    }
  }
}
const player = new Entity()
player.addComponent('Health', { hp: 100 })
player.addComponent('BoxCollision', { onCollision: onPlayerCollision(player) })
// notice I store a reference to a function here, so now the BoxCollision component will execute it, passing the entity the player has collided with
function detectCollisions () {
// for each entity with box collision
// check if they collide
// then call the handler stored on the entity's BoxCollision component,
// passing the other entity it collided with
entity.getComponent('BoxCollision').onCollision(other)
}
onPlayerCollision is a curried function (a closure) that receives a player and then returns a new function that takes another Entity.
Are there any flaws with this? Is it okay for components to store references to functions? What are other ways of avoiding game rules in components? Events?
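A common way to avoid this (just a sketch of one design, not the only valid one) is to keep the BoxCollision system purely about detection and have it emit collision events that a separate game-rules system consumes, so no component ever stores game logic. Everything below except getComponent, Health.hp, and the entity tags is hypothetical naming invented for illustration, including the collisionEvents queue, boxesOverlap, resolveCollisionRules, and the assumed x/y/w/h fields on each entity:
// Detection only: records who touched whom, knows nothing about game rules.
const collisionEvents = []
// Minimal AABB overlap test; assumes each entity exposes x, y, w, h fields.
function boxesOverlap (a, b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h
}
function detectCollisions (entities) {
  for (let i = 0; i < entities.length; i++) {
    for (let j = i + 1; j < entities.length; j++) {
      const a = entities[i]
      const b = entities[j]
      if (boxesOverlap(a, b)) {
        // emit both orderings so a rule can match from either side's point of view
        collisionEvents.push({ self: a, other: b })
        collisionEvents.push({ self: b, other: a })
      }
    }
  }
}
// Game rules live in their own system that drains the events each frame.
function resolveCollisionRules () {
  for (const { self, other } of collisionEvents) {
    if (self.tag === 'Player' && other.tag === 'Spikes') {
      self.getComponent('Health').hp -= 5
    } else if (self.tag === 'Zombie' && other.tag === 'Spikes') {
      self.getComponent('Health').hp = 0 // instantly kill the Zombie
    }
  }
  collisionEvents.length = 0 // clear the queue once rules have run
}
With this split the components stay pure data, BoxCollision never references game rules, and the rules system is the single place that knows what a Player-versus-Spikes collision means, so rules can change without touching the collision code.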
Hackers delivered malware to Windows and Mac users by compromising their Internet service provider and then tampering with software updates delivered over unsecure connections, researchers said.
The attack, researchers from security firm Volexity said, worked by hacking routers or similar types of device infrastructure of an unnamed ISP. The attackers then used their control of the devices to poison domain name system responses for legitimate hostnames providing updates for at least six different apps written for Windows or macOS. The apps affected were the 5KPlayer, Quick Heal, Rainmeter, Partition Wizard, and those from Corel and Sogou.
These aren’t the update servers you’re looking for
Because the update mechanisms didn’t use TLS or cryptographic signatures to authenticate the connections or downloaded software, the threat actors were able to use their control of the ISP infrastructure to successfully perform machine-in-the-middle (MitM) attacks that directed targeted users to hostile servers rather than the ones operated by the affected software makers. These redirections worked even when users employed non-encrypted public DNS services such as Google’s 8.8.8.8 or Cloudflare’s 1.1.1.1 rather than the authoritative DNS server provided by the ISP.
Activision has published a 25-page white paper exploring the impact of skill-based matchmaking (SBMM) on its multiplayer lobbies, determining that SBMM is better for all players.
As spotted by indie game developer and consultant Rami Ismail, the report – which can be read in full on Activision's official website – outlines an "amazing A/B test" where Activision "secretly progressively turned off SBMM and monitored retention… and turns out everyone hated it, with more quitting, less playing, and more negative blowouts".
Activision announced plans to launch the series of white papers back in April, and has already considered the impact that connections and Time to Match have on online play.
A galactic worm gobbles stars. A plasma whale slides across the sun‘s surface. And an eerie dragon dances with an aurora. It’s not the plot to a fantasy novel, it’s our incredible universe captured in stunning detail.
The Royal Observatory Greenwich has announced the shortlisted images for the 2024 Astronomy Photographer of the Year. The finalists were selected from more than 3,500 images submitted from professional and amateur photographers from 58 countries. The winner will be announced September 12 and an exhibition of the top images will be on display in London at the National Maritime Museum starting September 14.
When you learn about the moon in school, you’re generally taught that its gravity is insufficient to capture and retain any significant atmosphere. The moon is nonetheless surrounded by a thin, ephemeral halo of gasses—an exosphere.
This surprising fact was first discovered using instruments carried by astronauts who visited the moon with the Apollo program. The moon’s weak gravity means that the exosphere’s constituent atoms are constantly draining away into space—and, as such, its continuous presence means that the supply of these atoms is being constantly replenished.
A new study published in Science Advances on August 2 looks at exactly how this replenishment happens. It examines a group of elements whose presence in the lunar atmosphere might come as a surprise to anyone who’s studied chemistry: alkali metals.
Alkali metals form the first group of the periodic table, and include lithium, sodium, potassium, rubidium, and caesium (along with francium, which is never found in macroscopic quantities because it’s so radioactive). Why is their presence a surprise? On Earth, they’re famous for their reactivity, as evidenced by the classic high school demonstration of what a piece of sodium does when it encounters water. On the moon, however, things are very different.
As Prof. Nicole Nie, lead author of the paper, tells Popular Science, “In lunar soils and rocks, alkali metals are bound in minerals, forming stable chemical bonds with oxygen and other elements. But when they are released from the surface, they usually become neutral atoms. There is no liquid water or substantial atmosphere [on the moon], so these metals can remain in their elemental form—[and] because the number of atoms in the lunar atmosphere is so small, the atoms can travel a long distance freely without colliding with one another.”
This does, however, raise the question of how the atoms are released from the surface in the first place. The paper seeks to answer this question—and, specifically, the relative contributions of three processes known collectively as “space weathering.” The uniting factor in these three processes is that they involve something striking the lunar surface and knocking the alkali metal elements out of the mineral compounds in which they’re bound. (These processes also release other elements, but the volatility of alkali metals makes them particularly easy to liberate.)
The first of these processes is micrometeorite impacts, where tiny pieces of space debris rain down with sufficient force to vaporize a small piece of the lunar surface and launch its component atoms into orbit. The second is ion sputtering, where charged particles driven by the solar wind strike the lunar surface. And finally there’s photon-stimulated desorption, where it’s high-energy photons from the sun that knock the alkali metals loose.
As the paper notes, while each process has been well-characterized, previous research has “not conclusively disentangled their [relative] contributions” to the lunar atmosphere. To go about doing this, Nie and her team went right back to the source of the question: the Apollo program. The various crewed missions to the moon in the late 1960s and early ‘70s brought back a total of 382 kg of lunar soil samples, and decades later, these samples are still revealing their secrets to researchers. Nie’s study involved examining 10 samples from five different Apollo missions, including several from Apollo 11, the first manned moon landing.
The team used these samples to look at the relative proportions of different isotopes of potassium and rubidium in the soil. (Sodium and cesium only have one stable isotope each, while lithium is less volatile than its heavier cousins.) As Nie explains to Popular Science, “Lighter isotopes of an element are preferentially released during these processes, leaving the lunar soils with relatively heavier isotopic compositions. For elements that are affected by space weathering, we would expect lunar soils to show heavy isotopic compositions, compared to deeper rocks that are not affected by the process.”
The different space weathering processes produce different ratios of isotopes, and the team’s results indicate that micrometeorite impacts make the largest contribution to the lunar atmosphere, “likely contributing more than 65% of atmospheric [potassium] atoms, with ion sputtering accounting for the rest.”
This provides a valuable insight into how the moon’s atmosphere has evolved over billions of years—while its composition may well vary over shorter timescales, these results suggest that in the long run, micrometeorite impacts play the dominant role in the constant replenishment of the atmosphere. The study also points to how similar research might be carried out on other objects similar to the moon, like Phobos, one of Mars’s two satellites.
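As a very rough illustration of how a percentage like that can be extracted from isotope measurements, here is a simplified two-endmember mixing relation (assumed symbols, not the paper's actual model): if each process acting alone would leave the soil with a characteristic isotopic enrichment, the measured enrichment constrains the fraction each process supplies.
\[
\delta_{\text{soil}} \approx f\,\delta_{\text{impact}} + (1 - f)\,\delta_{\text{sputter}}
\qquad\Longrightarrow\qquad
f \approx \frac{\delta_{\text{soil}} - \delta_{\text{sputter}}}{\delta_{\text{impact}} - \delta_{\text{sputter}}}
\]
Here f stands for the fraction of atmospheric potassium attributable to micrometeorite impacts, and the delta values are the isotopic signatures of the measured soil and of each endmember process; the actual study fits both potassium and rubidium data with a more detailed model.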
Oftentimes, when I am researching something about computers or coding that has been around a very long while, I will come across a document on a university website that tells me more about that thing than any Wikipedia page or archive ever could.
It's usually a PDF, though sometimes a plaintext file, on a .edu subdirectory that starts with a username preceded by a tilde (~) character. This is typically a document that a professor, faced with the same questions semester after semester, has put together to save the most time possible and get back to their work. I recently found such a document inside Princeton University's astrophysics department: "An Introduction to the X Window System," written by Robert Lupton.
X Window System, which turned 40 years old earlier this week, was something you had to know how to use to work with space-facing instruments back in the early 1980s, when VT100s, VAX-11/750s, and Sun Microsystems boxes would share space at college computer labs. Because he was the member of the Astrophysical Sciences Department at Princeton who knew the most about computers back then, it fell to Lupton to fix things and take questions.
In reference to the snapshot and delta compression approach popularized by Quake 3, but with ECS.
Understand the delta should only contain changes — makes sense.
However, if a snapshot delta no longer contains some component x, how does the client know whether that is because there is no state change or because the server removed the component? The same goes for entities: an entity may simply have no state change over the last few ticks, or it may have been destroyed on the server; perhaps it was killed but we missed that state update.
Now, I could:
Encode this into the actual diff as proposed in the comments, e.g. component x from previous snapshot no longer exists in latest snapshot. This of course would not be compatible with how my deltas are currently being auto generated by XORing two snapshots.
Include ALL in-view entity IDs and component types in a given snapshot delta packet, even if the corresponding components are zeroed out (no change). There are packet-size concerns here, since we’re no longer sending ‘only changes’ — e.g. if an entity ID is 1 byte and a component tag is 1 byte, then 100 entities each with 5 networked components is already 600 bytes uncompressed, even if the component data is unchanged and zeroed out (as with XOR).
Use a separate command system to relay key entity or component events e.g. entity x destroyed/spawned, etc., but that seems like overkill since it could be handled implicitly via state updates.
Example: P2 -- standing in view of P1 -- goes AFK for a few seconds, and so the last x snapshots for P1 didn't contain the P2 entity. P1 doesn't know whether P2 is dead, in which case its entity needs destroying, or whether P2 is actually just doing nothing. The reverse is also true, i.e. P2 could have been killed and the server destroyed that entity, but P1 missed the state update that indicated the death.
A somewhat related note, but please feel free to stop reading: Quake 3, as far as I can see here, will include all entities in the snapshot, including an indicator if the entity is null (destroyed, I assume). In the ECS world, where we don’t have one struct representing an entity but instead have many (components), following the same approach for an entity’s components could lead to a bulky delta — as per my original concern. Whilst not related to my question, the diff is also much easier with the single-struct approach, since the struct has many fields and you can simply XOR it with a previous version and RLE out the zeros — that trick doesn’t carry over to many small components, which defeats the RLE.
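To make option 1 concrete, here is a rough sketch (all field and function names are assumed for illustration, not how any particular engine lays out its packets) in which each entity entry in the delta carries explicit flags and lists, so "unchanged", "component removed", and "entity destroyed" are all distinguishable instead of being inferred from absence:
// Hypothetical per-entity delta: absence of data never has to be interpreted.
const ENTITY_DESTROYED = 1 << 0
// prev/curr: plain objects mapping componentType -> array of bytes for one
// entity in the previous/current snapshot, or null if the entity no longer exists.
function buildEntityDelta (prev, curr) {
  if (curr === null) {
    return { flags: ENTITY_DESTROYED, changed: [], removed: [] }
  }
  prev = prev || {} // brand-new entity on the server: everything below counts as changed
  const changed = []
  const removed = []
  for (const [type, prevBytes] of Object.entries(prev)) {
    if (!(type in curr)) {
      removed.push(type) // component explicitly removed on the server
    } else if (!bytesEqual(prevBytes, curr[type])) {
      changed.push({ type, data: xorBytes(prevBytes, curr[type]) }) // XOR diff, as before
    } // equal bytes: omitted, which now unambiguously means "no change"
  }
  for (const type of Object.keys(curr)) {
    if (!(type in prev)) {
      changed.push({ type, data: curr[type] }) // newly added component, sent whole
    }
  }
  return { flags: 0, changed, removed }
}
function bytesEqual (a, b) {
  return a.length === b.length && a.every((v, i) => v === b[i])
}
function xorBytes (a, b) {
  return b.map((v, i) => v ^ (a[i] ?? 0)) // pad with zeros if sizes ever differ
}
An idle entity can then be omitted from the packet entirely, because absence now means "no change" by construction, while destruction and component removal always arrive as an explicit flag or list; the cost is a few bytes of framing per changed entity rather than sending every in-view entity each tick.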
Today, Airship Syndicate announced the return of its online action-RPG Wayfinder, albeit with a few key differences. Though the game was pulled from Steam while its previous publisher, Digital Extremes (Warframe), was transferring the rights to developer Airship, it is set to land on the storefront again on June 11…
Experts at the Table: The automotive ecosystem is undergoing a transformation toward software-defined vehicles, spurring new architectures with more software. Semiconductor Engineering sat down to discuss the impact of these changes with Suraj Gajendra, vice president of products and solutions in Arm‘s automotive line of business; Chuck Alpert, R&D automotive fellow at Cadence; Steve Spadoni, zone controller and power distribution application manager at Infineon; Rebeca Delgado, chief technology officer and principal AI engineer at Intel Automotive; Cyril Clocher, senior director in the automotive product line for high-performance computing at Renesas; David Fritz, vice president, hybrid and virtual systems at Siemens EDA; and Marc Serughetti, senior director, systems design group at Synopsys. What follows are excerpts of that discussion.
SE: The automotive ecosystem is undergoing a technology evolution the likes of which has not been seen, including the move to software-defined vehicles. To set a baseline for this discussion, what is your definition of an SDV?
Gajendra: A software-defined vehicle is a concept, a trend, an idea, where the whole ecosystem can drive new capabilities and new user experiences into the car, even after it rolls out of the showroom or dealership. It’s a pretty loaded concept. There’s a lot of infrastructure that needs to come together, such as software development in the cloud, seamless deployment of that software development onto the car, the whole deployment of over-the-air updates, and the connectivity. In short, the concept of a software-defined vehicle is expecting a world where we can drive new experiences, new capabilities, and new features into the car throughout its lifetime.
Alpert: In thinking about what SDV means, one example is the battery — especially in an EV. I’m not talking about the technology of the battery that’s evolved, but rather the idea that in the past when you wanted to charge your car in your garage and you were worried about starting a fire, you’d think, ‘No, don’t do that because your whole house could burn down.’ The idea is that in the past, maybe we might put a temperature sensor on the battery, but now we actually have software that can monitor it. It might even have AI to predict if the battery is reaching some state that might cause a fire in the future. You also might have something that connects to the power grid and learns when is a good time to charge, because it’s a low-usage period so it’s cheaper. This is just one part of the car, but you can imagine a whole bunch of software that you want to put on top of it in order to connect to the universe. You need a software-defined vehicle platform in order for this, or in all the other parts of your car, to communicate with the world and provide the best user experience.
Spadoni: Infineon’s definition of a software-defined vehicle is a redefining of architecture — specifically, electrical and electronic architecture, feature allocation, and the entire topology of the vehicle, from power generation and storage to power distribution and high compute. It really means new electrical architectures, and it has consequences for the business model of every OEM and Tier 1 involved. It’s a major change to previous methodologies in the last 30 years.
Delgado: Software-defined vehicle is not just over-the-air updates. It’s truly a new methodology and a new philosophy for how to architect every ingredient of the vehicle to continue to deliver value over time, in which the value is very tightly attached to the software that delivers the user experience. Ultimately, this architecture must enable the different practices on how to deliver this new value over time. What’s very interesting is that these practices of moving to a software-defined architecture have been adopted by many other industries already. Intel has a ton of heritage, and actually helped those industries transform. That transformation is truly what we’re observing here. It’s an incredible opportunity, and possibly a crisis if not done right.
Clocher: To apply an analogy here, the car is the new smartphone. But for us, it’s more than that. I’ve heard about the platform, yes, and it’s the major architecture evolution that we’ll see in the next decade. For us at Renesas, it will be a journey that will take time to enhance the user experience, to generate new revenue streams for the industry as it moves from decentralized to centralized classic compute with zonal architecture. We can apply all those buzzwords to a software-defined vehicle. Those platforms will need big computers and heavy, complex hardware solutions, and this will generate evolutions and upgrades to the car during its entire lifetime, but underneath we know — at least at Renesas, and certainly at some other players and silicon vendors — that this will need a huge amount of hardware resources to manage what we have in mind to deploy this platform.
Fritz: I see software-defined vehicles a bit differently than what’s been mentioned so far. For many years, you’d have the hardware team doing their design, and the software team doing their design, and it all needs to come together. There’s an English natural language discussion about what needs to happen, and as we all know, that never really goes terribly well. In automotive that becomes an integration storm, and it is a nightmare. With the new compute requirements that have been mentioned already, that just compounds the issue. So the way I see this is that we tend, as people who have an engineering background, to dive into how we’re going to do things. We hear ‘software-defined vehicle,’ we immediately think about how to do that. There’s not a lot of thought about why it needs to be done, and what needs to happen. We jump into the ‘how’ too early, and a lot of the discussion here is exemplary of that kind of approach. When I’m looking at software-defined vehicles, I’m looking at why it’s important that the software needs to run effectively on a piece of hardware. And for that hardware, why is it important for it to actually operate properly on the software? Then you can decide how to put together a new methodology that’s going to bring those things together. In the past, it’s been called hardware/software co-design. There have been attempts many times, and as has been mentioned, other industries have made this transition. What’s unique about automotive is that it’s not just one transition that needs to happen. It’s hundreds or thousands of transitions. The ecosystem needs to be turned upside down, which we’re seeing happen right now, and you need to bring all that together. It really is a methodology where you need the tooling, you need the processes, you need the thinking, you need the organizations to change so that they can make this transition in a realistic way. SDV is a huge transition. It is a way for the automotive industry to morph into something that has longevity and can meet customer expectations, which it really hasn’t met for some time now.
Serughetti: At the end of the day, if we look starting at the top from our perspective, SDV is a means to bring and enhance the car experience for the customer. That’s the end result that the OEMs look at, but they look at it from the perspective of how that improves the OEM efficiencies, and how that creates new business opportunities. The way we look at it, and what’s important, is the impact it has on the industry, the impact on the processes, on the methodologies, on the people, on the ecosystem, on the technology. It’s really a transformation of the automotive market that is going to fundamentally change how the industry moves forward and bring the OEM into a world in which they are really looking at how they become efficient in delivering cars, how they bring new features, but at the same time, how they evolve their business as well.
SE: As you’ve all described, SDV requires many inter-dependencies, and the entire ecosystem has to have an understanding of the ‘why,’ which should then lead back to laying out the plan for how to get there. Where does the ecosystem stand today in terms of realizing SDV?
Fritz: OEMs have decided in the last few years that they’ve got to take control of their own destiny. They cannot simply take what the suppliers provide. They need a methodology — like this whole SDV concept, and any tooling necessary to provide that — to push down into their suppliers, such that, ‘Here’s what I need. If you can’t do this for me, I will go find someone that will.’ This is not the old ecosystem that bubbled up from the IP to the Tier 2s, to the Tier 1s, and then to the OEMs, which gave them limited choices to go from. So when I say, “Turn the ecosystem upside down,” that’s what is happening. But every OEM has their own ecosystem, and they’re not all in the same place. Even region-to-region, they can be very different.
Delgado: This is a critical discussion, and effectively where the industry has to eventually settle. The magnitude of the transformation of the ecosystem includes roles in the technology evolution. The silicon content is expected to quadruple over the next few years in the vehicle for defining the in-cabin experience of the end user. At the end of the day, the complexity of the transition of roles is of such magnitude that the proprietary, fragmented, and broken approaches that David articulated are really not going to enable the industry to transform at the speed it requires to deliver and meet the experiences. But more than anything, they are not going to address the actual technology changes necessary to implement and allow for this value delivery mechanism. At the end of the day, this is where Intel really believes collaboration is key, and anybody who wants to participate in this ecosystem must provide scalability — also known as top-to-bottom support of the different product lines that our OEMs and Tier 1s are having to support, versus a broken-up approach on these ever-evolving, higher-performance compute needs. It has to be future-proof, because you’re going to launch the vehicle eventually. So certain hardware has to be future-proofed to a certain affordability envelope, and there has to be a strategy around that. And then the ecosystem and that collaboration must be able to deliver that aggregation. It has to be done with certain anchoring technology that will allow us to deliver that performance. Collaboration is key in the sense that these technologies cannot be single-handedly owned, let alone defined, developed, and integrated, by OEMs in silos with a proprietary end-to-end architecture definition. There obviously will be differentiations on the actual implementation, but the technologies at large have to have a sense of reuse, particularly from other verticals that have already done software-defined transformations and then tuned in the right ways toward the automotive requirements.
Spadoni: There are probably a wide variety of implementations. At Infineon, we partner with OEMs and Tier 1s and we see different approaches. For example, General Motors has more of a modular approach that emulates what happened in the mobile phone space. It seems that Ford has a more pragmatic approach, along with Stellantis, but all of them are facing very similar challenges in that affordability has become a big problem. There are multiple generations of implementations that are going to occur, and you’ll see a striving toward how to pay for this extra hardware. It leads to tradeoffs in implementations of other systems that have to have savings in order for them to afford these vehicles. No one ever goes into a dealership and says, ‘Give me a software-defined vehicle.’ Everyone’s looking for value, and you can see it now with volumes going down. There’s a saturation of people buying at the high level. The OEMs want to get more sales, which means they’ll have to go to the lower-cost-value vehicles, and that’s going to affect the electrical and electronic architectures and the software-defined vehicle.
Clocher: What we’re seeing I would summarize as the impact on the ecosystem. We’re moving to an OEM-centric ecosystem. One size does not fit all, meaning OEMs will have their different tastes, their different definitions of levels of integration they want to have in their software-defined vehicle — especially given more complex tasks that we all have to do, rather than the challenge we have to solve, because we’re not talking about a common umbrella of software-defined vehicle. But it really does mean different implementations and different meanings for OEM A from OEM B. I would fully agree with David and Steve that we are far from having a common understanding of, at least, the market itself. And that’s fine, because this will bring differentiation, and ultimately that’s why a customer will go to Dealership A versus Dealership B. This is what the industry wants to see — continue to differentiate, continue to add value to the ultimate product, which is the car.
Serughetti: The important point in all this is, of course, you’re breaking the model that exists today. That’s one of the big challenges. We used to have Tier 1s that were building boxes, and delivering software. This was a complete black box. When it would go to integration, there were all sorts of problems. And now you’re going to break this? The challenge for the OEM is how they do this. They want to control software, but are they equipped to do this today? We see the problems today that some of the legacy OEMs have in setting up their software organizations, the challenges of CARIAD and all such organizations that are trying to do this. It’s not easy to change those companies. Of course, the new entrants don’t have this problem because they are coming from a brand new design versus the ones that deal with legacy. So for the OEM, it’s about how to take control of the software. What does that mean in terms of the processes, in terms of agile development, digital twins, and all of these technologies everybody’s talking about? The other side is, ‘It’s all nice, this software,’ but this software runs on all the companies that are delivering hardware, and that becomes essential to it. You can have the best software, but if your hardware is not there to support performance, power, and all of those aspects, you’re not going to be successful. So the ecosystem is evolving how hardware, software, and all of this comes together. The OEM wants to be the central point. That’s what we’re talking about in terms of the process methodology aspects that are making this transition evolve.
Gajendra: Where are we in this journey? How far have we come? And where are we going? Going back to the point that David mentioned earlier about supply chain evolving and the supply chain turned upside down, five years ago, if we sat here in this sort of a panel and discussed software-defined vehicles, the conversation would have been entirely different. It would have been stuck with the traditional supply chain that we’ve seen for the last 35 or 40 years in the automotive industry. There are fundamentally two aspects here. The supply chain is evolving, and the infrastructure that we, as a community — this team, for example, and many others in the community — are trying to enable is going to be key to making our EDA partners happy. The use of virtual platforms today in the cloud to try and shift left and develop and validate some of these technologies and software wasn’t even there five years ago, so we’ve come a long way. We’ve made a lot of progress together as an industry. Yes, we have a long way to go until we actually have a truly software-defined vehicle that we can go and ask for in the dealership. But we are seeing changes in terms of all sorts of technology providers trying to make sure that the technology that we eventually will have in the hardware is provided in some sort of virtual form, be it fast models or whatever it is in the cloud, and for the vast majority of the software ecosystem in automotive this is a big change. I was at Embedded World, and the amount of virtual platforms and the demos that people were actually showing — silicon partners like we have here, Intel, Renesas, Infineon, EDA companies — pointed to a strong movement of, ‘Let’s build the infrastructure that we can build, and then provide that infrastructure to the OEMs to take it from there.’ There is a lot of work going on. Together we will make the infrastructure across the board, be it virtual platform or others, richer and more capable.
Alpert: For sure, OEMs have to control their own destiny. In the past, they would do it by differentiating maybe because they had better engine performance, or some other feature. But going forward, the differentiation is going to be their software. Whoever can make software that will provide additional value, and brand it, that’s going to be the differentiator and that’s the trend. In terms of how you get there, a shared ecosystem is important. SOAFEE is a potential way that, together with virtual platforms, you can provide a shared ecosystem for development, but still allow everyone to differentiate and plug-and-play. That’s one reason we’re working closely with Arm on trying to have a reference design specifically for this purpose. But again, we’re not saying, ‘This is the design you use. This is how you do it.’ That’s not it. The point is, let’s start somewhere, and then people can start swapping out pieces and doing different things. As long as OEMs can plug-and-play, then they can still differentiate. But they don’t have to invent everything themselves, which would be too costly.
An original prototype of Nintendo’s Super Famicom, Japan’s (superior) version of the SNES, was up for auction with recent bids pushing the grey piece of gaming history north of $3.2 million. Then, earlier today, the auction was abruptly pulled offline, potentially over fraudulent bids.
The AYANEO Pocket S is a handheld game console with a 6 inch display, support for up to 16GB of RAM and 1TB of storage, and Android 13 software featuring AYA’s custom launcher app and performance tuning utility. It’s also one of the first devices to feature Qualcomm’s Snapdragon G3x Gen 2 processor, a chip […]
The post AYANEO Pocket S handheld game console launches via Indiegogo for $399 and up (Snapdragon G3x Gen 2, Android 13, and up to 16GB RAM) appeared first on Liliputing.
Automotive processors are rapidly adopting advanced process nodes. NXP announced the development of 5 nm automotive processors in 2020 [1], Mobileye announced EyeQ Ultra using 5 nm technology during CES 2022 [2], and TSMC announced its “Auto Early” 3 nm processes in 2023 [3]. In the past, the automotive industry was slow to adopt the latest semiconductor technologies due to reliability concerns and lack of a compelling need. Not anymore.
The use of advanced processes necessitates the use of advanced packaging as seen in high performance computing (HPC) and mobile applications because [4][5]:
While transistor density has skyrocketed, I/O density has not increased proportionally and is holding back chip size reductions.
Processors have heterogeneous, specialized blocks to support today’s workloads.
Maximum chip sizes are limited by the slowdown of transistor scaling, photo reticle limits and lower yields.
Cost per transistor improvements have slowed down with advanced nodes.
These have been drivers for the use of advanced packages like fan-out in mobile and 2.5D/3D in HPC. In addition, these drivers are slowly but surely showing up in automotive compute units in a variety of automotive architectures as well (see figure 1).
Fig. 1: Vehicle E/E architectures. (Image courtesy of Amkor Technology)
Vehicle electrical/electronic (E/E) architectures have evolved from 100+ distributed electronic control units (ECUs) to 10+ domain control units (DCUs) [6]. The most recent architecture introduces zonal or zone ECUs that are clustered in physical locations in cars and connect to powerful central computing units for processing. These newer architectures improve scalability, cost, and reliability of software-defined vehicles (SDVs) [7]. The processors in each of these architectures are more complex than those in the previous generation.
Multiple cameras, radar, lidar and ultrasonic sensors and more feed data into the compute units. Processing and inferencing this data require specialized functional blocks on the processor. For example, the Tesla Full Self-Driving (FSD) HW 3.0 system on chip (SoC) has central processing units (CPUs), graphic processing units (GPUs), neural network processing units, Low-Power Double Data Rate 4 (LPDDR4) controllers and other functional blocks – all integrated on a single piece of silicon [8]. Similarly, Mobileye EyeQ6 has functional blocks of CPU clusters, accelerator clusters, GPUs and an LPDDR5 interface [9]. As more functional blocks are introduced, the chip size and complexity will continue to increase. Instead of a single, monolithic silicon chip, a chiplet approach with separate functional blocks allows intellectual property (IP) reuse along with optimal process nodes for each functional block [10]. Additionally, large, monolithic pieces of silicon built on advanced processes tend to have yield challenges, which can also be overcome using chiplets.
Current advanced driver-assistance systems (ADAS) applications require a DRAM bandwidth of less than 60 GB/s, which can be supported with standard double data rate (DDR) and LPDDR solutions. However, ADAS Level 4 and Level 5 will need up to 1024 GB/s of memory bandwidth, which will require the use of solutions such as Graphics DDR (GDDR) or High Bandwidth Memory (HBM) [11][12].
Automotive processors have been using Flip Chip BGA (FCBGA) packages since 2010. FCBGA has become the mainstay of several automotive SoCs, such as EyeQ from Mobileye, Tesla FSD and NVIDIA Drive. Consumer applications of FCBGA packaging started around 1995 [13], so it took more than 15 years for this package to be adopted by the automotive industry. Computing units in the form of multichip modules (MCMs) or System-in-Package (SiP) have also been in automotive use since the early 2010s for infotainment processors. The use of MCMs is likely to increase in automotive compute to enable components like the SoC, DRAM and power management integrated circuit (PMIC) to communicate with each other without sending signals off-package.
As cars move to a central computing architecture, the SoCs will become more complex and run into size and cost challenges. Splitting these SoCs into chiplets becomes a logical solution and packaging these chiplets using fan-out or 2.5D packages becomes necessary. Just as FCBGA and MCMs transitioned into automotive from non-automotive applications, so will fan-out and 2.5D packaging for automotive compute processors (see figure 2). The automotive industry is cautious but the abovementioned architecture changes are pushing faster adoption of advanced packages. Materials, processes, and factory controls are key considerations for successful qualification of these packages in automotive compute applications.
In summary, the automotive industry is adopting advanced semiconductor technologies, such as 5 nm and 3 nm processes, which require the use of advanced packaging due to limitations in I/O density, chip size reductions, and memory bandwidth. Processors in the latest vehicle E/E architectures are more complex and require specialized functional blocks to process data from multiple sensors. As cars move to the central computing architecture, the SoCs will become more complex and run into size and cost challenges. Splitting these SoCs into chiplets becomes a logical solution and packaging these chiplets using fan-out or 2.5D technology becomes necessary.
Ziadeh, Bassam. “Driving Adoption of Advanced IC Packaging in Automotive Applications.” Presentation at IMAPS DPC, General Motors, Fountain Hills, AZ, March 16, 2023.
Jung, Matthias, and Norbert Wehn. “Driving Against the Memory Wall: The Role of Memory for Autonomous Driving.” Fraunhofer IESE and Microelectronic Systems Design Research Group, University of Kaiserslautern, Kaiserslautern, Germany. https://kluedo.ub.rptu.de/frontdoor/deliver/index/docId/5286/file/_memory.pdf.
Last night, Stephen Colbert criticized the Supreme Court's decision to delay Donald Trump's January 6 insurrection trial by agreeing to hear his immunity claim. Colbert declared February 29 to be Trump Day, "that one magical day you can do anything you want because no laws apply, evidently, according to the Supreme Court" due to the court's decision to hear Trump's immunity defense, which will cause a significant delay in the trial.
Experts at the Table: Semiconductor Engineering sat down to talk about the challenges of establishing a commercial chiplet ecosystem with Frank Schirrmeister, vice president solutions and business development at Arteris; Mayank Bhatnagar, product marketing director in the Silicon Solutions Group at Cadence; Paul Karazuba, vice president of marketing at Expedera; Stephen Slater, EDA product management/integrating manager at Keysight; Kevin Rinebold, account technology manager for advanced packaging solutions at Siemens EDA; and Mick Posner, vice president of product management for high-performance computing IP solutions at Synopsys. What follows are excerpts of that discussion.
SE: There’s a lot of buzz and activity around every aspect of chiplets today. What is your impression of where the commercial chiplet ecosystem stands today?
Schirrmeister: There’s a lot of interest today in an open chiplet ecosystem, but we are probably still quite a bit away from true openness. The proprietary versions of chiplets are alive and kicking out there. We see them in designs. We vendors are all supporting those to make it a reality, like the UCIe proponents, but it will take some time to get to a fully open ecosystem. It’s probably at least three to five years before we get to a PCI Express type exchange environment.
Bhatnagar: The commercial chiplet ecosystem is at a very early stage. Many companies are providing chiplets, are designing them, and they’re shipping products — but they’re still single-vendor products, where the same company is designing all the pieces. I hope that with the advancements the UCIe standard is making, and with more standardization, we eventually can get to a marketplace-like environment for chiplets. We are not there.
Karazuba: The commercialization of homogeneous chiplets is pretty well understood by groups like AMD. But for the commercialization of heterogeneous chiplets, that is, chiplets from multiple suppliers, there are still a lot of open questions.
Slater: We participate in a lot of the board discussions, and attend industry events like TSMC’s OIP, and there’s a lot of excitement out there at the moment. I see even midsize and small customers starting to think about their development plans for what their chiplets should be. I do think those that are going to be successful first will be those that are within a singular foundry ecosystem like TSMC’s. Today if you’re selecting your IP, you’ve got a variety of ways to pick and choose which IP, see what’s been taped out before, and how successful it’s been, so you have a way to manage your risk and your costs as you’re putting things together. What we’ll see in the future is that now you have a choice. Are you purchasing IP, or are you purchasing chiplets? Crucially, it’s all coming from the same foundry and put together in the same manner. The technical considerations of things like UCIe standard packaging versus advanced packaging, and the analysis tool sets for high-speed simulation, as well as for things like thermal, are going to become that much more important.
Rinebold: I’ve been doing this about 30 years, so I can date back to some of the very earliest days of multi-chip modules and such. When we talk about the ecosystem, there are plenty of examples out there today where we see HBM and logic getting combined at the interposer level. This works if you believe HBM is a chiplet, and that’s a whole other argument. Some would argue that HBM falls into that category. The idea of a true LEGO, snap-together mix and match of chiplets continues to be aspirational for the mainstream market, but there are some business impediments that need to get addressed. Again, there are exceptions in some of the single-vendor solutions, where it’s more or less homogeneous integration, or an entirely vertically integrated type of environment where single vendors are integrating their own chiplets into some pretty impressive packages.
Posner: Aspirational is the word we use for an open ecosystem. I’m going to be a little bit more of a downer by saying I believe it’s 5 to 10 years out. Is it possible? Absolutely. But the biggest issue we see at the moment is a huge knowledge gap in what that really means. And as technology companies become more educated on really what that means, we’ll find that there will be some acceleration in adoption. But within what we call ‘captive’ — within a single company or a micro-ecosystem — we’re seeing multi-die systems pick up.
SE: Is it possible to define the pieces we have today from a technology point of view, to make a commercial chiplet ecosystem a reality?
Rinebold: What’s encouraging is the development of standards. There’s some adoption. We’ve already mentioned UCIe for some of the die-to-die protocols. Organizations like JEDEC announced the extension of their JEP30 PartModel format into the chiplet ecosystem to incorporate chiplet-style data. Think about this as an electronic data sheet. A lot of this work has been incorporated into the CDX working group under Open Compute. That’s encouraging. There were some comments a little bit earlier about having an open marketplace. I would agree we’re probably 3 to 10 years away from that coming to fruition. The underlying framework and infrastructure is there, but a lot of the licensing and distribution issues have to get resolved before you see any type of broad adoption.
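To make the "electronic data sheet" idea concrete, the sketch below shows the kind of information a minimal, machine-readable chiplet descriptor might carry. The field names are hypothetical and chosen only for illustration; they are not taken from the actual JEP30 PartModel or CDX working group schemas.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal chiplet descriptor -- an illustration of the
# "electronic data sheet" concept, not the actual JEP30/CDX schema.
@dataclass
class ChipletDescriptor:
    part_number: str
    vendor: str
    process_node: str                    # e.g. "N5" -- provenance/traceability
    die_size_mm: tuple[float, float]
    d2d_interface: str                   # e.g. "UCIe 1.1, standard package"
    d2d_lanes: int
    max_power_w: float
    temp_grade: str                      # e.g. "AEC-Q100 Grade 2" for automotive
    known_good_die_tested: bool
    notes: list[str] = field(default_factory=list)

# Hypothetical example entry a marketplace or internal registry might hold.
example = ChipletDescriptor(
    part_number="XYZ-123",
    vendor="ExampleSilicon",
    process_node="N5",
    die_size_mm=(6.0, 8.0),
    d2d_interface="UCIe 1.1, standard package",
    d2d_lanes=64,
    max_power_w=12.5,
    temp_grade="AEC-Q100 Grade 2",
    known_good_die_tested=True,
)
print(example)
```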
Posner: The infrastructure is available. The EDA tools to create, to package, to analyze, to simulate, to manufacture — those tools are all there. The intellectual property that sits around it, either UCIe or some of the more traditional die-to-die interfaces, all of that’s there. What’s not established is a full methodology and the flows that lead to interoperability. Everything within captive is possible, but a broader ecosystem, a marketplace, is going to require silicon interoperability, simulation, packaging, all of that. That’s the area we believe is missing — and still being built.
Schirrmeister: Do we know what’s required? We probably can define that reasonably well. If the vision is an open ecosystem with IP on chiplets that you can just plug together like LEGO blocks, then the IP industry informs us of what’s required, and then there are some gaps on top of them. I hear people from the hard-coded IP world talking about the equivalent of PDKs for chiplets, but today’s IP ecosystem and the IP deliverables are informing us it doesn’t work like LEGO blocks yet. We are improving every year. But this whole, ‘I take my whiteboard and then everything just magically functions together’ is not what we have today. We need to think really hard about what the additional challenges are when you disaggregate that into chiplets and protocols. Then you get big systemic issues to deal with, like how do you deal with coherency across chiplets? It was challenging enough to get it done on a chip. Now you potentially have to deal with other partnerships you don’t even own. This is not a captive environment in an open ecosystem. That makes it very challenging, and it creates job security for at least 5 to 10 years.
Bhatnagar: On the technical side, what’s going well is adoption. We can see big companies like Intel, and then of course, IP providers like us and Synopsys. Everybody’s working toward standardizing chiplet integration, and that is working very well. EDA tools are also coming up to support that. But we are still very far from a marketplace because there are many issues that are not sorted out, like licensing and a few other things that need a bit more time.
Slater: The standards bodies and networking groups have excited a lot of people, and we’re getting a broad set of customers that are coming along. And one point I was thinking, is this only for very high-end compute? From the companies that I see presenting in those types of forums, it’s even companies working in automotive or aerospace/defense, planning out their future for the next 10 years or more. In the automotive case, it was a company that was thinking about creating chiplets for internal consumption — so maybe reorganizing how they look at creating many different variations or evolutions of their products, trying to do it as more modular chiplet types of blocks. ‘If we take the microprocessor part of it, would we sell that as a chiplet externally for other customers to integrate together into a bigger design?’ For me, the aha moment was seeing how broad the application would be. I do think that the standards work has been moving very fast, and that’s worked really well. For instance, at Keysight EDA, we just released a chiplet PHY designer. It’s a simulation for the high-speed digital link for UCIe, and that only comes about by having a standard that’s published, so an EDA company can take a look at it and realize what they need to do with it. The EDA tools are ready to handle these kinds of things. And maybe then, on to the last point is, in order to share the IP, in order to ensure that it’s available, database and process management is going to become all the more important. You need to keep track of which chip is made on which process, and be able to make it available inside the company to other potential users of that.
SE: What’s in place today from a business perspective, and what still needs to be worked out?
Karazuba: From a business perspective, speaking strictly of heterogeneous chiplets, I don’t think there’s anything really in place. Let me qualify that by asking, ‘Who is responsible for warranty? Who is responsible for testing? Who is responsible for faults? Who is responsible for supply chain?’ With homogeneous chiplets or monolithic silicon, that’s understood because that’s the way that this industry has been doing business since its inception. But when you talk about chiplets that are coming from multiple suppliers, with multiple IPs — and perhaps different interfaces, made in multiple fabs, then constructed by a third party, put together by a third party, tested by a fourth party, and then shipped — what happens when something goes wrong? Who do you point the finger at? Who do you go to and talk to? If a particular chiplet isn’t functioning as intended, it’s not necessarily that chiplet that’s bad. It may be the interface on another chiplet, or on a hub, whatever it might be. We’re going to get there, but right now that’s not understood. It’s not understood who is going to be responsible for things such as that. Is it the multi-chip module manufacturer, or is it the person buying it? I fear a return to the Wintel issue, where the chipmaker points to the OS maker, which points at the hardware maker, which points at the chipmaker. Understanding of the commercial side is a real barrier to chiplets being adopted. Granted, the technical is much more difficult than the commercial, but I have no doubt the engineers will get there quicker than the business people.
Rinebold: I completely agree. What are the repercussions, warranty-related issues, things like that? I’d also go one step further. If you look at some of the larger silicon foundries right now, there is some concern about taking third-party wafers into their facilities to integrate in some type of heterogeneous, chiplet-type package. There are a lot of business and logistical issues that have to get addressed first. The technical stuff will happen quickly. It’s just a lot of these licensing- and distribution-type issues that need to get resolved. The other thing I want to back up to involves customers in the defense/industrial space. The trust, traceability, and provenance tracking of IP is going to be key for them, because they have so much expectation of multi-die or chiplet-type packaging as an alternative to monolithic scaling. Just look at all the government programs out there right now, with RESHAPE [Reshore Ecosystem for Secure Heterogeneous Advanced Packaging Electronics] and NGMM [Next-Generation Microelectronics Manufacturing] and such. They’re all in on this chiplet perspective, but they’re going to require a lot of security measures to understand who has touched the IP, where it comes from, and how you verify that.
Posner: Micro-ecosystems are forming because of all these challenges. If you naively think you can just go pick a die off the shelf and put it into your device, how do you warranty that? Who owns it? These micro-ecosystems are building up to fundamentally sort that out. So within a couple of different companies, be it automotive or high-performance compute, they’ll come to terms that are acceptable across all of them. And it’s these micro-ecosystems that are really going to end up driving open chiplets, and I think it’s going to be an organic type of growth. Chiplets are available for a specific application today, but we have this vision that someone else could use it, and we see that with the multiple modes being built into the dies. One mode is, ‘I’m connecting to myself. It’s a very tight, low-latency link.’ But I have this vision in the future that I’m going to need to have an interface or protocol that is more open and uses standard available stacks, and which can be bought off the shelf and integrated. That’s one side of the logistics. I want to mention two more things. It is possible to do interoperability across nodes. We demonstrated our TSMC N3 UCIe with Intel’s in-house UCIe, all put together on an Intel process. This was two separate companies working together, showing the first physical interoperability, so it’s possible. But going forward that’s still just a small part of the overall effort. In the IP space we’ve lived with an IP model of, ‘Build once, sell many.’ With the chiplet marketplace, unless there is a revenue stream from that chiplet, it will break that model. Companies think, ‘I only have to buy the IP once, and then I’m selling my silicon.’ But the infrastructure, the resources that are required to build all of this does not go away. There has to be money at the end of that tunnel for all of these different companies to be investing.
Schirrmeister: Mick is 100% right, but we may have a definition issue here with what we really mean by an ‘open’ chiplet ecosystem. I have two distinct conversations when I talk to partners and customers. On the one hand, you have board designers who are doing more and more integration, and they look at you with a wrinkled forehead and say, ‘We’ve been doing this for years. What are you talking about?’ It may not have been 3D-IC in the classic sense of all items, but they say, ‘Yeah, there are issues with warranties, and the user figures it out.’ The board people arrive from one side of the equation at chiplets because that’s the next evolution of integration. You need to be very efficient. That’s not what we call an open ecosystem of chiplets here. The idea is that you have this marketplace to mix things up, and you have the economies of scale by selling the same chiplet to multiple people. That’s really what the chip designers are thinking about, and some of them think even further because if you do it all in true 3D-IC fashion, then you actually have to co-design those chiplets in a way, and that’s a whole other dimension that needs to be sorted out. To pick a little bit on the big companies that have board and chip design groups in house, you see this even within the messaging of these companies. You have people who come from the board side, and for them it’s not a solved problem. It always has been challenging, but they’re going to take it to the next level. The chip guys are looking at this from a perspective of one interface, like PCI Express, now being UCIe. And then I think about this because the networks on chip need to become super NoCs across chiplets, which poses its own challenges. And that all needs to work together. But those are really chiplets designed for the purpose of being in a chiplet ecosystem. And to that end, Mick’s estimation of longer than five years is probably correct because those purpose-built chiplets, for the purpose of being in an open ecosystem, have all these challenges the board guys have already been dealing with for quite some time. They’re now ‘just getting smaller’ in the amount of integration they do.
Slater: When you put all these chiplets together and start to do that integration, in what order do you start placing the components down? You don’t want to throw away one very expensive chiplet because there was an issue with one of the smaller, cheaper ones. So there are now a lot of thoughts about how to do something almost like unit tests on individual chiplets first, but then you want to do some form of system test as you go along. That’s definitely something we need to think about. On the business front, who is going to be most interested in purchasing a chiplet-style solution? It comes down to whether you have a yield problem. If your chips are getting to the size where you have yield concerns, then definitely it makes sense to think about using chiplets and breaking it up into smaller pieces. Not everything scales, so why move to the lowest process node when you could purchase something at a different process node that has less risk and costs less to manufacture, and then put it all together? The ones that have been successful so far — the big companies like Intel and AMD — were already driven to that edge. The chips got to a size that couldn’t fit on the reticle. We think about how many companies fit into that category, and that will factor into whether or not the cost and risk is worth it for them.
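Slater’s point about test order and not scrapping an expensive chiplet over a cheap one can be made concrete with a simple known-good-die calculation. The die costs and yields in the sketch below are assumed, round numbers chosen only to show how per-chiplet testing changes the economics; they are not drawn from any vendor’s data.

```python
# Illustrative known-good-die (KGD) economics for a multi-chiplet package.
# All prices and yields below are assumed, round numbers for illustration.

def cost_per_good_package(die_costs, die_yields, assembly_yield=0.99):
    """Expected BOM cost per shippable package when every die is committed
    at assembly and the whole package is scrapped if any die is bad."""
    bom = sum(die_costs)
    pkg_yield = assembly_yield
    for y in die_yields:
        pkg_yield *= y
    return bom / pkg_yield, pkg_yield

die_costs = [50.0, 5.0, 5.0, 5.0, 5.0]  # one expensive compute chiplet + four cheap ones

untested = [0.90] * 5                   # wafer-sort-only yield per die (assumed)
kgd      = [0.99] * 5                   # yield after per-chiplet test (assumed)

for label, yields in [("untested dies", untested), ("known-good dies", kgd)]:
    cost, pkg_yield = cost_per_good_package(die_costs, yields)
    print(f"{label}: package yield {pkg_yield:.0%}, cost per good package ${cost:.2f}")
```

The same arithmetic underlies the warranty and responsibility questions raised earlier: whoever commits the dies to the package absorbs the scrap cost when any one of them turns out to be bad.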
Bhatnagar: From a business perspective, what is really important is the standardization. What is inside a chiplet is fine, but how it impacts the other chiplets around it is important. We would like to be able to make something and sell many copies of it. But if there is no standardization, then either we are taking a gamble by going for one thing and assuming everybody moves to it, or we make multiple versions of the same thing and that adds extra costs. To really justify a business case for any chiplet, or any sort of IP that goes with a chiplet, standardization is key for the electrical interconnect, packaging, and all other aspects of a system.
Fig. 1: A chiplet design. Source: Cadence.
Related Reading
Chiplets: 2023 (eBook). What chiplets are, what they are being used for today, and what they will be used for in the future.
Proprietary Vs. Commercial Chiplets. Who wins, who loses, and where are the big challenges for multi-vendor heterogeneous integration.
During today’s Nintendo Partner Showcase, which confirmed that former Xbox exclusives Grounded and Pentiment will launch on the Switch this year, the company also announced that five Rare classics are downloadable right now for folks with an active Nintendo Switch Online membership. It’s a blast from the past, y’all,…
Nintendo Directs are often different across different regions. The order of games presented may be a bit rearranged, or there may even be wholly different titles announced in one region than another. Today’s Partner Showcase may have featured the single biggest such discrepancy of all time. Why? Because we did it,…