AMD’s Zen 5 is a missed opportunity in messaging
Last week AMD did their 'Tech Day' and it was anything but.
Read more ▶
The post AMD’s Zen 5 is a missed opportunity in messaging appeared first on SemiAccurate.
Qualcomm's new AI/Copilot PCs have overwhelming hype, but the company's actions point to a far murkier picture.
Read more ▶
The post Qualcomm AI/Copilot PCs don’t live up to the hype appeared first on SemiAccurate.
Last week ARM unveiled their new IP that will be featured in the devices consumers buy in 2025.
Read more ▶
The post ARM Outs Their New IP lineup for 2024 appeared first on SemiAccurate.
A GameObject sprite using a Texture2D that is written into with Texture2D.ReadPixels works (is visible) in the Editor/Play Mode, but is invisible in a build.
The Texture2D is written to from the RT at timed intervals. When the game is first built, before the first interval where the script writes the RT, the sprite appears fine, but as soon as the script writes to it, it disappears.
What's odd is that I've tried replacing the Sprite with an Image using a material with the RenderTexture itself instead of the Texture2D, but that is invisible when built as well.
The fact that the Texture2D sprite appears before being written to makes me think it's not an inability to render, but an error in writing to the Texture2D. Yet that doesn't explain why the RenderTexture itself doesn't appear when used as an Image.
Overall, I'm just really confused and don't know what's going on or what to do.
Part of the code where the RT is written into the Texture2D:
public Texture2D CameraFeed;
[...]
IEnumerator RefreshCamera(float RefreshRate)
{
    yield return new WaitForSeconds(RefreshRate);
    yield return new WaitForEndOfFrame();

    // Reset the feed to black before re-reading it.
    CameraFeed.Reinitialize(1, 1);
    CameraFeed.SetPixel(1, 1, Color.black);
    CameraFeed.Reinitialize(510, 492);

    if (SelectedCamera == DiningRoom && CameraPowers.DiningRoomPower > 0)
    {
        RenderTexture.active = DiningRoomRT;
        CameraFeed.ReadPixels(new Rect(0, 0, DiningRoomRT.width, DiningRoomRT.height), 0, 0);
        CameraFeed.Apply();
    }
    [...]
    //Terminal
    TerminalLinesVisible = 0;
    TerminalTimer = 0;
    //
    StartCoroutine(RefreshCamera(RefreshRate));
}
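For reference, here is a minimal sketch (using the same field names as the snippet above, so it's an assumption about the surrounding code) of the pattern that usually fixes textures going blank in builds but not in the Editor: restore the previously active RenderTexture afterwards, and always call Apply() after ReadPixels:

IEnumerator RefreshCamera(float refreshRate)
{
    yield return new WaitForSeconds(refreshRate);
    yield return new WaitForEndOfFrame(); // ReadPixels only works after rendering finishes

    if (SelectedCamera == DiningRoom && CameraPowers.DiningRoomPower > 0)
    {
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = DiningRoomRT;
        CameraFeed.ReadPixels(new Rect(0, 0, DiningRoomRT.width, DiningRoomRT.height), 0, 0);
        CameraFeed.Apply();                 // upload the read pixels to the GPU copy
        RenderTexture.active = previous;    // restore so later rendering isn't redirected
    }

    StartCoroutine(RefreshCamera(refreshRate));
}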
I have an old project that I need to run on Unity 2022.3.40. It uses the Post Processing Behavior package, and post processing works absolutely fine in the editor, just as it did before migrating to the new Unity version. But when I make an Android build and run it, bloom turns bright surfaces black or bluish. From what I've found, Post Processing Behavior shows artifacts on Unity 2018 and newer. I can't switch to the Post Processing Stack because I'm not able to recreate the effects there.
Here is how it is in the build:
and here in the editor:
The settings for bloom in the post processing profile:
I'm using Post Processing Behavior 1.0.4.
The artifacts only happen on mobile. According to RenderDoc, the Uber shader causes this.
GameFromScratch.com
SDF – The Future of 3D Modelling
SDF, Signed Distance Fields or Signed Distance Functions, are a relatively new approach to computer graphics. Described on Wikipedia as: Let $\Omega$ be a subset of a metric space $X$ with metric $d$, and $\partial\Omega$ be its boundary. The distance between a point $x$ of $X$ and the subset $\partial\Omega$ of $X$ is defined as usual as
$$d(x,\partial\Omega)=\inf_{y\in\partial\Omega} d(x,y),$$
where $\inf$ denotes the infimum. The signed distance function from a point $x$ of $X$ to $\Omega$ is defined by
$$f(x)=\begin{cases} d(x,\partial\Omega) & \text{if } x\in\Omega \\ -d(x,\partial\Omega) & \text{if } x\notin\Omega. \end{cases}$$
In much simpler terms, […]
The post SDF – The Future of 3D Modelling appeared first on GameFromScratch.com.
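In code the idea is tiny. A minimal sketch (mine, not from the article) of a signed distance function for a circle, using the common graphics convention of negative inside, which is the opposite sign of the definition quoted above:

using System;

class SdfExample
{
    // Signed distance from point (px, py) to a circle of radius r at (cx, cy):
    // negative inside, zero on the boundary, positive outside.
    static float CircleSdf(float px, float py, float cx, float cy, float r)
    {
        float dx = px - cx, dy = py - cy;
        return (float)Math.Sqrt(dx * dx + dy * dy) - r;
    }

    static void Main()
    {
        Console.WriteLine(CircleSdf(3f, 4f, 0f, 0f, 5f)); // 0: exactly on the boundary
    }
}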
It’s now possible to quickly and easily play the original 1997 Diablo on your PC or phone via a simple website. Just load it up on your browser and you can start killing demons and skeletons like it’s the ‘90s all over again.
Read the full article on GamingOnLinux.
NBA 2K24 runs on the new ProPLAY system, which translates real-life NBA footage into the game as animations. The smoothness and realism that we have repeatedly praised in this review can be attributed to this change. The new animations from this system include dribble moves, jump shots, dunks, and even celebrations, making every single NBA player feel unique to use. Because 2K is ultimately a video game, its animations have generally been faster-paced than their real-life counterparts to fit the flow of play. This time around, the animations are exactly how they are in real life, slowing the game down a little and further resembling real basketball.
Despite being the best basketball game right now, NBA 2K24 is priced starting at $69.99. This is not the kind of title for players used to buying cheap PS5 games, though some retailers manage to lower the price a little, so you can benefit from the reduction. It may seem like a reasonable price, and to most it is. However, players who enjoy the online MyPlayer game modes are once again going to face either a tedious grind or a 99OVR-sized hole in their pockets. It's excellent value for players who enjoy quick-play games and MyLeagues, but there's no doubt that the MyPlayer modes will require a big investment, whether in time or money.
Despite the annual release pattern that 2K still implements, NBA 2K24 has succeeded in taking the right steps for the development of basketball games. Apart from improvements in terms of visuals, the gameplay here feels so authentic and fluid thanks to the responsive controls. Even though there are changes, veteran players will still feel familiar with the gameplay presented. If you are one of those people who buy annual games only when there are big changes, now is the time for you to buy NBA 2K24.
The NBA 2K24 gameplay reveal is live, marking the first of many expected to arrive over the next three-plus weeks. As many fans know, gameplay is everything in video games; it is the very essence of each title. In sports video games this is especially the case, where the smallest of blemishes can stick out like a sore thumb and cause an unpleasant experience. NBA 2K24's soundtrack harmonizes seamlessly with the on-screen action, elevating the gaming experience to a sensory masterpiece that resonates with every pass and slam dunk. Everything works together to provide the best sports simulation possible.
Overall, I'll say NBA 2K24 delivers an enjoyable gameplay experience. Both casual and hardcore players will have fun (and some complaints) playing NBA 2K24. While it doesn't introduce drastic changes, this decision aligns with the incremental delivery philosophy, or in other words: "If it ain't broke, don't fix it." Maintaining the core gameplay experience seems to be the guiding idea, and it is a good one, guaranteeing a predictable result. As it is, NBA 2K24 is among the best sports video games today. However, a minor issue arises when players occasionally move too quickly into the paint, leading to turnovers with minimal input. While this doesn't happen frequently, it's worth noting.
One of my favorite activities in Minecraft is going deep inside the caves and just exploring them. A few years ago, the developers behind Cave Digger reached out and asked me to review their game. Not too long after, the sequel got released and looked like it would be a VR exclusive, until I noticed that it appeared on the Nintendo Switch eShop. So I thought maybe it had also released on Steam, since after playing the Switch version, I felt this game was better played with keyboard and mouse. Now a non-VR version is on Steam… But is it worth it? Well, after playing the first sections of this game, I want to talk about it. As of writing this article, the latest update was on May 28th, 2024. Now, before we dive right into it, I want to invite you to leave a comment in the comment section with your thoughts and/or opinions on this game and/or the content of this article.
Risk of Staleness
In this game, we play as an unnamed miner who is thrown into the deep end when his digger breaks. You arrive at a mysterious valley where a hardy explorer once did his research. But why? What secrets lie in these valleys and the accompanying mines? That's for our miner to figure out. The story is told through various comic book pages you can uncover and, according to the Steam store page, has multiple endings. I'm quite curious where it's going to go.
So far, I haven't gotten too deep into the story. But from what I can read on the Steam store page, I think it has potential. I have my doubts about how the multiple endings will work, since comic books usually have one ending, right? Unless it all depends on which page(s) you find, in which order, or where. That's something I'll discover when I'm deeper into the game.
If this game is anything like the original, the story will take a backseat to the gameplay, and after 5 hours in, that's the case. The original game didn't have a lot of story to begin with, but more story in a game like this can be interesting.
There is one voice actor in this game. He does a pretty fine job and brings some life to the atmosphere. I replayed a bit of the first game, and I have to be honest, I appreciate the small voice lines during exploration. Even though you quickly hear every different line, they're a nice break, since they aren't spammed and don't appear that often.
One of the biggest changes in this game is that the cave is randomly generated each time you enter, so the game becomes a roguelike to a degree. But you can always exit to safety via the lifts, since dying in the caves means at least half of your obtained loot is dropped. The atmosphere this time around is very cohesive. This game presents itself as a sci-fi western, and it really feels like that. Something I really like is that it doesn't go overboard with the sci-fi and stays grounded. The technology could realistically exist today, apart from the unique enemies in the cave, that is.
With the story taking more of a backseat, it's quite important that the gameplay loop is enjoyable. The loop is simple: you explore the caves with 4 chosen tools. The three slots above the entrance hint at which tools you will need to bring to gather the most loot. You take the lift down and gather loot, while fighting enemies and avoiding pitfalls to survive. The goal is also to find the other elevator that takes you down to the next level to gather even more valuable ores. Back at the top, you feed the ores you gathered into the grinder to buy upgrades to your tools and environment to progress.
The big risk with this kind of gameplay loop is that it's just a numbers game: apart from maybe the visuals changing, the core concept is always the same, and the game risks becoming stale and repetitive. It's possible that it's just a "me thing", but I enjoy games like this more when there are some variations on the gameplay or some different puzzles. Thankfully, this game has that. There are a lot of things you can upgrade and improve to make each run feel rewarding, and each type of cave has different enemy types and unique layouts to keep you on your toes. In a way, I dare to compare the idea to Cult of the Lamb to a degree.
The music in this game is also a blast. It fits the atmosphere of each area like a glove. My favorite track is the one that plays in the lake caves; it sounds like you'd imagine a typical track like that to sound, and it gets more intense while you are fighting enemies down there. The silent moments when the music doesn't play feel a bit long, but I always know more music is coming, that it fits the atmosphere perfectly, and that it draws me further into the game. Sadly, the silences aren't this game's only problem, and I'd like to talk about the others.
No feedback
This game has an addictive gameplay loop, and I’m really curious how the multiplayer works. I haven’t tested the multiplayer in this game, but it looks like fun. Now, this game can be played solo perfectly fine.
Now, I don't know if VRKiwi took the VR version as the base for the non-VR version, but I have the impression that's the case. I especially notice it in the controls. They feel a bit floaty, like you aren't really connected to the ground, and a bit stiff, like you have to move your mouse the way you would a VR headset. You really have to play with the settings until you hit the sweet spot that feels right for you. For me, that meant lowering the sensitivity to 80, amongst other things. I highly recommend tweaking the settings to your liking; on the Nintendo Switch version, I had to lower the sensitivity to 40 before it felt right.
Still, the character control doesn't feel right. At first, I thought it was because the controls felt floaty… But after some testing, I think I found a few other problems that might cause it. First, the jump in this game is just silly. You can't really rely on it, since it doesn't always trigger when you hit the spacebar, and it's just a pathetic jump. You sometimes can't even jump out of ankle-high water.
Secondly, there are no sound effects for walking on most floors. You feel like you are floating, and it's jarring when you suddenly hear a sound effect when you walk over a table or a railway. Thirdly, climbing ropes, amongst other things, is insanely picky, and there is no real feedback or sound to show you grabbed the rope. Fourthly, the scroll order between tools is extremely weird. The numbers on the wheel run counterclockwise, but you scroll down, right, left, up, which still confuses me after 6 hours of playing this game.
And finally, some things are extremely finicky. For example, there are safe riddles you can solve down in the caves, but rotating the letter wheels to pick the right letter is more difficult than it should be. All of these things leave you feeling that you aren't always in control of your character and that you aren't getting feedback as a player on what's happening, making you unsure and doubting whether you are doing the right thing.
Prompts like "Use W/S to use the crank" should read "Hold W/S to use the crank", since you need to hold the key instead of pressing it. Small things like that could improve this game and its controls quite a lot. Overall, the controls are good, but they sometimes lack feedback to the player, either through sound effects or visual effects. The hammer, for example, barely has any sound effects when you use it, and it has a wind-up animation, making you unsure whether you are using it or not.
That is one of the biggest flaws in this game: the lack of feedback on your actions. Things like not knowing how many bullets are left in your revolver, or no sound effect when you actually hit an enemy. If there is one thing I'd use the built-in feedback tool for, it's to report the various moments when I expect feedback from the game, like a sound effect or visual effect. Maybe they appear in the form of rumble effects… But I'm not playing this game with a controller.
Reading this section of the article, I wouldn't blame you for thinking this game isn't good. Small bugs, like the "Press R to reload" text showing when your gun isn't equipped, or bullets leaving from the player model instead of the gun, don't improve things either. Yet I find myself looking past these problems, since the core gameplay still works. I find myself getting used to the jank and finding a very rough diamond. If the developers keep their promise of improving this game, more action feedback will bring a lot to it, as will fixing the small bugs mentioned in this paragraph.
Take the shovel animation, which sometimes looks weird: the arms appear to go through each other after a dig. Speaking of the shovel, the last dig is annoying, since you have to move a pixel or two for it to count and give you your goodies. But the bug I'd most love to see fixed is the multi-second freeze when you pick up something new or get a new codec entry. The game locks up like it's about to crash, but it doesn't.
What’s next for us?
Usually, I’m not really picky when it comes to the visuals of a game. As long as a game looks consistent, I’m quite happy. It needs to have a certain style so that you can quickly identify what’s what and enjoy the game.
Yet for this game, I do have some gripes with the visuals. Firstly, the contrast between some ores and the floor isn't strong enough. Sometimes I passed up ores because I couldn't spot them on the ground.
There are also a lot of objects that add detail to the cave, but you can barely interact with them. I'd love to see lily pads in lakes move a bit when you walk past them, or anything more than just clipping through them. Likewise, a sound effect when you hit a wall you can't mine: you get shouted at when you use the wrong or too weak a tool on something, so why not for the rest?
I think the biggest problem with the visuals is an identity crisis: the style isn't cohesive. There is a lot of cel shading going on, but also a lot of detail that gives off a more realistic vibe. Some textures aren't detailed enough and are stretched too wide, clashing with the rest of the visuals, which look more modern. The floor textures suffer most from this issue.
Looking back at this article, I think I'm being very critical of this game. I have played far worse and more broken games for 15€. This game even has customisation options for your character, and the developers are extremely open to feedback. It has a lot going for it: fun achievements to hunt, bosses at the end of runs, and an amazing auto-save system.
Apart from improving the character controls and adding some feedback on actions, I think this game is pretty decent. Yes, some polish is missing, like a tooltip explaining what the lever at the cave entrance does. I personally feel less conflicted about this game compared to the original. The growth in this title is immense and gives me a lot of hope for some amazing updates, DLC, or a new entry in the series.
The basis for an amazing title is here, and if you look past its shortcomings, this game is a blast to play. Maybe it's a bit too repetitive for some and more fun in short bursts. But when this game sinks its hooks into you, it really clicks. There is some polishing left to do, but for a rather new, VR-focused developer, this is amazing. It's their second non-VR game, and it shows a lot of promise.
It's a perfect relaxing game to wind down with, since it isn't too difficult and is rather forgiving. I wouldn't be surprised if I end up playing it after work to unwind, slowly working toward finishing it. Then again, as I write this I'm on summer holidays, so I wouldn't be surprised if I finish most of it during the break.
Like I said earlier, I feel less conflicted about this game than about the previous title. It's less repetitive and has a lot more going for it. It has its problems, yes. But if you enjoy games like Minecraft, SteamWorld Dig or Cave Digger, give the demo a chance. The demo gives a very good idea of what to expect, and if you enjoy it, buy the game. I'm enjoying myself quite a lot, and I'm happy I chose the PC version over the Switch version, since I feel it just plays better. But maybe, if I get used to the Switch controls, I might enjoy it on Switch as well.
With that said, I have said everything I wanted to say about this game for now. Maybe when I finish it, I'll write a full review with my final thoughts. But for now, the best conclusion is that it's an amazing step up from the original and, some unpolished things aside, a great game that comes recommended from me.
So, it’s time to wrap up this article with my usual outro. I hope you enjoyed reading it as much as I enjoyed writing it. I hope to be able to welcome you in another article, but until then have a great rest of your day and take care.
Meta has announced its new AITemplate framework for GPUs.
Read more ▶
The post Meta unveils its AITemplate GPU framework appeared first on SemiAccurate.
What is going on with Intel's GPU program? The chatter is negative but what is really happening?
Read more ▶
The post Why is Intel’s GPU program having problems? appeared first on SemiAccurate.
Why does AMD keep pissing off the press and inflicting needless wounds on their reputation?
Read more ▶
The post AMD Teases Ryzen 7000 appeared first on SemiAccurate.
There has been a lot of talk lately about Intel delaying their upcoming Meteor Lake CPU but what is really happening?
Read more ▶
The post Did Intel delay Meteor Lake? appeared first on SemiAccurate.
For Baldur’s Gate 3’s biggest fans who haven’t stopped talking and thinking about it since launch, it probably doesn’t feel like Larian Studios’ Dungeons & Dragons RPG was released a year ago. Yet here we are, a full trip around the sun since the RPG left early access and was finally unleashed on the world. For…
GameFromScratch.com
Clip Studio Paint and Poser 12 Humble Bundle
There are two new bundles of interest to game developers, starting with the Creativity Collection: Paint, Draw, Illustrate Humble Bundle, which would probably be better called the Clip Studio Paint bundle. It contains a one-year subscription to Clip Studio Paint as well as several brushes, models, panels and more. The […]
The post Clip Studio Paint and Poser 12 Humble Bundle appeared first on GameFromScratch.com.
I am trying to understand the camera API (perspective camera ONLY) of LibGDX.
It really does not make sense that you can call rotate and translate on many different properties of the camera. I would like to know: what is the difference between them?
Here is the list of rotate and translate methods that act on the LibGDX camera:
To my understanding, camera.view is the actual frustum of the camera, i.e. what can be seen on the screen! What is the difference between rotating (translating) the camera's direction and rotating (translating) the camera's view?
What if I just translate or rotate the camera itself, and NOT the view, the direction, or the position of the camera? What effect will that have?
I have read the documentation and it's really lacking! Could someone please help demystify these camera concepts?
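Not an authoritative answer, but a minimal sketch of how the pieces usually fit together in LibGDX (the numbers are arbitrary). position and direction are plain state; camera.update() recomputes view (the inverse of the camera's world transform, not the frustum; the frustum lives in camera.frustum) and combined (projection times view) from that state, which is why editing camera.view directly rarely does what you expect:

import com.badlogic.gdx.graphics.PerspectiveCamera;

PerspectiveCamera cam = new PerspectiveCamera(67f, 1280f, 720f);
cam.position.set(0f, 10f, 10f);   // where the camera sits in world space
cam.lookAt(0f, 0f, 0f);           // re-aims cam.direction at the origin
cam.translate(1f, 0f, 0f);        // moves cam.position, not the view matrix
cam.rotate(cam.up, 15f);          // rotates cam.direction around the up axis
cam.update();                     // rebuilds cam.view, cam.combined and cam.frustum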
I've been beginning to work with different kinds of splines (e.g. Bezier, Hermite, B-splines) in some of my work with computer graphics and I am trying to get a grasp of how they work (mathematically and programmatically). I know that generally (at least for cubic splines), you will have 4 control points that influence the position of points along an interpolated line. For example, with Bezier curves (as far as I understand), there can be two end points where the line will begin and terminate and two control points that influence the direction of the line.
I've been programming Bezier, Hermite, and B-splines (in JavaScript on an HTML5 canvas), and what I don't understand is how to choose these 4 points in order to see a specific curve rendered on the screen. So far, I have only been able to render at least part of a curve by randomly playing around with different numbers for each point.
This is one of the many questions I have about the inner workings of splines so could someone provide an overview on how they work and how they can be programmed (most of the examples online are more theoretical and rather unspecific)?
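Not a full overview, but a minimal sketch of the cubic Bezier case in JavaScript (the point values are arbitrary examples). The curve starts at p0, ends at p3, and is pulled toward p1 and p2 without passing through them, so choosing the four points is exactly choosing where the curve starts, where it ends, and how it bulges:

// Evaluate a cubic Bezier at t in [0, 1] from its four control points.
function cubicBezier(p0, p1, p2, p3, t) {
  const u = 1 - t;
  return {
    x: u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x,
    y: u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y,
  };
}

// Sample the curve and draw it on a canvas 2D context.
function drawBezier(ctx) {
  const p0 = {x: 50, y: 200}, p1 = {x: 100, y: 50},
        p2 = {x: 300, y: 50}, p3 = {x: 350, y: 200};
  ctx.beginPath();
  ctx.moveTo(p0.x, p0.y);
  for (let i = 1; i <= 100; i++) {
    const p = cubicBezier(p0, p1, p2, p3, i / 100);
    ctx.lineTo(p.x, p.y);
  }
  ctx.stroke();
}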
K|NGP|N GPUs are coming back
Well-known overclocker Vince "K|NGP|N" Lucido is making a comeback, switching from EVGA to PNY. Hopefully, we'll see some interesting RTX 50 series GPUs with the K|NGP|N stamp in the future.
In case you missed it, Vince "K|NGP|N" Lucido launched a lot of high-end/enthusiast graphics cards and motherboards under the EVGA brand, and with EVGA ditching Nvidia, or the other way around, we have not seen such products for quite some time. Now, according to a Gamers Nexus interview with Lucido, we could again see those products under the PNY brand.
Of course, developing such graphics cards takes both time and money, so we don't expect them anytime soon, but hopefully, we'll see some Nvidia Geforce RTX 50 series GPUs with the K|NGP|N design in the near future.
https://www.youtube.com/watch?v=3ZQyNvZy5do
Larger footprint and hidden power connector
Gigabyte has silently launched the new RTX 4070 Ti SUPER Windforce MAX, a much larger version of the original RTX 4070 Ti SUPER Windforce OC launched earlier this year.
The new model gets a few updates, including a higher GPU clock, a larger Windforce cooler and a neat hidden power connector, which is a welcome addition. As spotted by Videocardz.com, the new Gigabyte RTX 4070 Ti SUPER Windforce MAX is 7cm longer and 1cm wider than the original, and also features a slightly higher 2,655MHz GPU clock. In case you missed it, the RTX 4070 Ti SUPER comes with 8,448 CUDA cores and 16GB of GDDR6X memory.
In addition to the larger footprint, the new Gigabyte RTX 4070 Ti SUPER Windforce MAX places the power connector behind the cooler, which both prevents bending of the connector and hides it pretty well, allowing much neater cable management.
Unfortunately, the new Gigabyte RTX 4070 Ti SUPER Windforce MAX has yet to show up on retail/e-tail shelves so there is no word on the price or the availability date.
I'm developing a LOD selection system and I would like to select a LOD level based on screen-space error. Could someone explain how I can accurately compute the screen-space error?
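One common formulation (a sketch of the usual perspective-camera derivation, not the only definition): project the LOD's geometric error, in world units, onto the screen by dividing by the view distance and scaling by the pixels-per-world-unit factor of the projection:

using System;

static class LodMetrics
{
    // Screen-space error in pixels for a LOD whose geometric error is
    // 'geometricError' (world units), seen from 'distance' by a perspective
    // camera with the given vertical field of view and viewport height.
    static float ScreenSpaceError(float geometricError, float distance,
                                  float screenHeightPx, float verticalFovRadians)
    {
        return geometricError * screenHeightPx /
               (2.0f * distance * (float)Math.Tan(verticalFovRadians * 0.5f));
    }
    // Typical use: walk the LODs coarse-to-fine and pick the first one whose
    // error falls under a pixel threshold (often 1-2 px).
}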
Only two years after it appeared, it rides off into the sunset
Intel's decided it's time to start saying goodbye to its Ponte Vecchio HPC GPUs.
According to Tom's Hardware, Chipzilla is winding them down just a couple of years after they first hit the scene, all because Intel's got its eye on the next big thing: Falcon Shores and the new Gaudi 2 and 3 AI accelerators.
Intel's not yet ditching Ponte Vecchio completely, but they're not pushing them anymore. They'll keep making them, but only for those who already have them. New customers will have to see what AMD and Nvidia offer.
The HPC and AI game is mega competitive these days, what with the AI craze, and Ponte Vecchio is old news now, especially with AMD's shiny new MI300 and Nvidia's B200 Blackwell on the block.
Ponte Vecchio first popped up in 2022, and it was the biggest GPU Intel ever made, packed with over 100 billion transistors across 47 tiles made on five different process nodes. Depending on how you set it up, a Ponte Vecchio GPU could have up to eight compute tiles and four HBM2e stacks, all based on Intel's beefy Xe HPC architecture, which is even more hardcore than the standard Xe stuff in their Arc Alchemist GPUs.
Ponte Vecchio hasn't exactly set the world on fire. Take the Aurora Supercomputer – it's not exactly leading the pack, guzzling more juice and delivering less oomph than AMD's Frontier.
The latest Top500 list has Aurora in second place with 1,012 petaflops while consuming 38,698 kW, while the older Frontier's still ahead with 1,206 petaflops at only 22,786 kW. Sure, Aurora's got its moments with certain tasks, but it was meant to hit 2 exaflops; it's only halfway there.
Ponte Vecchio's looking a bit long in the tooth next to AMD's MI300 and Nvidia's B200. And Falcon Shores was meant to be out in 2024, mixing CPU and GPU cores to take on AMD's MI300 APU and Nvidia's Grace Hopper Superchip. However, Intel pushed it back to 2025 and scaled it down to just a GPU.
Kicking off Ponte Vecchio's retirement before Falcon Shores rocks up means Intel can throw more at getting Falcon Shores out the door quicker. It might help them catch up a bit with AMD and Nvidia, but they're still playing catch-up, since the other two are already cooking up the next gen of HPC hardware.
Supergiant Games’ roguelike Hades 2 has been out in early access for a little over a week and a half and it’s been a pretty wonderful (and disgustingly hot) time so far. However, some players have had difficulty adjusting to protagonist Melinoë’s dash, as well as her ability to sprint. Luckily for those folks,…
The ACEMAGIC M2A Starship is a compact desktop computer with the power of a high-performance gaming laptop from a few years ago. It supports up to a 45-watt Intel Core i9-12900H 14-core, 20-thread processor and NVIDIA GeForce RTX 3080M graphics featuring 16GB of GDDR6 memory. It’s also a weird looking little computer that looks more like a gaming […]
The post ACEMAGIC M2A mini PC features Intel Core Alder Lake-H processor, NVIDIA RTX 30 series graphics and a sci-fi inspired design appeared first on Liliputing.
I am trying to generate the height gradient (the slope) for a heightmap of mine using a compute shader in Unity, and the result has weird ringing artifacts; I'm completely lost as to what could be wrong. I thought it might be a precision issue, but increasing the magnitudes of the height values or moving from floats to doubles didn't solve it. Height map, and resulting gradient map with the Y-axis values in the green channel, pictured here:
I also exported the texture to disk to check in Photoshop, and the artifacts are there too, so this isn't a display issue.
My compute shader is super simple:
Texture2D<float4> Input;   // height map input (declaration not shown in the original snippet)

SamplerState samplerInput
{
    Filter = MIN_MAG_LINEAR_MIP_NEAREST;
    AddressU = Clamp;
    AddressV = Clamp;
};

float3 calcHeightandGrad(int posX, int posY)
{
    float x = posX;
    float y = posY;
    float res = 2048 - 1;

    // Bilinear samples of the height map at this texel and two neighbours.
    // Note: (x - 1, y) is actually the *left* neighbour despite the name.
    double sample      = Input.SampleLevel(samplerInput, (float2(x, y) / res), 0).x * 100;
    double topSample   = Input.SampleLevel(samplerInput, (float2(x, y - 1) / res), 0).x * 100;
    double rightSample = Input.SampleLevel(samplerInput, (float2(x - 1, y) / res), 0).x * 100;

    // One-sided finite differences for the gradient.
    double dx = rightSample - sample;
    double dy = topSample - sample;
    return float3(sample, dx, dy);
}
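For what it's worth, a common cause of ringing like this is sampling at texel corners with a linear filter, so neighbouring texels get blended unevenly. A sketch of the usual fix (assuming a 2048×2048 input, as the code above suggests): offset by half a texel and divide by the full texture size so every lookup lands on a texel center:

// Sample at texel centers so the linear filter returns exact texel values.
float2 uv = (float2(posX, posY) + 0.5) / 2048.0;
float centerSample = Input.SampleLevel(samplerInput, uv, 0).x * 100;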
I'm trying to come up with my own game using Java AWT after watching a few video tutorials. However, I encountered a problem where I cannot draw an external image file that I loaded using the BufferedImage object.
The problem seems to be in the method that I'm using to draw the image on the screen, where I'm using the Graphics2D.drawImage() method.
Here is part of my code (I modified and skipped parts that seemed irrelevant to the topic):
Window Class
import java.awt.Dimension;
import javax.swing.JFrame;
import javax.swing.JPanel;

public class Window extends JPanel {
    public Window(int width, int height) {
        JFrame frame = new JFrame("Zerro's Game");
        frame.setPreferredSize(new Dimension(width, height));
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setLocationRelativeTo(null);
        frame.setResizable(false);
        frame.setVisible(true);
        frame.pack();
        // Note: this JPanel is never added to the frame, so nothing
        // rendered through it can appear.
    }
}
Game Class
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import javax.swing.JFrame;

public class Game extends JFrame implements Runnable {
    // Dimensions for the main frame
    private static final int WIDTH = 640;
    private static final int HEIGHT = 480;

    // Off-screen image used as a back buffer
    private BufferedImage image;
    private Graphics2D g;

    public void run() {
        this.requestFocus();

        // For rendering purposes
        image = new BufferedImage(WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB);
        g = (Graphics2D) image.getGraphics();

        // Game loop
        long now;
        long updateTime;
        long wait;
        final int TARGET_FPS = 60;
        final long OPTIMAL_TIME = 1000000000 / TARGET_FPS;

        while (isRunning) {
            now = System.nanoTime();
            update();
            render();
            // Was 'now - System.nanoTime()', which is always negative.
            updateTime = System.nanoTime() - now;
            wait = Math.max(0, (OPTIMAL_TIME - updateTime) / 1000000);
            try {
                Thread.sleep(wait);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    private void render() {
        // Draws into the off-screen image only; nothing here ever copies
        // that image to the visible window.
        if (gameState == STATE.GAME) {
            handler.render(g);
        } else if (gameState == STATE.MENU) {
            menu.render(g);
        }
    }
Menu Class
import java.awt.Graphics2D;
import java.awt.event.KeyAdapter;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;

public class Menu extends KeyAdapter {
    private BufferedImage image;

    public void render(Graphics2D g) {
        try {
            // Re-reads the file every frame; better loaded once in a constructor.
            image = ImageIO.read(getClass().getResource("/Pixel_Background.png"));
            g.drawImage(image, 0, 0, null);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
This code results in an empty frame without any content inside. I confirmed that the image loads properly by using System.out.println(image.getHeight()), which prints the exact height of the image I'm using.
I've seen some comments on the internet saying I need to use the paintComponent() method. However, I'm wondering if it's a bad idea to use paintComponent() in game development, as the video tutorials (3 of them) that I watched didn't use it to draw images.
Also, I'm wondering about the reason why we pass in the Graphics2D object as parameter in all the render() methods.
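For comparison, a minimal sketch of the paintComponent() route (a hypothetical class, not from the tutorials). Swing calls this method with the Graphics already bound to the visible panel, which is also why render methods conventionally receive a Graphics2D parameter instead of creating their own:

import java.awt.Graphics;
import java.awt.Graphics2D;
import javax.swing.JPanel;

public class GamePanel extends JPanel {
    @Override
    protected void paintComponent(Graphics graphics) {
        super.paintComponent(graphics);
        // This Graphics draws to the visible panel, so it reaches the screen.
        Graphics2D g = (Graphics2D) graphics;
        g.drawString("Hello from paintComponent", 20, 20);
    }
}
// A game loop calls panel.repaint() each frame to schedule a redraw on the
// Swing event thread.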
I must say that I am really confused by how a view matrix is constructed and works.
First, there are three terms: view matrix, lookAt matrix, and camera transformation matrix. Are those three the same, or different things? From what I understand, the camera transformation matrix is basically the model matrix of the camera, and the view matrix is its inverse. The lookAt matrix is basically for going from world space to view space, and I think I understand how it works (taking dot products to project a point into another coordinate system).
I am also confused by the fact that sometimes the view matrix seems to be built from a translation and dot products, and other times from a translation and a rotation (with cos and sin).
There are also quaternions. When you convert a quaternion to a matrix, what kind of matrix is this?
Can someone explain to me how it really works, or point me towards a good resource?
Thank you.
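For reference, a sketch of the usual relationship (one convention among several; handedness and row/column order vary by API). With $R$ the camera's rotation, $T(t)$ a translation by the camera position $t$, and $e$ the eye position:

\[
V = M_{\text{camera}}^{-1} = \big(T(t)\,R\big)^{-1} = R^{\mathsf{T}}\,T(-t)
\]
\[
\text{lookAt: } f = \frac{c - e}{\lVert c - e \rVert},\quad
r = \frac{f \times u_{\text{up}}}{\lVert f \times u_{\text{up}} \rVert},\quad
u = r \times f,
\qquad
V = \begin{pmatrix} r_x & r_y & r_z & -r\cdot e \\ u_x & u_y & u_z & -u\cdot e \\ -f_x & -f_y & -f_z & f\cdot e \\ 0 & 0 & 0 & 1 \end{pmatrix}
\]

Both constructions produce the same $V$: the dot products in the lookAt form are exactly the translation column of $R^{\mathsf{T}}\,T(-t)$, which is why some sources show dot products and others show explicit sin/cos rotations. A unit quaternion converts to a pure rotation matrix $R$ (no translation), so it supplies only the rotational part of these matrices.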
Hades 2 is here, bringing with it all kinds of questions about how to gather materials, unlock the ward, and a whole host of other things. Well, we’ve got answers. We’ll also shine a little light on the darkness of Shadow Moses with some fun tips for Metal Gear Solid, get you started right in V Rising, and more.
AMD just let out some of their MI300 plans albeit in a rather backhanded way.
Read more ▶
The post AMD outs MI300 plans… sort of appeared first on SemiAccurate.
GameFromScratch.com
Inkscape 1.4 Beta Released
The open source vector graphics application Inkscape just released the Inkscape 1.4 beta. Inkscape is available for Windows, Mac and Linux. Key links: Inkscape Homepage, Inkscape 1.4 Beta Download, Inkscape 1.4 Release Notes, Splashscreen Contest. You can learn more about the new features in […]
The post Inkscape 1.4 Beta Released appeared first on GameFromScratch.com.
Hades 2 sees the cast of the first game waging a war against the god of time, Chronos, who’s sacked the underworld and the family that oversaw it, including the first game’s protagonist, Zagreus, and its main antagonist, Hades himself. In order to do that, you’ve got to be a pretty big deal, and when you finally take…
Hades 2 is jam-packed with places to discover, and not all of them start at Erebus. If you’ve already unlocked the ward at the Crossroads, you’ll know that the surface is available as an alternative path to take, filled with its own vistas, rewards, and challenges. Sadly, protagonist Melinoë won’t be able to stay…
Aspyr has confirmed it has restored posters that were "inadvertently" removed from Tomb Raider 1-3 Remastered in its last patch.
Despite them being flagged by the official Tomb Raider website as a detail to go find, a set of Lara Croft pin-up style posters were mysteriously pulled from the game's Remastered graphical mode recently with no warning.
Aspyr now says this removal was "inadvertent" and happened while the team was making "several texture and graphical updates to the HD version".
To offer much better performance
According to the latest rumor, AMD might implement a completely new ray tracing (RT) engine in its upcoming RDNA 4 architecture GPUs, which are expected to debut with the Radeon RX 8000 graphics card series. The new RT engine could deliver much higher performance than the current RDNA 3-based GPUs.
Although AMD managed to roll out a competitive product series with its Radeon RX 7000 graphics cards, it has been struggling to keep up with Nvidia when it comes to ray tracing support and performance. According to the latest rumor, coming from Kepler_L2 over at Twitter, AMD is now working on a completely new RT engine; RDNA 3 GPUs used an RT engine only incrementally improved from the RDNA 2 architecture.
RT looks brand new 🤔
— Kepler (@Kepler_L2) April 30, 2024
This rumor does make sense considering that the upcoming PlayStation 5 Pro console, which will come with a custom RDNA GPU, is expected to use BVH8 (eight-wide bounding volume hierarchy) traversal shaders, according to Videocardz.com. This suggests an RT engine taken from the RDNA 4 GPU, as the current RT code only supports four-wide hierarchies. On the other hand, recent rumors suggest that AMD is going to focus mostly on the mid-range to high-end graphics card segments, while the enthusiast series will have to wait for the RDNA 5 architecture.
AMD is expected to launch its RDNA 4-based graphics cards in the second half of the year, so we might hear more about them during the Computex 2024 show.
Dual-slot, triple-fan cooler
Galax has unveiled a rather interesting low-profile graphics card based on the AD107 GPU, the Galax RTX 4060 low-profile. Although currently only available in Japan, it could come to the rest of the world, as it should be quite popular in the SFF market.
Thanks to the smaller PCB needed for the AD107 and its four memory chips, the Galax RTX 4060 low-profile is 18.2cm long and 6.9cm tall. To keep the GPU well-cooled, Galax went for a dual-slot cooler with three 40mm fans. The RTX 4060 is a good fit for a low-profile design; both Asus and Gigabyte offer similar cards.
In case you missed it, the AD107 GPU packs 3,072 CUDA cores and 8GB of GDDR6 memory on a 128-bit memory interface. Galax also applied a minor factory overclock to the GPU, but it still needs just a single 8-pin PCIe power connector. The card comes with four display outputs: two DisplayPort 1.4a and two HDMI 2.1a ports.
Hopefully, we'll see the new Galax RTX 4060 low-profile on retail/e-tail shelves soon. (via Videocardz.com)
You don’t need to be a full-on graphics designer or Photoshop pro to get your desired images anymore. Thanks to advanced AI tools, it now ...
The post Best AI Image Generators to Try Out In 2024 appeared first on Gizchina.com.
Spoilers for the Fallout show follow.
The Simply NUC Onyx line of mini PCs are small but powerful computers with support for up to an Intel Core i9-13900H processor. Until recently, the company only offered models with integrated graphics, but now Simply NUC has launched a new Onyx Pro computer with a slightly larger chassis that supports an optional discrete GPU. […]
The post Simply NUC Onyx Pro mini PC pairs Intel Core i9-13900H with discrete graphics options appeared first on Liliputing.
With the 2018 launch of RTX technologies and the first consumer GPU built for AI — GeForce RTX — NVIDIA accelerated the shift to AI computing. Since then, AI on RTX PCs and workstations has grown into a thriving ecosystem with more than 100 million users and 500 AI applications.
Generative AI is now ushering in a new wave of capabilities from PC to cloud. And NVIDIA’s rich history and expertise in AI is helping ensure all users have the performance to handle a wide range of AI features.
Users at home and in the office are already taking advantage of AI on RTX with productivity- and entertainment-enhancing software. Gamers feel the benefits of AI on GeForce RTX GPUs with higher frame rates at stunning resolutions in their favorite titles. Creators can focus on creativity, instead of watching spinning wheels or repeating mundane tasks. And developers can streamline workflows using generative AI for prototyping and to automate debugging.
The field of AI is moving fast. As research advances, AI will tackle more complex tasks. And the demanding performance needs will be handled by RTX.
In its most fundamental form, artificial intelligence is a smarter type of computing. It’s the capability of a computer program or a machine to think, learn and take actions without being explicitly coded with commands to do so, or a user having to control each command.
AI can be thought of as the ability for a device to perform tasks autonomously, by ingesting and analyzing enormous amounts of data, then recognizing patterns in that data — often referred to as being “trained.”
AI development is always oriented around developing systems that perform tasks that would otherwise require human intelligence, and often significant levels of input, to complete — only at speeds beyond any individual’s or group’s capabilities. For this reason, AI is broadly seen as both disruptive and highly transformational.
A key benefit of AI systems is the ability to learn from experiences or patterns inside data, adjusting conclusions on their own when fed new inputs or data. This self-learning allows AI systems to accomplish a stunning variety of tasks, including image recognition, speech recognition, language translation, medical diagnostics, car navigation, image and video enhancement, and hundreds of other use cases.
The next step in the evolution of AI is content generation — referred to as generative AI. It enables users to quickly create new content, and iterate on it, based on a variety of inputs, which can include text, images, sounds, animation, 3D models or other types of data. It then generates new content in the same or a new form.
Popular language applications, like the cloud-based ChatGPT, allow users to generate long-form copy based on a short text request. Image generators like Stable Diffusion turn descriptive text inputs into the desired image. New applications are turning text into video and 2D images into 3D renderings.
AI PCs are computers with dedicated hardware designed to help AI run faster. It’s the difference between sitting around waiting for a 3D image to load, and seeing it update instantaneously with an AI denoiser.
On RTX GPUs, these specialized AI accelerators are called Tensor Cores. And they dramatically speed up AI performance across the most demanding applications for work and play.
One way that AI performance is measured is in teraops, or trillion operations per second (TOPS). Similar to an engine’s horsepower rating, TOPS can give users a sense of a PC’s AI performance with a single metric. The current generation of GeForce RTX GPUs offers performance options that range from roughly 200 AI TOPS all the way to over 1,300 TOPS, with many options across laptops and desktops in between. Professionals get even higher AI performance with the NVIDIA RTX 6000 Ada Generation GPU.
To put this in perspective, the current generation of AI PCs without GPUs range from 10 to 45 TOPS.
More and more types of AI applications will require the benefits of having a PC capable of performing certain AI tasks locally — meaning on the device rather than running in the cloud. Benefits of running on an AI PC include that computing is always available, even without an internet connection; systems offer low latency for high responsiveness; and increased privacy so that users don’t have to upload sensitive materials to an online database before it becomes usable by an AI.
RTX GPUs bring more than just performance. They introduce capabilities only possible with RTX technology. Many of these AI features are accessible — and impactful — to millions, regardless of the individual’s skill level.
From AI upscaling to improved video conferencing to intelligent, personalizable chatbots, there are tools to benefit all types of users.
RTX Video uses AI to upscale streaming video and display it in HDR, bringing lower-resolution video in standard dynamic range up to vivid, high-resolution high dynamic range at up to 4K. RTX users can enjoy the feature with one-time, one-click enablement on nearly any video streamed in a Chrome or Edge browser.
NVIDIA Broadcast, a free app for RTX users with a straightforward user interface, has a host of AI features that improve video conferencing and livestreaming. It removes unwanted background sounds like clicky keyboards, vacuum cleaners and screaming children with Noise and Echo Removal. It can replace or blur backgrounds with better edge detection using Virtual Background. It smooths low-quality camera images with Video Noise Removal. And it can stay centered on the screen with eyes looking at the camera no matter where the user moves, using Auto Frame and Eye Contact.
Chat with RTX is a local, personalized AI chatbot demo that’s easy to use and free to download.
Users can easily connect local files on a PC to a supported large language model simply by dropping files into a single folder and pointing the demo to the location. It enables queries for quick, contextually relevant answers.
Since Chat with RTX runs locally on Windows with GeForce RTX PCs and NVIDIA RTX workstations, results are fast — and the user’s data stays on the device. Rather than relying on cloud-based services, Chat with RTX lets users process sensitive data on a local PC without the need to share it with a third party or have an internet connection.
Over the past six years, game performance has seen the greatest leaps with AI acceleration. Gamers have been turning NVIDIA DLSS on since 2019, boosting frame rates and improving image quality. It’s a technique that uses AI to generate pixels in video games automatically. With ongoing improvements, it now increases frame rates by up to 4x.
And with the introduction of Ray Reconstruction in the latest version, DLSS 3.5, visual quality is further enhanced in some of the world’s top titles, setting a new standard for visually richer and more immersive gameplay.
There are now over 500 games and applications that have revolutionized the ways people play and create with ray tracing, DLSS and AI-powered technologies.
Beyond frames, AI is set to improve the way gamers interact with characters and remaster classic games.
NVIDIA ACE microservices — including generative AI-powered speech and animation models — are enabling developers to add intelligent, dynamic digital avatars to games. Demonstrated at CES, ACE won multiple awards for its ability to bring game characters to life as a glimpse into the future of PC gaming.
NVIDIA RTX Remix, a platform for modders to create stunning RTX remasters of classic games, delivers generative AI tools that can transform basic textures from classic games into modern, 4K-resolution, physically based rendering materials. Several projects have already been released or are in the works, including Half-Life 2 RTX and Portal with RTX.
AI is unlocking creative potential by reducing or automating tedious tasks, freeing up time for pure creativity. These features run fastest or solely on PCs with NVIDIA RTX or GeForce RTX GPUs.
Adobe Premiere Pro’s Enhance Speech tool is accelerated by RTX, using AI to remove unwanted noise and improve the quality of dialogue clips so they sound professionally recorded. It’s up to 4.5x faster on RTX vs. Mac. Another Premiere feature, Auto Reframe, uses GPU acceleration to identify and track the most relevant elements in a video and intelligently reframes video content for different aspect ratios.
Another time-saving AI feature for video editors is DaVinci Resolve’s Magic Mask. Previously, if editors needed to adjust the color/brightness of a subject in one shot or remove an unwanted object, they’d have to use a combination of rotoscoping techniques or basic power windows and masks to isolate the subject from the background.
Magic Mask has completely changed that workflow. With it, simply draw a line over the subject and the AI will process for a moment before revealing the selection. And GeForce RTX laptops can run the feature 2.5x faster than the fastest non-RTX laptops.
This is just a sample of the ways that AI is increasing the speed of creativity. There are now more than 125 AI applications accelerated by RTX.
AI is enhancing the way developers build software applications through scalable environments, hardware and software optimizations, and new APIs.
NVIDIA AI Workbench helps developers quickly create, test and customize pretrained generative AI models and LLMs using PC-class performance and memory footprint. It’s a unified, easy-to-use toolkit that can scale from running locally on RTX PCs to virtually any data center, public cloud or NVIDIA DGX Cloud.
After building AI models for PC use cases, developers can optimize them using NVIDIA TensorRT — the software that helps developers take full advantage of the Tensor Cores in RTX GPUs.
TensorRT acceleration is now available in text-based applications with TensorRT-LLM for Windows. The open-source library increases LLM performance and includes pre-optimized checkpoints for popular models, including Google’s Gemma, Meta Llama 2, Mistral and Microsoft Phi-2.
Developers also have access to a TensorRT-LLM wrapper for the OpenAI Chat API. With just one line of code change, continue.dev — an open-source autopilot for VS Code and JetBrains that taps into an LLM — can use TensorRT-LLM locally on an RTX PC for fast, local LLM inference using this popular tool.
Every week, we’ll demystify AI by making the technology more accessible, and we’ll showcase new hardware, software, tools and accelerations for RTX AI PC users.
The iPhone moment of AI is here, and it’s just the beginning. Welcome to AI Decoded.
Get weekly updates directly in your inbox by subscribing to the AI Decoded newsletter.
To enhance the gaming experience, studios and developers spend tremendous effort creating photorealistic, immersive in-game environments.
But non-playable characters (NPCs) often get left behind. Many behave in ways that lack depth and realism, making their interactions repetitive and forgettable.
Inworld AI is changing the game by using generative AI to drive NPC behaviors that are dynamic and responsive to player actions. The Mountain View, Calif.-based startup’s Character Engine, which can be used with any character design, is helping studios and developers enhance gameplay and improve player engagement.
Register for NVIDIA GTC, which takes place March 17-21, to hear how leading companies like Inworld AI are using the latest innovations in AI and graphics. And join us at Game Developers Conference (GDC) to discover how the latest generative AI and RTX technologies are accelerating game development.
The Inworld team aims to develop AI-powered NPCs that can learn, adapt and build relationships with players while delivering high-quality performance and maintaining in-game immersion.
To make it easier for developers to integrate AI-based NPCs into their games, Inworld built Character Engine, which uses generative AI running on NVIDIA technology to create immersive, interactive characters. It’s built to be production-ready, scalable and optimized for real-time experiences.
The Character Engine comprises three layers: Character Brain, Contextual Mesh and Real-Time AI.
Character Brain orchestrates a character’s performance by syncing its multiple personality machine learning models, such as those for text-to-speech, automatic speech recognition, emotions, gestures and animations.
The layer also enables AI-based NPCs to learn and adapt, navigate relationships and perform motivated actions. For example, users can create triggers using the “Goals and Action” feature to program NPCs to behave in a certain way in response to a given player input.
Contextual Mesh allows developers to set parameters for content and safety mechanisms, custom knowledge and narrative controls. Game developers can use the “Relationships” feature to create emergent narratives, such that an ally can turn into an enemy or vice versa based on how players treat an NPC.
One big challenge developers face when using generative AI is keeping NPCs in-world and on-message. Inworld’s Contextual Mesh layer helps overcome this hurdle by rendering characters within the logic and fantasy of their worlds, effectively avoiding the hallucinations that commonly appear when using large language models (LLMs).
The Real-Time AI layer ensures optimal performance and scalability for real-time experiences.
Inworld, a member of the NVIDIA Inception program, which supports startups through every stage of their development, uses NVIDIA A100 Tensor Core GPUs and NVIDIA Triton Inference Server as integral parts of its generative AI training and deployment infrastructure.
Inworld used the open-source NVIDIA Triton Inference Server software to standardize other non-generative machine learning model deployments required to power Character Brain features, such as emotions. The startup also plans to use the open-source NVIDIA TensorRT-LLM library to optimize inference performance. Both NVIDIA Triton Inference Server and TensorRT-LLM are available with the NVIDIA AI Enterprise software platform, which provides security, stability and support for production AI.
Inworld also used NVIDIA A100 GPUs within Slurm-managed bare-metal machines for its production training pipelines. Similar machines wrapped in Kubernetes help manage character interactions during gameplay. This setup delivers real-time generative AI at the lowest possible cost.
“We chose to use NVIDIA A100 GPUs because they provided the best, most cost-efficient option for our machine learning workloads compared to other solutions,” said Igor Poletaev, vice president of AI at Inworld.
“Our customers and partners are looking to find novel and innovative ways to drive player engagement metrics by integrating AI NPC functionalities into their gameplay,” said Poletaev. “There’s no way to achieve real-time performance without hardware accelerators, which is why we required GPUs to be integrated into our backend architecture from the very beginning.”
Inworld’s generative AI-powered NPCs have enabled dynamic, evergreen gaming experiences that keep players coming back. Developers and gamers alike have reported enhanced player engagement, satisfaction and retention.
Inworld has powered AI-based NPC experiences from Niantic, LG UPlus, Alpine Electronics and more. One open-world virtual reality game using the Inworld Character Engine saw a 5% increase in playtime, while a detective-themed indie game garnered over $300,000 in free publicity after some of the most popular Twitch streamers discovered it.
Learn more about Inworld AI and NVIDIA technologies for game developers.
Wants to keep specs members only
AMD has hit a brick wall in its bid to boost its open-source Linux graphics driver.
The US company has been trying to get permission from the HDMI Forum, which controls the HDMI standard, to add HDMI 2.1+ features to its driver. HDMI 2.1+ allows for super-fast and high-quality video and audio over a single cable.
According to Phoronix, the HDMI Forum has said no to AMD, leaving Linux users in the lurch. For three years, AMD has been struggling to fix a bug that stops Linux users from enjoying 4K@120Hz or 5K@240Hz via HDMI 2.1.
The problem is that the HDMI Forum has made its specs private, meaning only members can access them. AMD and the X.Org Foundation, a group that supports open-source graphics, have been begging the HDMI Forum to let them use the specs for their open-source driver. AMD's Linux engineers have worked hard to develop a solution that meets the HDMI Forum's demands.
However, the HDMI Forum has rejected AMD's proposal, saying it does not allow open-source implementations of HDMI 2.1+. This means that AMD can't fix the HDMI bug in Linux with a new driver, so HDMI 2.1+ features will not work with open-source drivers. Linux users will have to use DisplayPort instead, another cable that supports high-speed video and audio.
Claims to have created the missing link
Software King of the World, Microsoft, has unveiled a new Windows API to provide game developers with a seamless way to integrate super-resolution AI-upscaling features from Nvidia, AMD, and Intel.
Writing in his bog, Volish programme manager Joshua Tucker describes Microsoft’s new DirectSR API as the “missing link” between games and super-resolution technologies, promising a “smoother, more efficient experience that scales across hardware.”
The API enables multi-vendor SR [super resolution] through a common set of inputs and outputs, allowing a single code path to activate various solutions, including Nvidia DLSS Super Resolution, AMD FidelityFX Super Resolution, and Intel XeSS.
The idea is that developers will be able to support this DirectSR API rather than have to write code for every upscaling technology.
The blog post follows the recent spotting of an “Automatic Super Resolution” feature in a test version of Windows 11, which pledged to “use AI to make supported games play more smoothly with enhanced details.” The feature now appears to plug into existing super-resolution technologies like DLSS, FSR, and XeSS rather than offering a Windows-level alternative.
Microsoft has announced that the new API will be available soon via a preview version of its Agility SDK.
It plans to offer a “sneak peek” of how DirectSR can be used during a developer session at the upcoming Game Developers Conference (GDC). The session, scheduled for 21 March, will include representatives from Microsoft, Nvidia, and AMD.