
DirectX/SlimDX Alpha Blending

I'm hoping someone might be able to offer some advice for an issue I am having.

I am working on a bit of a project, mainly for learning, which loads and renders graphics from an old MMORPG I used to play a long time ago called Helbreath.

Loading and rendering the data/graphics is fine and I have managed that without issue… until it came to the spells/effects, which need to be blended. The original game's source code was leaked, which is how I know how to read the files. It originally used DirectDraw, and I think they implemented their own blending system by reading the pixel data from the render surface and performing a bit of math on it, but I might be wrong; I'm not super familiar with C++ or DirectDraw.
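
For reference, a hand-rolled DirectDraw-era blend of the kind described above would presumably be a saturated add over the locked surface pixels. A minimal sketch of that idea in C++ (the Pixel struct and function name are mine, not from the leaked source, and the real surface layout may differ):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical 24-bit BGR pixel; the actual DirectDraw surface format may differ.
struct Pixel { uint8_t b, g, r; };

// Saturated additive blend: out = min(dst + src, 255) per channel.
// A pure black source pixel (0,0,0) leaves the destination untouched,
// which is why the sprites can get away with no alpha channel.
inline Pixel AdditiveBlend(Pixel src, Pixel dst)
{
    return Pixel{
        static_cast<uint8_t>(std::min(dst.b + src.b, 255)),
        static_cast<uint8_t>(std::min(dst.g + src.g, 255)),
        static_cast<uint8_t>(std::min(dst.r + src.r, 255)),
    };
}
```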

The original image files are 24-bit bitmaps with no alpha channel and a pure black background; this is a sample sprite sheet pulled from the game's original libraries.


This is a sample of how I am currently getting the effects/spells to blend, but I think they are too weak, and I was wondering if there is a way to make them more vibrant and less washed out.


I am currently using these settings to perform the blending; I have spent a while mixing and matching settings to try to get the result I want, but with no luck.


This is my first attempt at using SlimDX/DirectX, so any advice would be appreciated.

For comparison, here are those same effects over a black background, which is roughly the vibrancy I was hoping for.


I guess I am after the effect where the pure black pixels are fully transparent, but for the colored parts of the sprite I was hoping to apply a weighting, say taking more from the source and less from the destination, making the result (in my mind) darker/more vibrant/more saturated.
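
For reference, the weighting described above is essentially additive blending, the usual choice for effect sprites authored on a pure black background: black pixels add nothing (so they act fully transparent), while colored pixels brighten what is underneath instead of being averaged toward it, which is what produces the washed-out look with ordinary alpha blending. A sketch of two blend setups worth trying, assuming the Direct3D 9 path (SlimDX exposes the same states through Device.SetRenderState with the RenderState and Blend enums):

```cpp
#include <d3d9.h>

// Additive blending: final = src * 1 + dst * 1, saturating toward white.
// In SlimDX: device.SetRenderState(RenderState.SourceBlend, Blend.One); etc.
void SetAdditiveBlend(IDirect3DDevice9* device)
{
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetRenderState(D3DRS_BLENDOP,   D3DBLENDOP_ADD);
    device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
}

// "Screen" variant: final = src + dst * (1 - src).
// Less prone to blowing out to pure white where bright effects overlap.
void SetScreenBlend(IDirect3DDevice9* device)
{
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetRenderState(D3DRS_BLENDOP,   D3DBLENDOP_ADD);
    device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCCOLOR);
}
```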

Of course, I might just be completely misunderstanding how this works!

Thanks in advance for any advice or tips!

Incorrect Screen Space Reflection help

I'm posting a question because I ran into a problem while implementing screen space reflection. My approach: I sample the position and normal maps saved during deferred rendering, transform them into view space, use the reflect function to obtain the reflection vector, and then ray march by adding that vector to the current pixel's view position little by little; at each step I compare against the stored depth value to find the texel whose color to extract. However, the reflections do not appear properly: colors show up in strange places, or are drawn overlapping multiple times. I have tried the normals in both world space and view space; neither comes out well, though world space looks more convincing. Even when I change all the numbers, I don't get a good result. What's the problem?
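
For reference, here is a minimal CPU-side sketch of the view-space ray march described above, written in C++ with glm (the same logic would live in the pixel shader; SampleViewPos and the tuning parameters are placeholders, not the asker's code). One classic cause of colors appearing in strange places is mixing spaces: if the G-buffer stores world-space normals, they must be rotated into view space before calling reflect, and the position, normal, and march must all stay in the same space.

```cpp
#include <glm/glm.hpp>
#include <optional>

// Placeholder: return the stored view-space position of the G-buffer texel
// that point p projects to. In a shader this is a projection to screen UVs
// followed by a texture fetch.
glm::vec3 SampleViewPos(const glm::vec3& p);

// March a reflection ray in view space; return the hit position, if any.
// Assumes a left-handed view space with +Z pointing away from the camera;
// flip the depth comparison for a right-handed setup.
std::optional<glm::vec3> TraceReflection(
    glm::vec3 viewPos,        // view-space position of the current pixel
    glm::vec3 viewNormal,     // view-space normal (rotate world normals first!)
    int   maxSteps  = 64,
    float stepSize  = 0.1f,
    float thickness = 0.05f)  // depth tolerance for accepting a hit
{
    // In view space the camera is at the origin, so the incident direction
    // is simply the normalized pixel position.
    glm::vec3 incident = glm::normalize(viewPos);
    glm::vec3 rayDir   = glm::normalize(glm::reflect(incident, glm::normalize(viewNormal)));

    glm::vec3 p = viewPos;
    for (int i = 0; i < maxSteps; ++i)
    {
        p += rayDir * stepSize;
        glm::vec3 scene = SampleViewPos(p);

        // Hit when the marched point has just passed behind the stored surface.
        float depthDiff = p.z - scene.z;
        if (depthDiff > 0.0f && depthDiff < thickness)
            return scene;
    }
    return std::nullopt;  // no hit within the march budget
}
```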

Screenshots of the incorrect reflections

Screenshot 1: normals in view space

Screenshot 2: normals in world space

Another screenshot: the best reflection result so far

How do you handle shaders/graphics while remaining cross-platform?

I'm building a C++ based game engine, and I have my ECS complete as well as some basic components for things like graphics & audio. However, I'm currently using a custom interface on top of SFML with GLSL shaders and OpenGL graphics. I'd like to switch to a graphics solution where I can switch between OpenGL, Vulkan, Direct3D, and Metal without rewriting large portions of my code. The graphics API itself isn't a problem, since I can easily build an interface on top of it and reimplement it for each desired platform. My issue, however, is with the shaders.

I'm currently writing my test shaders in GLSL, targeting OpenGL. I know I can use a SPIR-V translator to generate HLSL/MSL/Vulkan-style GLSL from my GLSL source, but I'm not sure how that will work once I start having to set uniforms, handle shader buffers, and the like.

The big solution I've heard of is generating shaders at runtime, which is what Godot does. However, my engine is very performance-oriented, so I'd like to precompile all my shaders if possible. I've also seen that Unity uses the HLSL2GLSL translator, and that SPIRV-Cross is very common. However, I'm worried about how these will interact with setting uniforms and the like, and I'm concerned about their impact on performance.
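
For what it's worth, SPIRV-Cross pairs translation with a reflection API, which is what makes the uniform problem tractable without runtime generation: compile GLSL to SPIR-V offline, then (still offline) emit HLSL/MSL plus a table of resource names, sets, and bindings that the engine's uniform-setting interface can key on at runtime. A minimal sketch against the spirv_cross C++ API; the function name and the printed "table" are placeholders:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>
#include <spirv_cross/spirv_hlsl.hpp>

// Offline tool step: translate a SPIR-V module to HLSL and dump the
// uniform-buffer bindings the engine will need at runtime.
void TranslateAndReflect(std::vector<uint32_t> spirvWords)
{
    spirv_cross::CompilerHLSL compiler(std::move(spirvWords));

    // Reflection: enumerate resources before compiling.
    spirv_cross::ShaderResources res = compiler.get_shader_resources();
    for (const auto& ubo : res.uniform_buffers)
    {
        uint32_t set     = compiler.get_decoration(ubo.id, spv::DecorationDescriptorSet);
        uint32_t binding = compiler.get_decoration(ubo.id, spv::DecorationBinding);
        // Store (name, set, binding) in whatever lookup structure the
        // engine's cross-API uniform interface keys on.
        std::printf("ubo %s: set=%u binding=%u\n", ubo.name.c_str(), set, binding);
    }

    spirv_cross::CompilerHLSL::Options opts;
    opts.shader_model = 50;             // target Shader Model 5.0
    compiler.set_hlsl_options(opts);

    std::string hlsl = compiler.compile();  // translated HLSL source
    std::fputs(hlsl.c_str(), stdout);
}
```

Because all of this runs at build time, the runtime ships only the translated source (or precompiled bytecode) plus the binding table, so there is no per-frame translation cost.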

Optimization and best practices – Mesh shaders on RDNA™ graphics cards


The second post in this series on mesh shaders covers best practices for writing mesh and amplification shaders, as well as how to use the AMD Radeon™ Developer Tool Suite to profile and optimize mesh shaders.

The post Optimization and best practices – Mesh shaders on RDNA™ graphics cards appeared first on AMD GPUOpen.
