
Particles not rendering over projectors

I am using projectors for shadows. When I use particles for the bike's speed boost (i.e., nitro), the particles get cut out by those shadows.

Here is a screenshot of it:

[screenshot omitted]

Here is my projector shader code:

Shader "Projector/Projector Multiply Black"
{
    Properties
    {
        _ShadowTex("Cookie", 2D) = "gray" { TexGen ObjectLinear }
    _ShadowStrength("Strength",float) = 1
    }

        Subshader
    {
        Tags{ "RenderType" = "Transparent"  "Queue" = "Transparent+100" }
        Pass
    {
        ZWrite Off

        //Fog { Mode Off }

        Blend DstColor Zero

        CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma fragmentoption ARB_fog_exp2
#pragma fragmentoption ARB_precision_hint_fastest
#include "UnityCG.cginc"


        struct v2f
    {
        float4 pos : SV_POSITION;
        float2 uv_Main     : TEXCOORD0;
    };

    sampler2D _ShadowTex;
    float4x4 unity_Projector;
    float _ShadowStrength;

    v2f vert(appdata_tan v)
    {
        v2f o;


        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);

        o.uv_Main = mul(unity_Projector, v.vertex).xy;


        return o;
    }

    half4 frag(v2f i) : COLOR
    {
        half4 tex = tex2D(_ShadowTex, i.uv_Main);
        half strength = (1 - tex.a*_ShadowStrength);
        tex = (strength,strength,strength,strength);
        return tex;
    }
        ENDCG

    }
    }
}

Here is my particle shader code:

// Simple additive particle shader.

Shader "Custom/Particle additive"
{
    Properties
    {
        _MainTexture ("Particle Texture (Alpha8)", 2D) = "white" {}
    }

    Category
    {
        // "IgnoreProjector"="True" asks Unity's Projector system to skip this
        // material; note the projector shader above draws at Transparent+100,
        // i.e. after this Transparent-queue pass.
        Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" }
        Blend SrcAlpha One
        Cull Off Lighting Off ZWrite Off Fog { Color (0,0,0,0) }

        BindChannels
        {
            Bind "Color", color
            Bind "Vertex", vertex
            Bind "TexCoord", texcoord
        }

        SubShader
        {
            Pass
            {
                SetTexture [_MainTexture]
                {
                    combine primary, texture * primary
                }
            }
        }
    }
}

Unity shader invalid subscript worldPos

I am trying to write a shader for Unity that replicates Splatoon's painting system, but I cannot get it to compile. It fails with invalid subscript 'worldPos' at line 41 (on d3d11). I'm unsure what this means, and my research came up short. I tried a few different things, but nothing worked. Switching the shader to unlit solved an earlier problem with the v2f struct not being recognized, but now there's this problem. Below is the code.

Shader "Unlit/TexturePaintingMechanicShader2"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            // make fog work
            #pragma multi_compile_fog

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                UNITY_FOG_COORDS(1)
                float4 vertex : SV_POSITION;
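                // frag() below reads i.worldPos, but no worldPos member is
                // declared here -- this is the subscript the "invalid
                // subscript 'worldPos'" error refers to; it would need e.g.
                // "float3 worldPos : TEXCOORD2;" declared in this struct.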
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;

            v2f vert(appdata v)
            {
                v2f o;
                o.worldPos = mul(unity_ObjectToWorld, v.vertex);
                o.uv = v.uv;
                float4 uv = float4(0, 0, 0, 1);
                uv.xy = (v.uv.xy * 2 - 1) * float2(1, _ProjectionParams.x);
                o.vertex = uv;
                return o;
            }

            float mask(float3 position, float3 center, float radius, float hardness)
            {
                float m = distance(center, position);
                return 1 - smoothstep(radius * hardness, radius, m);
            }

            fixed4 frag(v2f i) : SV_Target
            {
                // _PainterPosition, _Radius, _Hardness and _Strength are used
                // here but never declared in this shader.
                float m = mask(i.worldPos, _PainterPosition, _Radius, _Hardness);
                float edge = m * _Strength;
                return lerp(float4(0,0,0,0), float4(1,0,0,1), edge);
            }

            ENDCG
        }
    }
}

Is it possible to use a pre-existing texture buffer containing vertex data to initialise a vertex buffer for rendering in OpenGL v4.6?

I'm generating a heightmap in a compute shader in OpenGL v4.6 and storing it to a texture.

Let's say I store the full vertex data in that texture instead of just the height (a trivial change), and that I could also create an index buffer in a separate texture/SSBO at the same time.

Is there a way to use this pre-existing texture/SSBO data to create a vertex and index buffer directly if I made sure the memory layouts were correct?

It seems wasteful to pull the data back from the GPU just to copy it into a new vertex array on the CPU and push it back to the GPU, when I could instead tell the GPU that this data is the vertex array and never have it leave the GPU. But I have no idea how I'd tell OpenGL to map one to the other.
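
In core OpenGL, buffer objects are untyped memory, so one route is to have the compute shader write the vertex data into an SSBO rather than an image2D: the very same buffer object can then be bound as GL_ARRAY_BUFFER for drawing, with no copy and no CPU round trip. A minimal sketch with illustrative names (num_vertices, groups_x/groups_y):

    // One buffer: written by the compute shader as an SSBO, then used as the
    // vertex buffer. Assumes a VAO is bound for the draw (core profile).
    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, buf);
    glBufferData(GL_SHADER_STORAGE_BUFFER, num_vertices * 4 * sizeof(float),
                 nullptr, GL_DYNAMIC_COPY);

    // Compute pass: bind to the binding point declared in the shader.
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buf);
    glDispatchCompute(groups_x, groups_y, 1);
    // Make the writes visible to vertex fetching before drawing.
    glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT);

    // Draw pass: the same buffer object now serves as the vertex buffer.
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_POINTS, 0, (GLsizei)num_vertices);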

Development:

I've found info about copying buffer data from one arbitrary buffer type to another, so I've given that a go. It's not as efficient as simply treating the texture buffer as a vertex buffer, but this only needs to happen once, so it's a good enough solution. However, I'm getting a black screen...

This is my VAO setup code:


    const size_t num_vertices = _map_terrain_texture_shape.x * _map_terrain_texture_shape.y;
    const size_t total_vertex_position_bytes = num_vertices * sizeof(glm::vec4);
    const size_t total_vertex_colour_bytes = num_vertices * sizeof(glm::vec4);
    const size_t total_vertex_bytes = total_vertex_position_bytes + total_vertex_colour_bytes;

    std::vector<uint32_t> indices = _make_indices(_map_terrain_texture_shape);
    const size_t total_index_bytes = indices.size() * sizeof(uint32_t);
    glGenVertexArrays(1, &_vao);
    glGenBuffers(1, &_vbo);
    glGenBuffers(1, &_ebo);

    glBindVertexArray(_vao);

    glBindBuffer(GL_ARRAY_BUFFER, _vbo);
    glBufferData(GL_ARRAY_BUFFER, total_vertex_bytes, nullptr, GL_STATIC_DRAW);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, total_index_bytes, indices.data(), GL_STATIC_DRAW);

    glEnableVertexAttribArray(VERTEX_POSITION_ATTRIB_INDEX);
    glEnableVertexAttribArray(VERTEX_COLOUR_ATTRIB_INDEX);

    // vertex draw positions
    glVertexAttribPointer(VERTEX_POSITION_ATTRIB_INDEX, glm::vec4::length(), GL_FLOAT, GL_FALSE, sizeof(glm::vec4), (void*)0);
    // vertex colours
    glVertexAttribPointer(VERTEX_COLOUR_ATTRIB_INDEX, glm::vec4::length(), GL_FLOAT, GL_FALSE, sizeof(glm::vec4), (void*)total_vertex_position_bytes);

    // Note: the VAO is still bound here, so these disables become part of the
    // VAO's recorded state and the attributes stay disabled when drawing.
    glDisableVertexAttribArray(VERTEX_POSITION_ATTRIB_INDEX);
    glDisableVertexAttribArray(VERTEX_COLOUR_ATTRIB_INDEX);

    glBindVertexArray(0);

And the code running the compute shader that populates the texture buffers (image2Ds) that I copy into the vertex buffer looks like this:

    _map_terrain_mesh_shader->use();

    _main_state.terrain_generator->map_terrain_heightmap_texture->bind_for_active_shader(_map_terrain_mesh_shader->id, 0, "i_heightmap_texture");
    _main_state.terrain_generator->map_terrain_vertex_position_texture->bind_for_active_shader(_map_terrain_mesh_shader->id, 1, "o_vertex_position_texture");
    _main_state.terrain_generator->map_terrain_vertex_colour_texture->bind_for_active_shader(_map_terrain_mesh_shader->id, 2, "o_vertex_colour_texture");

    _map_terrain_mesh_shader->dispatch(glm::uvec3{ _map_terrain_texture_shape, 1});

    const size_t num_vertices = _map_terrain_texture_shape.x * _map_terrain_texture_shape.y;
    const size_t total_vertex_position_bytes = num_vertices * sizeof(glm::vec4);
    const size_t total_vertex_colour_bytes = num_vertices * sizeof(glm::vec4);

    const auto position_texture_id = _main_state.terrain_generator->map_terrain_vertex_position_texture->id;
    const auto colour_texture_id = _main_state.terrain_generator->map_terrain_vertex_colour_texture->id;

    glBindBuffer(GL_COPY_WRITE_BUFFER, _vbo);

    glBindBuffer(GL_COPY_READ_BUFFER, position_texture_id);
    glCopyBufferSubData(position_texture_id, _vbo,
                        0, 0,
                        total_vertex_position_bytes);

    glBindBuffer(GL_COPY_READ_BUFFER, colour_texture_id);
    glCopyBufferSubData(colour_texture_id, _vbo,
                        0, total_vertex_position_bytes,
                        total_vertex_colour_bytes);

    glBindBuffer(GL_COPY_READ_BUFFER, 0);
    glBindBuffer(GL_COPY_WRITE_BUFFER, 0);
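
As an aside on the copy path: glCopyBufferSubData takes buffer binding targets (GLenum), not object IDs, as its first two parameters, and the read source must itself be a buffer object; an image2D texture is not one, so a texture ID cannot be bound to GL_COPY_READ_BUFFER. A minimal sketch, assuming the compute results lived in two hypothetical SSBOs position_ssbo and colour_ssbo:

    glBindBuffer(GL_COPY_WRITE_BUFFER, _vbo);

    // Positions go into the first block of the VBO.
    glBindBuffer(GL_COPY_READ_BUFFER, position_ssbo);
    glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER,
                        0, 0, total_vertex_position_bytes);

    // Colours go into the second block, right after the positions.
    glBindBuffer(GL_COPY_READ_BUFFER, colour_ssbo);
    glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER,
                        0, total_vertex_position_bytes, total_vertex_colour_bytes);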

I have checked that the compute shader produces the correct results by using these buffers in a raytracing renderer I already had set up, which now uses this data instead of the original heightmap data.

I've gone for vec4 for each just to be sure I don't run into packing issues while I get this working, and I'm purposely not interleaving the position/colour data; I'm keeping a contiguous block of each.

Now, assuming my compute shader is doing its job correctly, can anyone tell me if I'm doing this right?

Custom matrix structure with OpenGL shaders

I have a MAT4 structure.

struct MAT4 {
    MAT4() {
        int c = 0;
        for (int x = 0; x < 4; x++) {
            for (int y = 0; y < 4; y++) {
                this->matrix[x][y] = 0.0;
                this->pointer[c] = this->matrix[x][y];
                c++;
            }
        }
    }

    double matrix[4][4];
    double pointer[16]; // for opengl
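    // Note: Pointer() flattens matrix[x][y] in x-major order, so the
    // translation written by Translate() into matrix[3][0..2] ends up at
    // pointer[12..14], the slots OpenGL expects for a column-major upload
    // with transpose = GL_FALSE.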

    void LoadIdentity() {
        this->matrix[0][0] = 1.0;
        this->matrix[1][1] = 1.0;
        this->matrix[2][2] = 1.0;
        this->matrix[3][3] = 1.0;
    }

    void RotateX(double x, bool rads = false) {
        if (rads) x *= drx::rad;
        this->matrix[1][1] = cos(x);
        this->matrix[2][1] = -sin(x);
        this->matrix[2][2] = cos(x);
        this->matrix[1][2] = sin(x);
    }
    void RotateY(double y, bool rads = false) {
        if (rads) y *= drx::rad;
        this->matrix[0][0] = cos(y);
        this->matrix[2][0] = sin(y);
        this->matrix[2][2] = cos(y);
        this->matrix[0][2] = -sin(y);
    }
    void RotateZ(double z, bool rads = false) {
        if (rads) z *= drx::rad;
        this->matrix[0][0] = cos(z);
        this->matrix[1][0] = -sin(z);
        this->matrix[1][1] = cos(z);
        this->matrix[0][1] = sin(z);
    }

    void Translate(double x, double y, double z) {
        this->matrix[3][0] = x;
        this->matrix[3][1] = y;
        this->matrix[3][2] = z;
    }

    void Scale(double x, double y, double z) {
        this->matrix[0][0] = x;
        this->matrix[1][1] = y;
        this->matrix[2][2] = z;
    }

    double* Pointer() {
        int c = 0;
        for (int x = 0; x < 4; x++) {
            for (int y = 0; y < 4; y++) {
                this->pointer[c] = this->matrix[x][y];
                c++;
            }
        }

        return this->pointer;
    }

    void Dump() {
        for (int x = 0; x < 4; x++) {
            for (int y = 0; y < 4; y++) {
                std::cout << "\n [" << x << ", " << y << "]: " << this->matrix[x][y];
            }
        }
    }
};

I'm then trying to pass this to OpenGL:

drx::util::MAT4 trans;
trans.LoadIdentity();
trans.RotateY(45.0, true);
trans.Dump(); // outputs values as expected
glUseProgram(this->P);
glUniformMatrix4dv(glGetUniformLocation(this->P, "transform"), 1, GL_FALSE, trans.Pointer());
glUseProgram(0);

My shader looks like:

#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 inColor;

out vec3 ourColor;

uniform mat4 transform;

void main()
{
    gl_Position = transform * vec4(aPos, 1.0);
    ourColor = inColor;
} 

If I take the transform out of the shader, my triangle draws fine. But if I use the transform, my triangle disappears. Is it offscreen, or what else could be happening?

I am trying to follow this tutorial on YouTube.

Update: glGetError() gives 1282

std::cout << "\n " << glGetError(); // 0
int loc = glGetUniformLocation(this->P, "transform");
std::cout << "\n " << glGetError(); // 0
glUniformMatrix4dv(loc, 1, GL_FALSE, trans.Pointer());
std::cout << "\n " << glGetError(); // 1282

Update 2: Tried with glm, same result, no drawing.

Update 3: location for uniform variable returns -1

int loc = glGetUniformLocation(this->P, "transform"); // -1

/* defs */
extern PFNGLGETUNIFORMLOCATIONPROC glGetUniformLocation;
glGetUniformLocation = (PFNGLGETUNIFORMLOCATIONPROC)wglGetProcAddress("glGetUniformLocation");  
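
For context on these updates: error 1282 is GL_INVALID_OPERATION. glUniformMatrix4dv sets double-precision (dmat4) uniforms and raises that error when the target uniform is a float mat4 (as it is in a #version 330 shader); calling any glUniform* with no program bound raises it as well. A location of -1 means "transform" was not found among the program's active uniforms, and glUniform* calls with location -1 are silently ignored. A minimal single-precision sketch, reusing names from the snippets above:

    // Convert the double matrix to float and upload with the ...4fv variant.
    double* src = trans.Pointer();
    float m[16];
    for (int i = 0; i < 16; i++)
        m[i] = (float)src[i];

    glUseProgram(this->P);              // program must be bound when setting uniforms
    int loc = glGetUniformLocation(this->P, "transform");
    if (loc != -1)                      // -1: not an active uniform of this->P
        glUniformMatrix4fv(loc, 1, GL_FALSE, m);
    glUseProgram(0);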

When compiling shaders in OpenGL, I get random error messages

I am trying to follow LearnOpenGL while coding in the Zig language, and something very odd is that my shader compilation sometimes fails even though I changed nothing between runs of the app. I also don't understand the errors. By the way, I use mach-glfw and zigglgen.

Some errors I get:

error: [opengl] Failed to compile shader: assets/shaders/shader.frag
  0(8) : error C0000: syntax error, unexpected '=' at token "="

error: [opengl] Failed to link shader program:
  Fragment info
-------------
0(8) : error C0000: syntax error, unexpected '=' at token "="
(0) : error C2003: incompatible options for link
error: [opengl] Failed to compile shader: assets/shaders/shader.frag
  0(9) : error C0000: syntax error, unexpected '(', expecting ';' at token "("

error: [opengl] Failed to link shader program:
  Fragment info
-------------
0(9) : error C0000: syntax error, unexpected '(', expecting ';' at token "("
(0) : error C2003: incompatible options for link

error: [opengl] Failed to compile shaders/shader.vert:
0(6) : error C0000: syntax error, unexpected $undefined at token "<undefined>"

Here is the code:

// Vertex shader
#version 330 core
layout (location = 0) in vec3 aPos;

out vec4 vertexColor;

void main() {
    gl_Position = vec4(aPos.xyz, 1.0);
    vertexColor = vec4(0.5, 0.0, 0.0, 1.0);
}
// Fragment shader
#version 330 core
out vec4 FragColor;
  
in vec4 vertexColor;

void main() {
    FragColor = vertexColor;
}
// Shortened main code
const std = @import("std");
const builtin = @import("builtin");
const glfw = @import("glfw");
const gl = @import("gl");
const App = @import("App.zig");

var gl_procs: gl.ProcTable = undefined;

fn glfwErrorCallback(err: glfw.ErrorCode, desc: [:0]const u8) void {
    ...
}

fn glfwFramebufferSizeCallback(_: glfw.Window, w: u32, h: u32) void {
    gl.Viewport(0, 0, @intCast(w), @intCast(h));
}

pub fn main() !void {
    // GLFW initialization
    if (!glfw.init(.{})) {
        ...
    }
    defer glfw.terminate();

    // Window creation
    const window = glfw.Window.create(1280, 720, "example opengl app", null, null, .{
        ...
    }) orelse {
        ...
    };
    defer window.destroy();
    glfw.makeContextCurrent(window);

    // OpenGL preparation
    if (!gl_procs.init(glfw.getProcAddress)) {
        ...
    }
    gl.makeProcTableCurrent(&gl_procs);
    window.setFramebufferSizeCallback(glfwFramebufferSizeCallback);

    // App startup
    var app = App{
        .window = window,
    };
    app.run() catch |err| {
        ...
    };
}

// shortened App code
...

window: glfw.Window,
vertices: [12]f32 = [_]f32{
    0.5,  0.5,  0.0,
    0.5,  -0.5, 0.0,
    -0.5, -0.5, 0.0,
    -0.5, 0.5,  0.0,
},
indices: [6]gl.uint = [_]gl.uint{
    0, 1, 3,
    1, 2, 3,
},

fn createCompiledShader(file: []const u8, stype: Shader.ShaderType) !Shader {
    const shader = try Shader.fromFile(file, stype, std.heap.raw_c_allocator);
    if (shader.compile()) |msg| {
        std.log.err("[opengl] Failed to compile shader: {s}\n  {s}", .{ file, msg });
    }
    return shader;
}

pub fn run(this: App) !void {
    // == STARTUP

    // Create vertex array object
    ...

    // Create vertex buffer object
    ...

    // Create element buffer object
    ...

    // Vertex attributes
    ...

    // Create shaders
    const vertex_shader = try createCompiledShader("assets/shaders/shader.vert", .Vertex);
    const fragment_shader = try createCompiledShader("assets/shaders/shader.frag", .Fragment);

    // Create shader program
    const shader_program = ShaderProgram.init(&.{
        vertex_shader,
        fragment_shader,
    });
    if (shader_program.link()) |msg| {
        std.log.err("[opengl] Failed to link shader program:\n  {s}", .{msg});
    }
    // Activate program and delete shaders
    shader_program.use();
    vertex_shader.delete();
    fragment_shader.delete();

    // == RENDER LOOP

    while (!this.window.shouldClose()) {
        gl.ClearColor(0.5, 0.3, 0.1, 1.0);
        gl.Clear(gl.COLOR_BUFFER_BIT);

        shader_program.use();
        gl.BindVertexArray(vao);
        gl.DrawElements(gl.TRIANGLES, 6, gl.UNSIGNED_INT, 0);

        this.window.swapBuffers();
        glfw.pollEvents();
    }
}
// shortened Shader class
...

pub const ShaderType = enum {
    Vertex,
    Fragment,
};

/// The source code of the shader.
source: []const u8,
gl_object: gl.uint,

fn createOpenglShader(src: []const u8, shader_type: ShaderType) gl.uint {
    const stype: gl.uint = switch (shader_type) {
        .Vertex => gl.VERTEX_SHADER,
        .Fragment => gl.FRAGMENT_SHADER,
    };
    const object: gl.uint = gl.CreateShader(stype);
    gl.ShaderSource(object, 1, @ptrCast(&src.ptr), null);
    return object;
}

/// Creates a `Shader` object from the source string.
pub fn fromString(src: []const u8, shader_type: ShaderType) Shader {
    ...
}

/// Creates a `Shader` object from the file contents of the given `path`.
/// The file path has to be relative to the folder where the executable resides.
/// If you want to use a file outside of that folder, open the file yourself and pass its contents to `Shader.fromString`.
/// For some reason, you can't pass a `GeneralPurposeAllocator` since the program segfaults if you do.
pub fn fromFile(path: []const u8, shader_type: ShaderType, alloc: std.mem.Allocator) !Shader {
    var arena = std.heap.ArenaAllocator.init(alloc);
    defer arena.deinit();
    const allocator = arena.allocator();

    var root_dir = try std.fs.openDirAbsolute(try std.fs.selfExeDirPathAlloc(allocator), .{});
    defer root_dir.close();

    var file = try root_dir.openFile(path, .{});
    defer file.close();

    const buf = try allocator.alloc(u8, try file.getEndPos());
    _ = try file.readAll(buf);

    return .{
        .source = buf,
        .gl_object = createOpenglShader(buf, shader_type),
    };
}

/// Compiles the shader. If compilation succeeded, the return value will be null.
/// Otherwise, the return value will be the error message.
pub fn compile(self: Shader) ?[256]u8 {
    gl.CompileShader(self.gl_object);
    var success: gl.int = undefined;
    gl.GetShaderiv(self.gl_object, gl.COMPILE_STATUS, &success);
    if (success == 0) {
        var info_log = std.mem.zeroes([256]u8);
        gl.GetShaderInfoLog(self.gl_object, 256, null, &info_log);
        return info_log;
    }
    return null;
}

pub fn delete(self: Shader) void {
    ...
}
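
A detail worth flagging, and a plausible cause of errors that differ from run to run: createOpenglShader calls gl.ShaderSource with null as the final lengths argument, which tells OpenGL each string is null-terminated, but the buffer fromFile allocates is exactly file-sized with no terminator, so the driver reads past the end into whatever memory happens to follow. Passing the source length explicitly instead of null avoids this. A minimal C++ sketch of the same call against the raw API, with illustrative names source_bytes/source_size:

    // Passing explicit lengths means the source need not be null-terminated,
    // so an exact-size file buffer can be handed to the driver safely.
    const char* src_ptr = source_bytes;     // raw buffer, no terminator
    GLint src_len = (GLint)source_size;     // number of bytes actually read
    glShaderSource(shader, 1, &src_ptr, &src_len);
    glCompileShader(shader);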

How do you handle shaders/graphics while remaining cross-platform?

I'm building a C++ based game engine, and I have my ECS complete as well as some basic components for things like graphics and audio. However, I'm currently using a custom interface on top of SFML with GLSL shaders and OpenGL graphics. I'd like to switch to a graphics solution where I can switch between OpenGL, Vulkan, Direct3D, and Metal without rewriting large portions of my code. The graphics API itself isn't a problem, since I can easily build an interface on top of it and reimplement it for each desired platform. My issue, however, is with the shaders.

I'm currently writing my test shaders in GLSL targeting OpenGL. I know I can use the SPIR-V translator to generate HLSL/MSL/Vulkan-style GLSL from my OpenGL source, but I'm not sure how that will work once I start having to set uniforms, handle shader buffers, and the like.

The big solution I've heard of is generating shaders at runtime, which is what Godot does. However, my engine is very performance-oriented, so I'd like to precompile all my shaders if possible. I've also seen that Unity uses the HLSL2GLSL translator, and that SPIRV-Cross is very common. However, I'm worried about how these will interact with setting uniforms, and I'm concerned about their impact on performance.
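
On the uniforms worry specifically: SPIRV-Cross can run offline at build time and exposes reflection, so set/binding assignments can be harvested once into engine metadata instead of being resolved at runtime. A rough C++ sketch, assuming spirv_words holds a compiled SPIR-V binary and the packaged header path is spirv_cross/spirv_glsl.hpp:

    #include <spirv_cross/spirv_glsl.hpp>

    #include <cstdint>
    #include <string>
    #include <vector>

    // Reflect a SPIR-V module, record each uniform buffer's set/binding,
    // then emit backend-specific source (GLSL here; CompilerHLSL and
    // CompilerMSL work the same way).
    std::string cross_compile(std::vector<uint32_t> spirv_words) {
        spirv_cross::CompilerGLSL compiler(std::move(spirv_words));

        spirv_cross::ShaderResources res = compiler.get_shader_resources();
        for (const auto& ub : res.uniform_buffers) {
            uint32_t set = compiler.get_decoration(ub.id, spv::DecorationDescriptorSet);
            uint32_t binding = compiler.get_decoration(ub.id, spv::DecorationBinding);
            // Persist (ub.name, set, binding) so the engine can set uniforms
            // by name on every backend.
            (void)set; (void)binding;
        }

        return compiler.compile();
    }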

GDC 2024: We reveal incredible Work Graphs perf, AMD FSR 3.1, GI with Brixelizer, and so much more

22 March 2024, 17:00


Learn about our GDC 2024 activities, including AMD FSR 3.1, AMD FidelityFX Brixelizer, work graphs, mesh shaders, tools, CPU, and more.


Procedural grass rendering – Mesh shaders on AMD RDNA™ graphics cards

20 March 2024, 18:00


The fourth post in our mesh shaders series takes a look at the specific example of rendering detailed vegetation.


GDC 2024: Work graphs and draw calls – a match made in heaven!

18 March 2024, 22:31


Introducing "mesh nodes", which make draw calls an integral part of the work graph, providing a higher perf alternative to ExecuteIndirect dispatches.


GDC 2024: Work graphs, mesh shaders, FidelityFX™, dev tools, CPU optimization, and more.

By: GPUOpen
12 March 2024, 15:00


Our GDC 2024 presentations this year include work graphs, mesh shaders, AMD FSR 3, GI with AMD FidelityFX Brixelizer, AMD Ryzen optimization, RGD, RDTS, and GPU Reshape!


Introducing GPU Reshape – shader instrumentation for everyone

By: GPUOpen
18 January 2024, 16:58


GPU Reshape brings powerful features typical of CPU tooling to the GPU, providing validation of dynamic behaviour. Read on for all of the details.



Optimization and best practices – Mesh shaders on RDNA™ graphics cards

16 January 2024, 11:29


The second post in this series on mesh shaders covers best practices for writing mesh and amplification shaders, as well as how to use the AMD Radeon™ Developer Tool Suite to profile and optimize mesh shaders.

