FreshRSS — Recent Questions - Game Development Stack Exchange

Relation between an object and its renderer

Recently I decided to start optimizing code in the game development field.

I found this Pac-Man project on GitHub to be a good starting point: https://github.com/LucaFeggi/PacMan_SDL/tree/main

I found a lot of things that deserve optimization. One of the things that most caught my attention is that many classes have their Draw (render) method inside them, such as the Pac, Ghost, and Fruit classes...

So I decided to make a Renderer class, and move to it the Draw method and the appropriate fields that the latter works on.

Let us take the Pac class and its PacRenderer as an example, to clarify things:

class Pac : public Entity{
    public:
        Pac();
        ~Pac();
        void UpdatePos(std::vector<unsigned char> &mover, unsigned char ActualMap[]);
        unsigned char FoodCollision(unsigned char ActualMap[]);
        bool IsEnergized();
        void ChangeEnergyStatus(bool NewEnergyStatus);
        void SetFacing(unsigned char mover);
        bool IsDeadAnimationEnded();
        void ModDeadAnimationStatement(bool NewDeadAnimationStatement);
        void UpdateCurrentLivingPacFrame();
        void ResetCurrentLivingFrame();
        void WallCollisionFrame();
        void Draw();
    private:
        LTexture LivingPac;
        LTexture DeathPac;
        SDL_Rect LivingPacSpriteClips[LivingPacFrames];
        SDL_Rect DeathPacSpriteClips[DeathPacFrames];
        unsigned char CurrLivingPacFrame;
        unsigned char CurrDeathPacFrame;
        bool EnergyStatus;
        bool DeadAnimationStatement;
};

So according to what I described above, the PacRenderer class will be:

class PacRenderer {
public:
        PacRenderer(std::shared_ptr<Pac> pac);
        void Draw();
private:
        std::shared_ptr<Pac> pac;
        LTexture LivingPac;
        LTexture DeathPac;
        SDL_Rect LivingPacSpriteClips[LivingPacFrames];
        SDL_Rect DeathPacSpriteClips[DeathPacFrames];
};

PacRenderer::PacRenderer(std::shared_ptr<Pac> pac)
{
    this->pac = pac;

    //loading textures here instead of in the Pac class
    LivingPac.loadFromFile("Textures/PacMan32.png");
    DeathPac.loadFromFile("Textures/GameOver32.png");
    InitFrames(LivingPacFrames, LivingPacSpriteClips);
    InitFrames(DeathPacFrames, DeathPacSpriteClips);
}

This is after moving Draw() and the four rendering-related fields out of the Pac class.

At the beginning, I thought this was a very good optimization and a clean separation between game logic and graphics rendering.

Until I read this question and its accepted answer: How should a renderer class relate to the class it renders?

I found that what the accepted answer describes as

you're just taking a baby step away from "objects render themselves" rather than moving all the way to an approach where something else renders the objects

applies exactly to my solution.

So I want to verify whether this description really applies to my solution.

Is my solution good? Did I actually separate game logic from graphics rendering?

If not, what is the best thing to do in this case?
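To make the comparison concrete, here is a minimal sketch (hypothetical names, no SDL) of the approach the linked answer advocates, where something else renders the object: the renderer owns all drawing resources and only reads the entity's state, instead of duplicating its fields.

```cpp
#include <cassert>
#include <string>
#include <utility>

// Render-facing snapshot of the entity's state. The entity exposes this
// read-only; it knows nothing about textures, clips, or drawing.
struct PacState {
    int x = 0, y = 0;
    unsigned char facing = 0;
    bool energized = false;
    unsigned char frame = 0;
};

class Pac {
public:
    void Move(int dx, int dy) { state_.x += dx; state_.y += dy; }
    const PacState& State() const { return state_; }
private:
    PacState state_;
};

// The renderer owns the drawing resources (in the real project it would
// load "Textures/PacMan32.png" etc.) and pulls state from the entity.
class PacRenderer {
public:
    explicit PacRenderer(std::string texturePath)
        : texturePath_(std::move(texturePath)) {}

    // Stand-in for the real draw call: reads the entity's state and
    // returns a description of what would be drawn.
    std::string Draw(const Pac& pac) {
        const PacState& s = pac.State();
        return texturePath_ + "@" + std::to_string(s.x) + "," + std::to_string(s.y);
    }
private:
    std::string texturePath_;
};
```

With this split the entity stores no textures or clip rects at all, so the renderer can be swapped out (or removed entirely, e.g. on a server) without touching game logic.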

How do I convert a voxel model to a low-poly mesh with texture mapping that accurately mimics the coloured voxels?

I've created a model in MagicaVoxel (A) and I want to use it in Godot. I imported the model into Blender, but then realized that it's using a very high polygon count for such a simple model. I found that using MagicaVoxel's OBJ export yields interesting results in that voxels of the same colour share polygons (B), but when many different coloured voxels are next to each other, it still creates a high polygon count.

[image: polygon counts for voxel meshes A, B and C]

Ideally, I'd like to have a model that is the lowest polygon count possible (C) with an accurate texture map to mimic the voxel colours. Is there an existing tool to achieve this? I really like how MagicaVoxel works and I'm not interested in using Blender to model or manually texture the mesh, but maybe there's a plugin for Blender? I'm open to suggestions.

As a side note, it's possible that I could just use the OBJ file as is, but I wonder about performance. Mesh B has 216 polygons, while mesh C uses 140. This is just one asset in a medieval adventure game and I'd like to have a very cluttered world. ;-)

Edit: For clarification, MagicaVoxel exports an OBJ file with accurate texture mapping, but isolates different colours of voxels to their own polygons, creating more polygons than are required. Mesh B is the result (I imagine this removes the need for any anisotropic filtering). Mesh C was also exported from MagicaVoxel, but with the colour information removed, and thus without the desired material information. I just wanted to avoid any confusion about what MagicaVoxel can do on its own. Maybe there's a solution within MagicaVoxel that I'm not aware of?


Update: I believe Blender can do what I need, but it's going to require a lot more research to get it right. I was able to bake a texture map by diffusing colour, with some success (some faces got the wrong colour; see the yellow near the bottom of the blade), but I couldn't quite figure out how to do it with vertex colour emissions to see if the result would be better. Then (as you can see in the sword handle) I need to set up a pixel-perfect UV and texture map prior to baking. I'm leaning away from doing it manually in Blender, to be honest. I may have to do this for a thousand or more models, and again every time a model is edited.

[image: Blender baking attempt]

I'm going to let this question sit for now in the hope that there's an easier solution. Maybe a script exists to do this? Maybe I should learn how to write my own? In the meantime, I can still proceed with the higher polygon counts in the OBJ exports from MagicaVoxel. The neat thing about the native MagicaVoxel OBJ exports is that the texture map is a simple 1px tall by 256px wide palette PNG file (every colour is a single square pixel) that all meshes can share. Maybe that offsets the performance hit from the higher polygon count? Anyway, I'm taking a break from Blender. ;-)
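For reference, the polygon reduction I'm after is essentially greedy meshing: merging same-colour cells into maximal rectangles, one face layer at a time. A minimal 2D sketch (hypothetical names, plain C++, one layer of one face direction):

```cpp
#include <cassert>
#include <vector>

// One merged rectangle of identical colour on a face layer.
struct Rect { int x, y, w, h; int colour; };

// Greedily merge a W x H grid of colour indices into maximal rectangles:
// grow each unvisited cell rightwards, then downwards while whole rows match.
std::vector<Rect> GreedyMerge(const std::vector<int>& cells, int W, int H) {
    std::vector<bool> used(cells.size(), false);
    std::vector<Rect> rects;
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            if (used[y * W + x]) continue;
            const int c = cells[y * W + x];
            int w = 1;  // grow right while the colour matches
            while (x + w < W && !used[y * W + x + w] && cells[y * W + x + w] == c)
                ++w;
            int h = 1;  // grow down while the entire row matches
            bool ok = true;
            while (y + h < H && ok) {
                for (int i = 0; i < w; ++i)
                    if (used[(y + h) * W + x + i] || cells[(y + h) * W + x + i] != c) {
                        ok = false;
                        break;
                    }
                if (ok) ++h;
            }
            for (int yy = 0; yy < h; ++yy)   // mark the rectangle as consumed
                for (int xx = 0; xx < w; ++xx)
                    used[(y + yy) * W + x + xx] = true;
            rects.push_back({x, y, w, h, c});
        }
    return rects;
}
```

Run per colour and this gives mesh-B-style output; run with colours ignored (treat all solid cells as equal) and it collapses toward mesh C, leaving colour to the palette texture lookup.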


Adjusting texture generation inside a quadtree

Well, I'm trying to adjust the position of generated textures using this approach.

I replicated this in Unity3D for a complete texture of 4096x4096:

[image: generated 4096x4096 texture]

The point is that I need to generate the same texture on a quadtree using different level of details / zoom per each node.

I have almost the code finished, but I cannot calibrate textures inside the grid/nodes:

[image: misaligned textures inside the nodes]

With grid:

[image: biome textures with grid overlay]

The point is that I'm unsure at which point of the code execution the problem appears; there are two main parts I have doubts about.

The first one is in the OnGUI() call; read the comments for more information.

private void OnGUI()
{
    // Skipped logic
    var totalTimeExecution = DrawBiomeTextureLogic();
    // Skipped logic
}

private long DrawBiomeTextureLogic()
{
    var octreeKey = world.worldSettings.TargetOctreeKey;
    var regions = world.worldState.ChunkData.Get_Regions();

    if (!regions.ContainsKey(octreeKey))
        return -1;

    var region = regions[octreeKey];

    // will draw the smaller ones first (Depth = 0, 2^0=1 * textureSize)
    // but we draw everything as 16x16 because we are doing a level of detail with textures
    var map = region.NodeMap.AsEnumerable().OrderByDescending(x => x.Value.Depth);

    foreach (var (key, octree) in map)
    {
        // fields from the octree node
        var position = octree.Position;
        var depth = octree.Depth;
        var mult = 1 << depth;

        // first intPosition, needs refactoring
        var size = world.worldSettings.ChunkSize * mult;
        var intPosition = new Vector2Int(position.x - size / 2, position.y - size / 2);

        DrawTexture(octree);
    }

    return totalTimeExecution;
}

private void DrawTexture(OctreeNode octree)
{
    // fields from the octree node
    // note: I say quadtree on the question, but because I'm skipping z axis values with a HashSet, 
    // I will create another question to create another algorithm for it
    var position = octree.Position;
    var key = octree.LocCode;
    var depth = octree.Depth;
    var mult = 1 << depth;

    var size = world.worldSettings.ChunkSize * mult;

    // use Y position because we need a bird's-eye view of the map
    // the problem could be here
    var intPosition = new Vector2Int(position.x - size / 2, position.y - size / 2);
    var targetPosition = new long2(intPosition.x, intPosition.y);

    // or maybe, but this is for adjusting inside scroll view
    var texRect = new Rect(intPosition.x, -intPosition.y - MAP_SIZE_HEIGHT, size, size);

    // pass targetPosition into second stage
    TextureSettings.InitializeTextureParams(targetPosition, depth, key);

    // Get the texture from the biomeTextureController
    var texture = biomeTextureController.GetTexture(TextureSettings, out var textureTask);

    if (texture != null)
    {
        Textures.TryAdd(new Vector2Int(intPosition.x, -intPosition.y), (texture, size));
    }

    var color = texture != null ? Color.green : Color.red;

    if (texture != null)
    {
        GUI.DrawTexture(texRect, texture);

        // not relevant
        UIUtils.DrawOutlinedRect(texRect, color, 1);
    }
}

The second stage mentioned in the code (line 59) is the job (Unity Jobs) that I use. I made it parallel, with bounds checking on the index, following the Cuberite document mentioned above; the call is explained here:

[BurstCompile(OptimizeFor = OptimizeFor.Performance)]
public struct BiomeTextureParallelJob : IJobParallelFor, IDisposable
{
    // colors of the texture
    public NativeArray<Color32> data;

    // used to get Biome color depending on its index (enumeration)
    [ReadOnly]
    public NativeParallelHashMap<short, Color> colorsCollection;

    public BiomeGenenerationSettings settings;
    public BiomeTextureJobSettings jobSettings;

    public long2 TargetPosition => jobSettings.TargetPosition;
    public int Depth => jobSettings.Depth;
    public int TextureSize => jobSettings.textureSize;

    public void Execute(int index)
    {
        OnStart();

        To2D(index, TextureSize, out var x, out var y);

        if (showPercentage)
            counter[threadIndex]++;

        // same octree fields calculated again
        var d = 1 << (Depth + 4);
        var p = d / TextureSize; // example: 64 / 16 = 4
        var s = TextureSize * (1 << Depth); // textureSize multiplied by its 2^n node size

        // maybe the problem start here
        // this is from the first stage
        var tx = TargetPosition.x;
        var ty = TargetPosition.y;

        // I tried some operations, but I'm unable to adjust it
        var xx = x * p; // - p / 2; // * (tx < 0 ? -1 : 1); // + p / 2;
        var yy = y * p; // - p / 2; // * (ty < 0 ? -1 : 1); // + p / 2;

        long wx = tx + xx - s / 2; // * (tx < 0 ? -1 : 1);
        long wy = ty + yy - s / 2; // * (ty < 0 ? -1 : 1);


        if (!IsInBounds(index, data.Length))
            return;

        data[index] =
            BiomeTextureUtils.CalcPixelColor(settings, colorsCollection, wx, wy, 1,
            false, AlphaChannel);
    }

    [BurstDiscard]
    public void Dispose()
    {
        try
        {
            if (data.IsCreated)
                data.Dispose();
        }
        catch { }
    }
}
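One thing I notice rereading the two stages: the first stage already subtracts size / 2 when building intPosition, and the job subtracts s / 2 again, so if position is the node centre, the half-size offset gets applied twice and every node lands shifted by half its own extent. A minimal sketch (plain C++, hypothetical names) of a single centre-based convention for mapping a texel to world space:

```cpp
#include <cassert>

struct World2 { long x, y; };

// Map texel (x, y) of a node's texture to world coordinates, assuming the
// node's `centre` is its world-space centre. The node at `depth` covers
// s = textureSize << depth world units; each texel covers p = 1 << depth.
World2 TexelToWorld(long centreX, long centreY, int depth, int textureSize,
                    int x, int y) {
    const long p = 1L << depth;                 // world units per texel
    const long s = (long)textureSize << depth;  // node's world-space extent
    // Subtract s / 2 exactly once, here, relative to the centre.
    return { centreX - s / 2 + x * p, centreY - s / 2 + y * p };
}
```

With this convention the job would receive the node centre unchanged from the first stage, and all corner math lives in one place.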

As you can see, the problem could be in either of two places. It could be something simple, as you can see in the UI, zoomed in:

[image: zoomed UI view]

I'd like to share a demo with the minimum code; I'll prepare one if needed.

Note: I'd like to add a spoiler to hide such big images. I'm sorry.


Software to stretch the edge of tiles for padding

I'm currently working on a pixel art game with a 2D tile map, and I am aware that to stop texture bleeding you need to pad out each tile. I also find I get the best result if I pad out each tile by 'stretching' its edge. For example:

[image: unpadded tile] becomes [image: padded tile]

However, to me, padding out each tile manually is a bit tedious and also makes the tilesheet harder to edit since you need to remember to re-pad every time. I'm wondering if there is any software out there which can take in a tilesheet and do all the padding for you?

I'd prefer to edit the tilesheet all in one place rather than edit tiles separately and pack them together. Or is there some way of storing each tile separately but editing them together as though they were in a single tilesheet, and then using a texture packer to pack them together and do everything you need to do for the padding?
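For reference, the edge-stretch step itself (often called extruding; texture packers such as TexturePacker expose it as an option) is easy to script. A minimal sketch with hypothetical names: every padding pixel takes the colour of the nearest tile pixel, i.e. a clamp-to-edge copy.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Extrude one tile of w x h pixels into a (w + 2*pad) x (h + 2*pad) tile
// whose border replicates the tile's edge pixels (clamp-to-edge).
std::vector<int> ExtrudeTile(const std::vector<int>& tile, int w, int h, int pad) {
    const int W = w + 2 * pad, H = h + 2 * pad;
    std::vector<int> out(W * H);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            // Clamp the padded coordinate back into the source tile.
            const int sx = std::clamp(x - pad, 0, w - 1);
            const int sy = std::clamp(y - pad, 0, h - 1);
            out[y * W + x] = tile[sy * w + sx];
        }
    return out;
}
```

Applied per tile of a sheet, this could be a small post-export step, so the editable tilesheet stays unpadded and the padded copy is regenerated automatically.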

How to apply Saturn's ring texture in Unreal Engine 4?

I'm working on a "solar system" model project. While trying to apply Saturn's ring texture, which is this one:

[image: Saturn's ring texture]

it ended up looking like this:

[image: the ring texture applied in UE4]

I'm new to UE4 and this area in general, so I have no idea how to fix this. Your help would be appreciated.


Implicitly ray traced cone texture mapping -- Trying to avoid atan

I am kinda looking for some ways out of this little rabbit hole.

I wanted to experiment with high fidelity per pixel cone drawing and shading. I have an application (radar) that really only needs cone 3D geometry and I figured it would be cool if I could render them implicitly and get sharp pixel perfect geometry by deriving the geometry in addition to the shading within the fragment shader.

This all works and I got quite far embedding the z-picking and occlusion logic such that now I finally am able to ignore parts of the cone that intersect my pixel rays behind the camera and to always show the nearest intersection in front of the camera (easier said than done!). Even computing the normal direction was a piece of cake and the lighting looks great.

Now I grapple with computing the texture coordinates, and I'm realizing I might be a bit screwed and forced to use atan2 for the radial angle coordinate on the cone.

Typically whenever we want to reach for atan2 in hot code or shader code, we're trying to do something further with the angle derived and can eliminate it by being clever with various vectors available to us from which various trig values can be derived efficiently. But here I need to directly use an angle for a texture lookup and it seems like I'm SOL and really actually need to use an atan function now in the shader due to the approach I took with this. So far this seems like a dealbreaker and I may actually need to go back to using a traditional cone made out of vertices and interpolated uv coordinates.

For background info, the way the shader is constructed is by substituting the implicit ray equation into an implicit cone equation; as this yields a quadratic equation in the ray's parameter variable, the coefficients of the quadratic equation are computed and substituted in and zero, one or two t-values (distance along ray) pop out.
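To make that setup concrete, here is a minimal sketch (hypothetical names, plain C++) of the substitution: with m = o + t*v - apex, the cone condition (m . axis)^2 = cos^2(theta) * (m . m) expands into a quadratic a*t^2 + b*t + c = 0 in the ray parameter t.

```cpp
#include <cassert>
#include <cmath>
#include <optional>
#include <utility>

struct Vec3 { double x, y, z; };
static double Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Intersect ray o + t*v with the infinite double cone of apex `apex`,
// unit axis `axis`, and half-angle with cosine `cosTheta`. Returns the
// two t roots (assumes a != 0, i.e. the ray is not parallel to the surface).
std::optional<std::pair<double, double>> RayCone(
        Vec3 o, Vec3 v, Vec3 apex, Vec3 axis, double cosTheta) {
    const Vec3 co = Sub(o, apex);
    const double k = cosTheta * cosTheta;
    const double a = Dot(v, axis) * Dot(v, axis) - k * Dot(v, v);
    const double b = 2.0 * (Dot(v, axis) * Dot(co, axis) - k * Dot(v, co));
    const double c = Dot(co, axis) * Dot(co, axis) - k * Dot(co, co);
    const double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return std::nullopt;  // ray misses the cone
    const double s = std::sqrt(disc);
    return std::make_pair((-b - s) / (2.0 * a), (-b + s) / (2.0 * a));
}
```

Zero, one, or two t values pop out depending on the discriminant, matching the description above; culling the behind-camera and mirror-cone hits is then a matter of filtering the roots.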

Is it possible to use a pre-existing texture buffer containing vertex data to initialise a vertex buffer for rendering in OpenGL v4.6?

I'm generating a heightmap in a compute shader in OpenGL v4.6 and storing it to a texture.

Let's say I actually store the full vertex data in that texture instead of just the height, which is a trivial change, and that I could easily also create an index buffer in a separate texture/SSBO at the same time.

Is there a way to use this pre-existing texture/SSBO data to create a vertex and index buffer directly if I made sure the memory layouts were correct?

It seems wasteful to pull the data back from GPU just to copy it to a new vertex array on CPU and then push back to GPU, when I could just get the CPU code to tell the GPU that this data is the vertex array instead and never have the data leave the GPU... But I have no idea how I'd tell OpenGL to map one to the other.

Development:

I've found info about copying buffer data from one arbitrary buffer type to another, so I've given that a go. It's not as efficient as simply calling the texture buffer a vertex buffer, but this only needs to happen once, so it's a good enough solution. However, I'm getting a black screen...

This is my VAO setup code:


    const size_t num_vertices = _map_terrain_texture_shape.x * _map_terrain_texture_shape.y;
    const size_t total_vertex_position_bytes = num_vertices * sizeof(glm::vec4);
    const size_t total_vertex_colour_bytes = num_vertices * sizeof(glm::vec4);
    const size_t total_vertex_bytes = total_vertex_position_bytes + total_vertex_colour_bytes;

    std::vector<uint32_t> indices = _make_indices(_map_terrain_texture_shape);
    const size_t total_index_bites = indices.size() * sizeof(uint32_t);
    glGenVertexArrays(1, &_vao);
    glGenBuffers(1, &_vbo);
    glGenBuffers(1, &_ebo);

    glBindVertexArray(_vao);

    glBindBuffer(GL_ARRAY_BUFFER, _vbo);
    glBufferData(GL_ARRAY_BUFFER, total_vertex_bytes, nullptr, GL_STATIC_DRAW);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, total_index_bites, indices.data(), GL_STATIC_DRAW);

    glEnableVertexAttribArray(VERTEX_POSITION_ATTRIB_INDEX);
    glEnableVertexAttribArray(VERTEX_COLOUR_ATTRIB_INDEX);

    // vertex draw positions
    glVertexAttribPointer(VERTEX_POSITION_ATTRIB_INDEX, glm::vec4::length(), GL_FLOAT, GL_FALSE, sizeof(glm::vec4), (void*)0);
    // vertex colours
    glVertexAttribPointer(VERTEX_COLOUR_ATTRIB_INDEX, glm::vec4::length(), GL_FLOAT, GL_FALSE, sizeof(glm::vec4), (void*)total_vertex_position_bytes);

    glDisableVertexAttribArray(VERTEX_POSITION_ATTRIB_INDEX);
    glDisableVertexAttribArray(VERTEX_COLOUR_ATTRIB_INDEX);

    glBindVertexArray(0);

And the code running the compute shader that populates the texture buffers (image2Ds) that I copy into vertex buffer looks like this:

    _map_terrain_mesh_shader->use();

    _main_state.terrain_generator->map_terrain_heightmap_texture->bind_for_active_shader(_map_terrain_mesh_shader->id, 0, "i_heightmap_texture");
    _main_state.terrain_generator->map_terrain_vertex_position_texture->bind_for_active_shader(_map_terrain_mesh_shader->id, 1, "o_vertex_position_texture");
    _main_state.terrain_generator->map_terrain_vertex_colour_texture->bind_for_active_shader(_map_terrain_mesh_shader->id, 2, "o_vertex_colour_texture");

    _map_terrain_mesh_shader->dispatch(glm::uvec3{ _map_terrain_texture_shape, 1});

    const size_t num_vertices = _map_terrain_texture_shape.x * _map_terrain_texture_shape.y;
    const size_t total_vertex_position_bytes = num_vertices * sizeof(glm::vec4);
    const size_t total_vertex_colour_bytes = num_vertices * sizeof(glm::vec4);

    const auto position_texture_id = _main_state.terrain_generator->map_terrain_vertex_position_texture->id;
    const auto colour_texture_id = _main_state.terrain_generator->map_terrain_vertex_colour_texture->id;

    glBindBuffer(GL_COPY_WRITE_BUFFER, _vbo);

    glBindBuffer(GL_COPY_READ_BUFFER, position_texture_id);
    glCopyBufferSubData(position_texture_id, _vbo,
                        0, 0,
                        total_vertex_position_bytes);

    glBindBuffer(GL_COPY_READ_BUFFER, colour_texture_id);
    glCopyBufferSubData(colour_texture_id, _vbo,
                        0, total_vertex_position_bytes,
                        total_vertex_colour_bytes);

    glBindBuffer(GL_COPY_READ_BUFFER, 0);
    glBindBuffer(GL_COPY_WRITE_BUFFER, 0);

I have checked that this compute shader produces the correct results by using these buffers in a raytracing renderer I already had set up. That renderer is now using this data instead of the original heightmap data.

I've gone for vec4 for each just to be sure I don't run into packing issues or whatever while I get it working, and I'm purposely not interleaving the position/colour data; I'm keeping it as a block of each.

Now, assuming my compute shader is doing its job correctly, can anyone tell me if I'm doing this right?
