
Use Stage's draw() method to invoke Actor's draw method in Libgdx

Question: In the following snippet, I wish to draw different shapes such as rectangles, circles, triangles, etc. I will be creating a separate class file for each such object, as I did for the Rectangle class here.

I've been trying to invoke the draw method of Rectangle object from Stage object, but I am unable to do so. If I make a plain call to the draw method of the Rectangle object, I can draw the object.

Can someone suggest how this can be done? I also had several other questions, which I've tried to put in the comments. Please have a look at them and kindly help me figure out the underlying concept.

Disclaimer: I am new to game programming and started with Libgdx.

Following is the Scan class:

package com.mygdx.scan.game;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.glutils.ShapeRenderer;
import com.badlogic.gdx.graphics.glutils.ShapeRenderer.ShapeType;
import com.badlogic.gdx.scenes.scene2d.Stage;

public class Scan extends ApplicationAdapter {
private Stage stage;
public static ShapeRenderer shapeRenderer;

@Override
public void create () {
    stage = new Stage();
    shapeRenderer = new ShapeRenderer();
    Rectangle rect1 = new Rectangle(50, 50, 100, 100);
    stage.addActor(rect1);

}

@Override
public void render () {
    Gdx.gl.glClearColor(1, 1, 1, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    shapeRenderer.setProjectionMatrix(stage.getCamera().combined);
    shapeRenderer.begin(ShapeType.Filled);
    shapeRenderer.setColor(Color.BLACK);
    //new Rectangle(50, 50, 100, 100).draw();
    shapeRenderer.end();
    // This should call Rectangle's draw method. I want a ShapeType.Filled rectangle,
    // but I'm unable to invoke that method once I add the actor to the stage and call the stage's draw method.
    stage.draw(); 
}

public void dispose() {
    stage.dispose();
}
}

Following is the Rectangle Actor class:

package com.mygdx.scan.game;

import com.badlogic.gdx.scenes.scene2d.Actor;

public class Rectangle extends Actor {

float xcord, ycord, width, height;
public Rectangle(float x , float y , float width, float height) {
    this.xcord = x;
    this.ycord = y;
    this.width = width;
    this.height = height;
}

// This needs to be called in the Scan class. I just want the draw method to be invoked.
// Also, I wish to draw many such rectangles. What is the best practice to get hold of the ShapeRenderer object from the Scan class?
// Should I use one instance of ShapeRenderer in the Scan class, or should I create a ShapeRenderer object for each Rectangle object?
// I also plan to repeat this for other objects such as circles, so I would like to know the best practice here.
public void draw() {
    Scan.shapeRenderer.rect(this.xcord, this.ycord, this.width, this.height);
}

}
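For what it's worth, the dispatch problem above can be illustrated with a self-contained sketch. The classes below are simplified stand-ins, not the real libgdx API: scene2d's Stage only ever calls the draw(Batch, float) method inherited from Actor, so a no-arg draw() is a new, unrelated method the stage never sees, while an override of the two-argument signature is reached automatically.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for com.badlogic.gdx.graphics.g2d.Batch (illustration only).
class Batch {}

// Stand-in for scene2d's Actor: the framework dispatches only through
// this signature; a no-arg draw() would not override it.
class Actor {
    public void draw(Batch batch, float parentAlpha) {}
}

// Stand-in for scene2d's Stage: draw() walks its actors and calls the
// overridable two-argument draw on each one.
class Stage {
    private final List<Actor> actors = new ArrayList<>();
    public void addActor(Actor a) { actors.add(a); }
    public void draw() {
        Batch batch = new Batch();
        for (Actor a : actors) a.draw(batch, 1f); // virtual dispatch
    }
}

public class Demo {
    static int calls = 0;

    static class Rectangle extends Actor {
        @Override
        public void draw(Batch batch, float parentAlpha) {
            calls++; // in real code: shapeRenderer.rect(...)
        }
    }

    public static void main(String[] args) {
        Stage stage = new Stage();
        stage.addActor(new Rectangle());
        stage.draw();
        System.out.println(calls); // prints 1
    }
}
```

In the real API the same idea applies: overriding draw(Batch batch, float parentAlpha) in the Rectangle actor is what lets stage.draw() reach it.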

Why is my Gouraud Shading not working correctly?

I am trying to write a 3D renderer in C using SDL and cglm, but my shading does not seem to work correctly. When rendering the teapot I get seams, and when I render a cube I get a bright white line between my triangles. I am fairly sure that I calculate my normals correctly, but I can show that code too if necessary. All my math functions like v3_normalise / cam_perspective should be correct too, because they are just aliases for cglm functions.


static void _draw_triangle(SDL_Renderer* renderer, int x1, int y1, int x2, int y2, int x3, int y3) {
    SDL_RenderLine(renderer, x1, y1, x2, y2);
    SDL_RenderLine(renderer, x2, y2, x3, y3);
    SDL_RenderLine(renderer, x3, y3, x1, y1);
}

static int32_t _compare_triangles(const void* a, const void* b) {
    triangle_t* triangle_a = (triangle_t*)a;
    triangle_t* triangle_b = (triangle_t*)b;

    float z1 = (triangle_a->vertices[0].z + triangle_a->vertices[1].z + triangle_a->vertices[2].z) / 3.0f;
    float z2 = (triangle_b->vertices[0].z + triangle_b->vertices[1].z + triangle_b->vertices[2].z) / 3.0f;

    float comparison = z2 - z1;
    if (comparison < 0.0f) return 1;
    if (comparison > 0.0f) return -1;
    return 0;
}

static void _render_triangles_mesh(SDL_Renderer* renderer, triangle_t* triangles_to_render, size_t num_triangles_to_render) {
    SDL_SetRenderDrawColor(renderer, 0xFF, 0, 0xFF, 0xFF);  // set color to magenta

    for (size_t j = 0; j < num_triangles_to_render; j++) {
        triangle_t triangle = dynamic_array_at(triangles_to_render, j);

        _draw_triangle(
            renderer,
            triangle.vertices[0].x, triangle.vertices[0].y, 
            triangle.vertices[1].x, triangle.vertices[1].y,
            triangle.vertices[2].x, triangle.vertices[2].y
        );
    }
}

static v3i _calc_vertex_color(v3 normal, v3 light_direction) {
    float intensity = glm_max(0.1f, v3_dot(normal, light_direction)) * 1.5f;
    v3i color = v3i_of((int32_t)glm_clamp(255.0f * intensity, 0.0f, 255.0f));

    return color;
}

static void _render_triangles_filled(SDL_Renderer* renderer, triangle_t* triangles_to_render, size_t num_triangles_to_render, v3* vertex_normals, v3 light_direction) {
    /* convert to SDL_Vertex triangles and add to vertices to render */
    SDL_Vertex vertices[num_triangles_to_render * 3];

    for (size_t i = 0; i < num_triangles_to_render; i++) {
        triangle_t triangle = dynamic_array_at(triangles_to_render, i);

        for (size_t j = 0; j < 3; j++) {
            v3i color = _calc_vertex_color(vertex_normals[triangle.indices[j]], light_direction);

            /* add vertex to SDL vertices */
            SDL_Vertex vertex = {
                .position = {triangle.vertices[j].x, triangle.vertices[j].y}, 
                .color = {color.r, color.g, color.b, 0xFF}
            };
            vertices[i * 3 + j] = vertex;
        }
    }
        
    /* render triangles */
    SDL_RenderGeometry(renderer, NULL, vertices, num_triangles_to_render * 3, NULL, 0); 
}

int32_t render(state_t* state, mesh_t* mesh, v3 object_offset) {
    static float alpha = 0;
    alpha += 0.6f * state->time.delta_sec;

    m4 rotation_matrix = glms_euler_xyz(v3_of(0.0f, alpha, 0.0f)); // glms_euler_xyz(v3_of(alpha * 0.5f, 0.0f, alpha));
    m4 translation_matrix = glms_translate_make(object_offset);
    m4 world_matrix = m4_mul(translation_matrix, rotation_matrix);

    v3 up = v3_of(0.0f, 1.0f, 0.0f);
    v3 target = v3_add(state->engine.camera.position, state->engine.camera.direction);
    m4 camera_matrix = cam_lookat(state->engine.camera.position, target, up);
    m4 view_matrix = glms_inv_tr(camera_matrix);
    
    triangle_t* triangles_to_render = dynamic_array_create(triangle_t);

    for (size_t i = 0; i < mesh->num_triangles; i++) {
        triangle_t triangle = dynamic_array_at(mesh->triangles, i);
        triangle_t triangle_transformed, triangle_projected, triangle_viewed;

        /* rotate and translate triangle */
        triangle_transformed = triangle;
        triangle_transformed.vertices[0] = m4_mulv(world_matrix, triangle.vertices[0]);
        triangle_transformed.vertices[1] = m4_mulv(world_matrix, triangle.vertices[1]);
        triangle_transformed.vertices[2] = m4_mulv(world_matrix, triangle.vertices[2]);
    
        /* world space to camera space */
        triangle_viewed = triangle_transformed;
        triangle_viewed.vertices[0] = m4_mulv(view_matrix, triangle_transformed.vertices[0]);
        triangle_viewed.vertices[1] = m4_mulv(view_matrix, triangle_transformed.vertices[1]);
        triangle_viewed.vertices[2] = m4_mulv(view_matrix, triangle_transformed.vertices[2]);

        /* 3d to 2d */
        triangle_projected = triangle_viewed;
        triangle_projected.vertices[0] = m4_mulv(state->engine.projection_matrix, triangle_viewed.vertices[0]);
        triangle_projected.vertices[1] = m4_mulv(state->engine.projection_matrix, triangle_viewed.vertices[1]);
        triangle_projected.vertices[2] = m4_mulv(state->engine.projection_matrix, triangle_viewed.vertices[2]);
        triangle_projected.vertices[0] = v4_divs(triangle_projected.vertices[0], triangle_projected.vertices[0].w);
        triangle_projected.vertices[1] = v4_divs(triangle_projected.vertices[1], triangle_projected.vertices[1].w);
        triangle_projected.vertices[2] = v4_divs(triangle_projected.vertices[2], triangle_projected.vertices[2].w);

        /* backface culling using winding order */
        v3 line1 = v3_sub(v3_from(triangle_projected.vertices[1]), v3_from(triangle_projected.vertices[0]));
        v3 line2 = v3_sub(v3_from(triangle_projected.vertices[2]), v3_from(triangle_projected.vertices[0]));
        float sign = line1.x * line2.y - line2.x * line1.y;
        if (sign > 0.0f) continue;
        
        /* scale into view */
        triangle_projected.vertices[0].x = map(triangle_projected.vertices[0].x, -1.0f, 1.0f, 0, WIDTH);
        triangle_projected.vertices[0].y = map(triangle_projected.vertices[0].y, -1.0f, 1.0f, 0, HEIGHT);
        triangle_projected.vertices[1].x = map(triangle_projected.vertices[1].x, -1.0f, 1.0f, 0, WIDTH);
        triangle_projected.vertices[1].y = map(triangle_projected.vertices[1].y, -1.0f, 1.0f, 0, HEIGHT);
        triangle_projected.vertices[2].x = map(triangle_projected.vertices[2].x, -1.0f, 1.0f, 0, WIDTH);
        triangle_projected.vertices[2].y = map(triangle_projected.vertices[2].y, -1.0f, 1.0f, 0, HEIGHT);
        
        /* add triangle to list */
        dynamic_array_append(triangles_to_render, triangle_projected);
    }

    /* transform vertex normals */
    v3* transformed_normals = malloc(sizeof(v3) * mesh->num_vertices);
    m3 normal_matrix = m3_inv(m4_pick3t(world_matrix));
    
    for (size_t i = 0; i < mesh->num_vertices; i++) {
        v3 normal = dynamic_array_at(mesh->vertex_normals, i);
        v3 normal_transformed = v3_normalize(m3_mulv(normal_matrix, normal));
        transformed_normals[i] = normal_transformed;
    }

    /* sort triangles back to front */
    size_t num_triangles_to_render = dynamic_array_get_length(triangles_to_render);
    qsort(triangles_to_render, num_triangles_to_render, sizeof(triangle_t), _compare_triangles);

    /* draw triangles */
    #ifdef DEBUG
        _render_triangles_mesh(state->renderer, triangles_to_render, num_triangles_to_render);
    #else
        _render_triangles_filled(state->renderer, triangles_to_render, num_triangles_to_render, transformed_normals, state->engine.light.direction);
    #endif
    
    /* cleanup */
    free(transformed_normals);
    dynamic_array_destroy(triangles_to_render);

    return state->retval;
}

EDIT: Image of the normals


Regarding pixel arrays and textures in SDL2

Currently, the way I render my frames is as follows. I have two arrays of pixels (in SDL_PIXELFORMAT_ABGR8888) called frame_pixels (which represents the pixel data for the current frame) and clear_frame_pixels (which is a constant array of black pixels). First I clear the renderer and reset frame_pixels by copying clear_frame_pixels to it. During my draw calls, I write to frame_pixels and then copy the data into a texture called frame_texture (with access SDL_TEXTUREACCESS_STREAMING), which is then copied onto the renderer and presented.

SDL_RenderClear(renderer);
memcpy(frame_pixels, clear_frame_pixels, size_of_frame_pixels);

// Draw functions go here.

// Lock the frame texture and copy the pixel array into it.
unsigned char *locked_pixels;
int pitch;
SDL_LockTexture(frame_texture, NULL, (void **)&locked_pixels, &pitch);
memcpy(locked_pixels, frame_pixels, size_of_frame_pixels);
SDL_UnlockTexture(frame_texture);
SDL_RenderCopy(renderer, frame_texture, NULL, NULL);

SDL_RenderPresent(renderer);

I have a few questions, the first of which concerns the top 2 lines:

  • Do I need SDL_RenderClear anymore? Without it, my program runs with no difference, but I'm worried that there are underlying side effects that have not hit me yet.
  • Is there a faster way to do this? The reason I'm rendering using textures and pixel arrays is because it is much faster than the standard SDL_RenderDrawX functions. So any further optimization is appreciated.
  • In terms of terminology, would frame_pixels be called a "frame buffer"? What would frame_texture be called, then?

Graphics2D.drawImage not working

I'm trying to come up with my own game using Java AWT after watching a few video tutorials. However, I encountered a problem where I cannot draw an external image file that I loaded into a BufferedImage object.

The problem seems to be in the method that I'm using to draw the image on the screen, where I'm using the Graphics2D.drawImage() method.

Here is part of my code (I modified and skipped parts that seemed irrelevant to the topic):

Window Class

public class Window extends JPanel{
    public Window(int width, int height) {
        JFrame frame = new JFrame("Zerro's Game");
        
        frame.setPreferredSize(new Dimension(width, height));
        
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setLocationRelativeTo(null);
        frame.setResizable(false);
        frame.setVisible(true);
        frame.pack();
    }
}

Game Class

public class Game extends JFrame implements Runnable {
    // Dimension for Main Frame
    private static final int WIDTH = 640;
    private static final int HEIGHT = 480;

    // Image
    private BufferedImage image;
    private Graphics2D g;

    public void run() {
        this.requestFocus();
        
        // For rendering purposes
        image = new BufferedImage(WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB);
        g = (Graphics2D) image.getGraphics();
        
        //Game Loop
        long now;
        long updateTime;
        long wait;

        final int TARGET_FPS = 60;
        final long OPTIMAL_TIME = 1000000000 / TARGET_FPS;
        
        while (isRunning) {
            now = System.nanoTime();
            
            update();
            render();
            
            updateTime = now - System.nanoTime();
            wait = (OPTIMAL_TIME - updateTime) / 1000000;
                    
            try {
                Thread.sleep(wait);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    private void render() {
        if(gameState == STATE.GAME) {
            handler.render(g);
        } else if(gameState == STATE.MENU) {
            menu.render(g);
        }
    }

Menu Class

public class Menu extends KeyAdapter{
    private BufferedImage image;

    public void render(Graphics2D g) {
        try {
            image = ImageIO.read(getClass().getResource("/Pixel_Background.png"));
            g.drawImage(image, 0, 0, null);
        } catch(Exception e) {
            e.printStackTrace();
        }
    }
}

This code results in an empty frame without any content inside. I confirmed that the image loads properly by using System.out.println(image.getHeight()), which prints out the exact height of the image that I'm using.

I've seen some comments on the internet saying that I need to use the paintComponent() method. However, I'm wondering whether paintComponent() is a bad idea in game development, since the video tutorials (3 of them) that I watched didn't use that method to draw images.

Also, I'm wondering why we pass the Graphics2D object as a parameter to all the render() methods.
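It may help to separate two things here: drawing into the BufferedImage (which evidently works, since the image loads) and getting that image onto the visible component, which in Swing normally happens inside paintComponent(Graphics g) via g.drawImage(image, 0, 0, null). The off-screen half can be verified headlessly in isolation; this is a standalone check, not the game's actual code:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class OffscreenCheck {
    // Draw into an off-screen image, much like the Game class does,
    // and sample one pixel back out of it.
    static int drawAndSample() {
        BufferedImage image =
                new BufferedImage(64, 48, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = image.createGraphics();
        g.setColor(Color.RED);
        g.fillRect(0, 0, 64, 48);
        g.dispose();
        return image.getRGB(10, 10) & 0xFFFFFF;
    }

    public static void main(String[] args) {
        // The pixels really are in the off-screen image; they just never
        // reach the screen unless that image is itself painted onto a
        // visible component (e.g. in paintComponent).
        System.out.println(drawAndSample() == 0xFF0000); // prints true
    }
}
```

This also suggests why render() methods take a Graphics2D parameter: the caller decides which surface (off-screen image or the component's own graphics) everything is drawn onto.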

Depth sorting issue

I've been working on a custom tile-based map editor for a while now, and everything worked as expected, or so I thought until I tried rendering the actual map, including the dynamic objects.

As it currently works, it saves data into a buffer in the format: [ground layer] [on-ground layer] (flowers/rocks) [wall layer] [on-wall layer] (windows/torches/bookshelves)

So basically there's a total of 4 layers. The tiles are rendered from the top left corner to the bottom right corner in the following order:

  • [ground layer] → all tiles
  • [on-ground layer] → all tiles
  • [wall layer] → does an instance exist on that spot? If it does, the instance gets rendered first, and then the tile itself.
  • [on-wall layer] → renders normally right after that.

This system seems perfectly fine if the game has grid-based movement and all the dynamic sprites are the same size as the tiles. Why? Well, you'd just walk anywhere and either be covered by a certain tile or be drawn over it. On the other hand, if any dynamic instance has, say, a sprite height bigger than the actual tile size, depth sorting issues appear.

Example: we've got a pillar which is 16x48 in size. The very bottom of the pillar has a 16x16 collider and can't be stepped on, but the mid and top parts don't have any colliders. Now if the player steps on the topmost tile of the pillar, everything renders normally. On the other hand, if they step on the mid tile, the player's head gets rendered over the topmost tile.

Screenshot of tile map sorting problem

I'm wondering if there's an actual solution for depth ordering in a case like this.
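One common approach to this kind of problem is to stop relying on layer order alone for tall objects and instead sort all dynamic sprites (and tall tiles such as the pillar) by the y coordinate of their base, so whatever stands lower on the screen is drawn later and thus appears in front. Below is a minimal sketch of that comparator; the Sprite class and its fields are made up for illustration:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class DepthSort {
    // Hypothetical sprite: y is its top edge, so y + height is the base
    // line ("feet"), which is what decides draw order.
    static class Sprite {
        final String name;
        final int y, height;
        Sprite(String name, int y, int height) {
            this.name = name; this.y = y; this.height = height;
        }
        int baseY() { return y + height; }
    }

    static List<String> drawOrder(List<Sprite> sprites) {
        List<Sprite> sorted = new ArrayList<>(sprites);
        // Lower base on screen => drawn later => appears in front.
        sorted.sort(Comparator.comparingInt(Sprite::baseY));
        List<String> names = new ArrayList<>();
        for (Sprite s : sorted) names.add(s.name);
        return names;
    }

    public static void main(String[] args) {
        List<Sprite> scene = List.of(
            new Sprite("player", 40, 24), // base at y = 64, stands lower
            new Sprite("pillar", 0, 48)   // base at y = 48
        );
        System.out.println(drawOrder(scene)); // prints [pillar, player]
    }
}
```

With this scheme a player whose feet are below the pillar's base is drawn over it, and one standing behind the pillar (feet above its base) is drawn under it, regardless of which tile row their head overlaps.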
