FreshRSS

  • ✇Recent Questions - Game Development Stack Exchange
  • Why does reading my depth texture in GLSL return less than one? — Grimelios

Why does reading my depth texture in GLSL return less than one?

I've created a depth texture in OpenGL (using C#) as follows:

// Create the framebuffer.
var framebuffer = 0u;

glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

// Create the depth texture.
var depthTexture = 0u;

glGenTextures(1, &depthTexture);
glBindTexture(GL_TEXTURE_2D, depthTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 800, 600, 0, GL_DEPTH_COMPONENT, GL_FLOAT, null);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0);

Later, I sample from the depth texture as follows:

float depth = texture(depthTexture, texCoords).r;

But even when no geometry has been rendered to that pixel, the depth value coming back is less than 1 (it seems to be very slightly above 0.5). This is confusing to me since, per the documentation on glClearDepth, the default clear value is 1. Note that this is not a problem of linearizing depth, since I'm attempting to compare depth directly (using the same near and far planes), not convert that depth back to world space.

Why is my depth texture sample returning <1 when no geometry has been rendered?
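For reference, quantization in the depth format itself cannot explain a reading near 0.5: GL_DEPTH_COMPONENT24 stores a normalized value as 24-bit fixed point, so a cleared depth of 1.0 round-trips exactly. A plain-Java sketch of that storage (illustration only, not the original C# bindings):

```java
public class DepthQuantization {
    static final long MAX_24BIT = (1L << 24) - 1;

    // Encode a [0,1] depth into 24-bit fixed point, as GL_DEPTH_COMPONENT24 does.
    static long encode(double depth) {
        return Math.round(depth * MAX_24BIT);
    }

    // Decode back to a normalized double, as texture() returns it.
    static double decode(long stored) {
        return (double) stored / MAX_24BIT;
    }

    public static void main(String[] args) {
        // A cleared depth of 1.0 round-trips exactly...
        System.out.println(decode(encode(1.0)));
        // ...and the worst-case quantization step is ~6e-8, nowhere near 0.5.
        System.out.println(1.0 / MAX_24BIT);
    }
}
```

Since the format can represent 1.0 exactly, a ~0.5 sample usually points at the shader reading a different texture than intended (for example, a sampler uniform left at its default unit) rather than at the depth data itself.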

  • LWJGL and JOML rotation issues — PjRock

LWJGL and JOML rotation issues

I'm making a scene builder in LWJGL. Objects in the scene have positions and rotations. Like most scene builders/modelers, I have colored handles to show each object's orientation. I have a typical setup: red points in the positive x direction, blue in the positive z.

The problem is the handles don't point in the correct direction. I have attached a screenshot showing the issue. The cube on the right has a rotation of 0, 0, 0, and its handles are correct. The cube on the left has a rotation of 0, 30, 0. What confuses me is: why is the blue handle rotated 30 degrees clockwise, while the mesh is rotated 30 degrees COUNTER-clockwise?

[screenshot: the two cubes with mismatched handle and mesh rotations]

I compute the cube's rotation with

public Matrix4f getLocalMatrix() {
    // I update the position, rotation and scale directly, so I recalculate the matrix every time.
    return this.localMatrix.translationRotateScale(this.position, this.rotation, this.scale);
}

And to draw the handles I use

Matrix4f m = gameObject.getLocalMatrix();
gizmos.drawRay(gameObject.position, m.positiveZ(new Vector3f()));

...

public void drawRay(Vector3f start, Vector3f direction) {
    glBegin(GL_LINES);
    glVertex3f(start.x, start.y, start.z);
    glVertex3f(start.x + direction.x, start.y + direction.y, start.z + direction.z);
    glEnd();
}

I use a simple vertex shader; I don't think that's the issue:

layout (location=0) in vec3 inPosition;
layout (location=1) in vec2 texCoord;

out vec2 outTextCoord;

uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;

void main() {
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(inPosition, 1.0);
    outTextCoord = texCoord;
}

The uniforms are being set correctly (I'm assuming); modelMatrix is set to gameObject.getLocalMatrix().

The only thing I can think of is that some of my code uses right-handed coordinates and some uses left-handed?
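The mirrored-angle symptom can be reproduced with plain rotation math, no JOML involved. For a right-handed rotation of +30° about Y applied to column vectors, the local +Z axis swings toward +X; the inverse (or transposed) matrix, or a sign flip on the angle, swings it toward −X, which looks like the opposite winding on screen. A minimal sketch under those assumed conventions (not JOML code):

```java
public class RotationHandles {
    // Rotate v by angle (radians) about the Y axis, right-handed,
    // column-vector convention (the one JOML documents).
    static double[] rotateY(double[] v, double angle) {
        double c = Math.cos(angle), s = Math.sin(angle);
        return new double[] { c * v[0] + s * v[2], v[1], -s * v[0] + c * v[2] };
    }

    public static void main(String[] args) {
        double a = Math.toRadians(30);
        // +30 deg about Y swings local +Z toward +X ...
        double[] localZ = rotateY(new double[] {0, 0, 1}, a);
        System.out.printf("+30: x=%.3f z=%.3f%n", localZ[0], localZ[2]);
        // ... while -30 deg (equivalently, the inverse/transpose of the
        // +30 matrix) swings it toward -X: the mirrored handle direction.
        double[] flipped = rotateY(new double[] {0, 0, 1}, -a);
        System.out.printf("-30: x=%.3f z=%.3f%n", flipped[0], flipped[2]);
    }
}
```

So if the gizmo and the mesh disagree by mirrored angles, one of the two is effectively being transformed by the inverse of the matrix used for the other (or the angle sign / handedness differs between the two code paths).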

Differences between rotations and translations of different camera properties in LibGDX

I am trying to understand the camera API of LibGDX (applicable to the perspective camera ONLY).

It does not make sense to me that you can call rotate and translate on so many different properties of the camera. What is the difference between them?

Here is the list of rotate and translate methods that act on the LibGDX camera:

  1. camera.translate() , camera.rotate()
  2. camera.view.translate() , camera.view.rotate()
  3. camera.position.traMul(Matrix4 m) , camera.position.rotate()
  4. camera.direction.traMul(Matrix4 m) , camera.direction.rotate()

To my understanding, camera.view is the actual frustum of the camera: what can be seen on the screen! What is the difference between rotating (translating) the camera's direction and rotating (translating) the camera's view?

What if I just translate or rotate the camera itself, and NOT the view, the direction, or the position of the camera? What effect will that have?

I have read the documentation and it is really lacking! Could someone please help demystify these camera concepts?
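My reading of how these properties relate (a sketch with plain arrays instead of LibGDX's Matrix4, so treat it as an assumption rather than the library's documented contract): camera.position, camera.direction and camera.up are the raw vectors; camera.update() rebuilds camera.view, the inverse of the camera's world transform, from them, which is why edits made directly to camera.view get overwritten on the next update(). For a camera with no rotation, that inverse relationship reduces to negating the position:

```java
public class CameraVsView {
    // For a camera with no rotation, the view matrix is the inverse of the
    // camera's world transform: its translation is the negated camera position.
    static double[] viewTranslation(double[] cameraPosition) {
        return new double[] { -cameraPosition[0], -cameraPosition[1], -cameraPosition[2] };
    }

    public static void main(String[] args) {
        double[] pos = {10, 12, 17.5};
        double[] viewT = viewTranslation(pos);
        // Moving the camera by +t has the same on-screen effect as
        // translating the view matrix by -t.
        System.out.printf("camera at (%.1f, %.1f, %.1f) -> view translation (%.1f, %.1f, %.1f)%n",
                pos[0], pos[1], pos[2], viewT[0], viewT[1], viewT[2]);
    }
}
```

Under that reading: camera.translate()/camera.rotate() change position/direction/up and take effect at update(); camera.view.translate() edits the derived matrix directly (and is lost at the next update()); and camera.position.traMul()/camera.direction.rotate() manipulate the raw vectors with no effect until update() is called.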

LibGDX camera drifts far away from ModelInstance after translation and rotation

My camera starts in the position I expect. However, after some time it drifts far away from the red ModelInstance! How do I keep it positioned right behind the red ModelInstance without drifting away?

Here is how I initially set up my camera:

//initial set up of my camera:

Gdx.input.setCursorCatched(true);
Gdx.input.setCursorPosition(Gdx.graphics.getWidth() / 2, Gdx.graphics.getHeight() / 2);

camera = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

camera.position.set(10f, 12f, 17.5f);

camera.lookAt(10f, 0, 10f);

camera.up.set(new Vector3(10f, 0, 10f).Y);
camera.near = 0.1f;
camera.far = 300f;

camera.update();

Here is how I translate my red ModelInstance after pressing the W key:

BoundingBox bbox0091 = new BoundingBox();
ThreeDWithMultipleScreensGame.gameMainPlayerReference.calculateBoundingBox(bbox0091);
bbox0091.mul(ThreeDWithMultipleScreensGame.gameMainPlayerReference.transform);
Vector3 centerV = new Vector3();
bbox0091.getCenter(centerV);

vecyr.set(0, 0, 0); //clearing vecyr... important
mat4_.set(ThreeDWithMultipleScreensGame.gameMainPlayerReference.transform);
vecyr.z -= deltatime * LevelOneScreen.playerSpeed;

if (Float.compare(Math.abs(centerV.x - 10f), 0.001f) > 0)
    vecyr.x -= deltatime * centerV.x;

mat4_.translate(vecyr);
ThreeDWithMultipleScreensGame.gameMainPlayerReference.transform.set(mat4_);
ThreeDWithMultipleScreensGame.gameMainPlayerReference.calculateTransforms();
ThreeDWithMultipleScreensGame.gameMainPlayerReference.transform.getTranslation(ThreeDWithMultipleScreensGame.playerCurrentPosition);

Here is how I update my camera, hoping that it will follow right behind my red ModelInstance:

camera.up.set(0, 1, 0);

Vector3 tmpVector = new Vector3();
camera.position.add(tmpVector.set(camera.direction).scl(deltatime * LevelOneScreen.playerSpeed).x, 0,
        tmpVector.set(camera.direction).scl(deltatime * LevelOneScreen.playerSpeed).z);
camera.lookAt(centerV.x, 0, centerV.z - 7.5f);
camera.update();

Here is a picture that shows the drifting effect I am talking about:

[picture: the camera-to-player gap growing over time]

Notice how the black line gets longer after some time of rotating and translating the red ModelInstance. How do I fix this problem so the camera always stays right behind the red ModelInstance?
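One way to see the drift: the camera is advanced incrementally along camera.direction while the player is advanced by a different rule, so the two integrate apart over many frames. A drift-free alternative is to recompute the camera position from the player's position and a fixed offset every frame rather than accumulating deltas. A sketch in plain Java (the names are hypothetical, not LibGDX API):

```java
public class CameraFollow {
    // Instead of accumulating per-frame deltas (which diverge as rotation and
    // translation interleave), derive the camera from the player each frame.
    static double[] followPosition(double[] playerCenter, double[] offset) {
        return new double[] {
            playerCenter[0] + offset[0],
            playerCenter[1] + offset[1],
            playerCenter[2] + offset[2]
        };
    }

    public static void main(String[] args) {
        double[] offset = {0, 12, 7.5};      // behind and above the player
        double[] player = {10, 0, 10};
        for (int frame = 0; frame < 1000; frame++) {
            player[2] -= 0.016 * 5;          // player moves forward each frame
            double[] cam = followPosition(player, offset);
            // cam is always exactly `offset` away from the player: no drift.
        }
        System.out.println("camera-to-player distance stays "
                + Math.sqrt(offset[0] * offset[0] + offset[1] * offset[1] + offset[2] * offset[2]));
    }
}
```

With this pattern, the camera.position.add(...) and camera.lookAt(...) calls above would be replaced by a single set-from-player step followed by camera.update().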

ModelInstance rotation breaks after translating the ModelInstance forward in a LibGDX project

The following code rotates my ModelInstance about its center-point Y axis. HOWEVER, when I move the ModelInstance forward, the rotation goes all wrong! I am not sure what is happening. I need the rotation to work the same way always, NOT just before I start moving the ModelInstance forward!

Here is how I apply my rotation after pressing the D key:

BoundingBox bbox = new BoundingBox();
ThreeDWithMultipleScreensGame.gameMainPlayerReference.calculateBoundingBox(bbox);
bbox.mul(/*node_.globalTransform*/ThreeDWithMultipleScreensGame.gameMainPlayerReference.transform);
Vector3 centerVector_ = new Vector3();
bbox.getCenter(centerVector_);

Gdx.app.log("okcenter", "here: " + centerVector_.toString());

if (true) {

    Matrix4 m4 = new Matrix4();
    m4.set(ThreeDWithMultipleScreensGame.gameMainPlayerReference.transform);
    //m4.translate(10,0,10).rotate(0,1,0,-20f*LevelOneScreen.playerRotationSpeed*deltatime).translate(-10,0,-10);
    m4.translate(centerVector_).rotate(0, 1, 0, -20f * LevelOneScreen.playerRotationSpeed * deltatime).translate(-1f * centerVector_.x, -1f * centerVector_.y, -1f * centerVector_.z);
    Gdx.app.log("multiply", "we have: " + centerVector_.cpy().scl(-1f).toString());
    ThreeDWithMultipleScreensGame.gameMainPlayerReference.transform.set(m4);
    ThreeDWithMultipleScreensGame.gameMainPlayerReference.calculateTransforms();

}

And here is how I translate my ModelInstance forward:

vecyr.set(0, 0, 0); //clearing vecyr... important
mat4_.set(ThreeDWithMultipleScreensGame.gameMainPlayerReference.transform);
vecyr.z -= deltatime * LevelOneScreen.playerSpeed;
mat4_.translate(vecyr);
ThreeDWithMultipleScreensGame.gameMainPlayerReference.transform.set(mat4_);
ThreeDWithMultipleScreensGame.gameMainPlayerReference.calculateTransforms();
ThreeDWithMultipleScreensGame.gameMainPlayerReference.transform.getTranslation(ThreeDWithMultipleScreensGame.playerCurrentPosition);

camera.position.add(vecyr);

camera.lookAt(10f, 0, ThreeDWithMultipleScreensGame.playerCurrentPosition.z - 7.5f);
camera.update();

Why does the rotation only work correctly before I start moving my ModelInstance forward?
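The translate-rotate-translate pattern itself is right, but the pivot has to be expressed in the same space the matrix multiplication happens in. Matrix4.translate() post-multiplies in the object's local space, while a bounding box multiplied by the world transform yields a world-space center; the mismatch only shows up once the object has moved away from the origin, which matches the symptom. A plain-Java sketch of rotation about a pivot (XZ plane, angle about Y), shown so the invariant is explicit:

```java
public class PivotRotation {
    // Rotate point p about pivot c by angle (radians): T(c) * R * T(-c) applied
    // to p. Both p and c must be expressed in the SAME space for this to hold.
    static double[] rotateAbout(double[] p, double[] c, double angle) {
        double cos = Math.cos(angle), sin = Math.sin(angle);
        double x = p[0] - c[0], z = p[1] - c[1];      // translate pivot to origin
        double rx = cos * x + sin * z;                 // rotate about Y, in XZ
        double rz = -sin * x + cos * z;
        return new double[] { rx + c[0], rz + c[1] };  // translate back
    }

    public static void main(String[] args) {
        double[] center = {10, 10};
        // The pivot itself must stay fixed under the rotation.
        double[] still = rotateAbout(center, center, Math.toRadians(30));
        System.out.printf("pivot -> (%.3f, %.3f)%n", still[0], still[1]);
        // If the pivot fed in is world-space but the translate happens in
        // local space (as with Matrix4.translate on a moved object), the
        // "pivot" no longer maps to itself and the object orbits instead.
    }
}
```

A common fix along these lines is to transform the world-space center back into the object's local space (or compute the bounding-box center without applying the world transform) before the translate-rotate-translate sequence.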

Rotate a ModelInstance node about its own center-point Y axis in LibGDX

I am trying to rotate the node of a ModelInstance as follows:

Note: the node and ModelInstance are rotating, BUT not about their own center Y axis!

Node node_ = myModelInstance.getNode("boxy", true);

BoundingBox bbox = new BoundingBox();
myModelInstance.calculateBoundingBox(bbox);
bbox.mul(node_.globalTransform/*myModelInstance.transform*/);
Vector3 centerVector_ = new Vector3();
bbox.getCenter(centerVector_);

if (true) {
    try {

        if (node_ != null) {

            Gdx.app.log("information", "Before rotation: " + node_.localTransform.toString());

            // Extract the local Y-axis and create a rotation matrix
            //Vector3 yAxis = new Vector3(0, centerVector_.Y, 0);

            float rotationAngle = MathUtils.degreesToRadians * 45;

            Gdx.app.log("information", "Rotation angle: " + rotationAngle);

            // Set rotation matrix based on the Y-axis and angle
            rotationMatrix.setToRotation(centerVector_.Y/*yAxis*/, rotationAngle);

            // Log the rotation matrix
            Gdx.app.log("information", "Rotation matrix: " + rotationMatrix.toString());

            // Apply the rotation to the node's local transform
            node_.globalTransform/*localTransform*/.mulLeft(rotationMatrix);
            // node_.globalTransform.getRotation(new Quaternion(),true);
            node_.calculateLocalTransform();

            //myModelInstance.transform.set(node.localTransform);

            // myModelInstance.transform.set(node.localTransform.mulLeft(rotationMatrix));

            // Log the local transform after applying rotation
            Gdx.app.log("information", "After rotation: " + node_.localTransform.toString());

            // Recalculate the transforms to update the hierarchy
            //myModelInstance.calculateTransforms();

            Gdx.app.log("error44", "Rotation applied successfully.");
        } else {
            Gdx.app.log("error44", "Node 'boxy' not found.");
        }
    } catch (Exception e) {
        e.printStackTrace();
        Gdx.app.log("error44", "Exception: " + e.toString());
    }
}

I need the ModelInstance and its node to rotate about the Y axis through the center-point of the ModelInstance! How do I do this?

Here is a picture of the red box ModelInstance that is supposed to rotate in place about its own Y axis, i.e. the normal to its top face!

[picture: the red box ModelInstance]

  • How to create a SIMPLE Skybox using OpenGL and SDL — ガブリエル Gabriel

How to create a SIMPLE Skybox using OpenGL and SDL

As the title says, I'm trying to make a simple skybox to learn how it works, using, of course, OpenGL and SDL.

I have tried reading some sites; here they are: link 1, link 2, link 3.

None of them were of much use, since they are poorly written or simply too complex. For example, one of them uses GLFW instead of SDL, while another uses an approach that I didn't understand and wouldn't use anyway, because I don't want to rewrite all my code JUST so that I can try it.

They also use a camera implementation (all three of the links, if I'm not mistaken), but I don't know if I necessarily need a camera, since my objective isn't moving or rotating around; it's just to draw the skybox.

They also use shaders, which I also don't know are necessary, but I tried to implement them anyway. All of my attempts compiled and didn't show any errors, but they also didn't draw anything, just a grey screen. Probably because of glClearColor(0.1f, 0.1f, 0.1f, 1.0f), as some people said (I searched and read about similar problems); they also said that maybe it was a geometry problem. I don't know if it is, or how to fix it, since no error is shown.

Also, don't ask me for the code of each attempt, because I don't have it anymore; it's scattered around, some commented out, some deleted and forgotten. I simply can't put it all together anymore. But from reading the links I got some understanding of it, so instead I will share my understanding.

To draw something, or in this case the skybox, I would need:
1° Thing, create the VAO and VBO:

unsigned int VAO, VBO; 
glGenVertexArrays(1, &VAO); 
glGenBuffers(1, &VBO); 
glBindVertexArray(VAO); 
glBindBuffer(GL_ARRAY_BUFFER, VBO); 
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), &vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0); 
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);

2° Thing, generate a texture ID, bind a texture to it and set some parameters:

unsigned int ID;
int width, height;
unsigned char *image = //load image here
glGenTextures(1, &ID);
glBindTexture(GL_TEXTURE_CUBE_MAP, ID);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

*image here refers to the loaded image; one of the links uses stb_image and another SDL_LoadBMP, for example. I tried all of them. I don't know if it's relevant to use the same loader the tutorial is using; while I tried them, I also put in the error checks that the tutorials gave, and no error was given whatsoever. The book that I'm reading uses SOIL_load_image, though.

SOIL_load_image("path", &width, &height, 0, SOIL_LOAD_AUTO);
//So:
data = SOIL_load_image("path", &width, &height, 0, SOIL_LOAD_AUTO);

*Also, a couple of observations:
In the case of link 1, they use gluBuild2DMipmaps with GL_TEXTURE_2D.
In the case of link 2, they use the above glTexImage2D with GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, where i goes from 0 to 5, one for each face of the cube.
In the case of link 3, they use glTexImage2D with GL_TEXTURE_2D, but individually for each face.

3° Thing, then draw in the main loop:

//glEnable(GL_DEPTH_TEST); One of the tutorials says that I should use this before drawing.

//Loop
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);

    glBindVertexArray(VAO);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_CUBE_MAP, texture);
    glDrawArrays(GL_TRIANGLES, 0, 36);
    glBindVertexArray(0);

    SDL_GL_SwapWindow(mWindow);
//

*One of the tutorials just draws a cube right away; I also tried this, with no success:

//repeat for each face; here the tutorial obviously generates each face texture ID individually.
glBindTexture(GL_TEXTURE_2D, face_textureID);
glBegin(GL_QUADS);
    glTexCoord2f(0, 0);
    glVertex3f(size/2, size/2, size/2);
    glTexCoord2f(1, 0);
    glVertex3f(-size/2, size/2, size/2);
    glTexCoord2f(1, 1);
    glVertex3f(-size/2, -size/2, size/2);
    glTexCoord2f(0, 1);
    glVertex3f(size/2, -size/2, size/2);
glEnd();
//
//

*Here a lot happens in the tutorials I read that I don't understand, or don't know the purpose of.

//Some tutorials use those functions
glDepthFunc(GL_LESS);
glDepthFunc(GL_LEQUAL);

glLoadIdentity();

glLightfv(GL_LIGHT0,GL_POSITION,pos); //float pos[]={-1.0,1.0,-2.0,1.0};

glEnable(GL_LIGHTING);
//and
glDisable(GL_LIGHTING);

glEnable(GL_DEPTH_TEST);
//and
glDisable(GL_DEPTH_TEST);
   
glEnable(GL_TEXTURE_2D);
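For what it's worth, the glDepthFunc(GL_LESS)/glDepthFunc(GL_LEQUAL) pair shows up in skybox tutorials for a specific reason: a common trick forces the skybox's clip-space z equal to w, so after the perspective divide its depth is exactly 1.0, tying with a depth buffer cleared to 1.0. GL_LESS rejects that tie; GL_LEQUAL accepts it. The arithmetic, sketched in plain Java rather than GLSL:

```java
public class SkyboxDepth {
    // After the perspective divide, NDC depth is z/w. The skybox trick sets
    // clip-space z = w so the skybox lands exactly on the far plane.
    static double ndcDepth(double clipZ, double clipW) {
        return clipZ / clipW;
    }

    public static void main(String[] args) {
        double w = 37.2;                  // arbitrary clip-space w
        double depth = ndcDepth(w, w);    // z forced to w -> depth exactly 1.0
        System.out.println("skybox NDC depth = " + depth);
        // Against a buffer cleared to 1.0, GL_LESS rejects a depth of 1.0,
        // which is why tutorials switch to GL_LEQUAL while drawing the skybox.
        System.out.println("passes GL_LESS:   " + (depth < 1.0));
        System.out.println("passes GL_LEQUAL: " + (depth <= 1.0));
    }
}
```

So the tutorials are not being arbitrary: GL_LEQUAL is used for the skybox pass and GL_LESS (the default) for everything else.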
 

*In addition. Shaders.

unsigned int vertex, fragment;
unsigned int shaderID;

//Example of a shader of a tutorial
const char *vertexShaderSource = "#version 330 core\n"
"layout (location = 0) in vec3 aPos;\n"
"out vec4 vertexColor;\n"
"void main()\n"
"{\n"
"    gl_Position = vec4(aPos, 1.0);\n"
"    vertexColor = vec4(0.9, 0.5, 0.0, 1.0);\n"
"}\0";  

vertex = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertex, 1, &vertexShaderSource, NULL);
glCompileShader(vertex);

fragment = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragment, 1, &fragmentShaderSource, NULL);
glCompileShader(fragment);

shaderID = glCreateProgram();
glAttachShader(shaderID, vertex);
glAttachShader(shaderID, fragment);
glLinkProgram(shaderID);

glDeleteShader(vertex);
glDeleteShader(fragment);  

glUseProgram(shaderID);

//Draw. Bind VAO, activate texture, bind texture, draw arrays and bind arrays

SDL_GL_SwapWindow(mWindow);

I will even give the entire code I used, for those out there who ask for a "minimal reproducible example". The code used here is from "Game Programming in C++: Creating 3D Games (Game Design), 1st Edition, by Sanjay Madhav", if you're wondering or if you want to see more of it.

//game.h
#pragma once

#include "../Header Files/SDL/SDL_types.h"

#include <unordered_map>
#include <string>
#include <vector>

class Game
{
public:
    Game();
    bool Initialize();
    void RunLoop();
    void Shutdown();

    void ProcessInput();

private:
    bool mIsRunning;

    class Renderer* mRenderer;
};

//renderer.h
#pragma once

#include "../Header Files/SDL/SDL.h"

#include <string>
#include <vector>
#include <unordered_map>

class Renderer
{
public:
    Renderer(class Game* game);
    ~Renderer();

    bool Initialize(float screenW, float screenH);
    void Shutdown();

    void Init_Things();
    void Draw();

private:
    class Game* mGame;

    SDL_Window* mWindow;

    SDL_GLContext mContext;

    float mScreenW;
    float mScreenH;

    //Here you should create VAO, VBO, shader and the textureID so that you can use initialize them and use in the drawing function. For example:    
    //unsigned int VAO;
    //unsigned int VBO;
    //unsigned int shader;
    //unsigned int textureID
};
//main.cpp
#include "../Header Files/Game.h"

int main(int argc, char *argv[])
{
    Game game;

    bool success = game.Initialize();

    if (success)
    {
        game.RunLoop();
    }

    game.Shutdown();
    return 0;
};

//game.cpp
#include "../Header Files/Game.h"
#include "../Header Files/Renderer.h"

#include "../Header Files/SDL/SDL.h"

#include <algorithm>

Game::Game() :mRenderer(nullptr), mIsRunning(true) {}

bool Game::Initialize()
{
    if (SDL_Init(SDL_INIT_VIDEO|SDL_INIT_AUDIO) != 0)
    {
        SDL_Log("Unable to initialize SDL: %s", SDL_GetError());
        return false;
    }

    mRenderer = new Renderer(this);
    if (!mRenderer->Initialize(1024.0f, 768.0f))
    {
        SDL_Log("Failed to initialize renderer");
        delete mRenderer;
        mRenderer = nullptr;
        return false;
    }

    mRenderer->Init_Things();
    //I put steps 1° and 2° here.

    
    return true;
}

void Game::RunLoop() 
{
    while (mIsRunning)
    {
        ProcessInput();
        mRenderer->Draw();
        //Here things are drawn.
    }
}

void Game::Shutdown() 
{
    if (mRenderer)
    {
        mRenderer->Shutdown();
    }

    SDL_Quit();
}

void Game::ProcessInput()
{
    SDL_Event event;
    
    while (SDL_PollEvent(&event))
    {
        switch (event.type)
        {
            case SDL_QUIT:
                mIsRunning = false;
                break;
        }
    }
    
    const Uint8* state = SDL_GetKeyboardState(NULL);
    if (state[SDL_SCANCODE_ESCAPE])
    {
        mIsRunning = false;
    }
}

//renderer.cpp
#include "../Header Files/Game.h"
#include "../Header Files/Renderer.h"

#include <algorithm>

#include "../Header Files/GL/glew.h"
#include "../Header Files/SOIL/SOIL.h"

Renderer::Renderer(Game* game) :mGame(game) {}

Renderer::~Renderer() {}

bool Renderer::Initialize(float screenW, float screenH)
{
    mScreenW = screenW;
    mScreenH = screenH;

    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);

    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);

    SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);

    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

    SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);

    mWindow = SDL_CreateWindow("Game", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, static_cast<int>(mScreenW), static_cast<int>(mScreenH), SDL_WINDOW_OPENGL);
    if (!mWindow)
    {
        SDL_Log("Failed to create window: %s", SDL_GetError());
        return false;
    }

    mContext = SDL_GL_CreateContext(mWindow);

    glewExperimental = GL_TRUE;
    if (glewInit() != GLEW_OK)
    {
        SDL_Log("Failed to initialize GLEW.");
        return false;
    }

    glGetError();

    return true;
}

void Renderer::Shutdown()
{
    //Here you can delete the VAO, VBO and shaders. Using:
    //glDeleteVertexArrays(1, &VAO);
    //glDeleteBuffers(1, &VBO);
    //glDeleteProgram(shader);

    SDL_GL_DeleteContext(mContext);
    SDL_DestroyWindow(mWindow);
}

void Renderer::Init_Things()
{
    //Here I would put steps 1° and 2°.

    //One of the tutorials say that I should activate the shader(s) in the initialization (before drawing), like for example:
    //glUseProgram(CubemapShaderID); 

    //glUseProgram(SkyboxShaderID);    
}

void Renderer::Draw()
{
    //glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    //glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    //Here I would put step 3° (Drawing).

    //SDL_GL_SwapWindow(mWindow);
}

TL;DR, as clear as possible: how do I make a SIMPLE skybox, without a camera or shader(s) unless necessary?

And if necessary, please 😭 just a simple shader (without Reflection, Refraction, etc.), and a simple camera implementation (without translation, rotation, etc.)...

  • How do I fix java.lang.ClassNotFoundException: org.lwjgl.glfw.GLFW — Doggo4

How do I fix java.lang.ClassNotFoundException: org.lwjgl.glfw.GLFW

I have not used Java in a while and thought I might try LWJGL with OpenGL and GLFW. I am using Apache Maven as a build system. It lets me compile the program, but when I run it, it says:

Exception in thread "main" java.lang.NoClassDefFoundError: org/lwjgl/glfw/GLFW
    at com.OpenGLTest.app.Main.main(Main.java:25)
Caused by: java.lang.ClassNotFoundException: org.lwjgl.glfw.GLFW
    at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641)
    at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:525)
    ... 1 more

My code is:

// Main.java
package com.OpenGLTest.app;

import org.lwjgl.*;
import org.lwjgl.glfw.*;
import org.lwjgl.opengl.*;
import org.lwjgl.system.*;

import java.nio.*;

import static org.lwjgl.glfw.Callbacks.*;
import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.system.MemoryStack.*;
import static org.lwjgl.system.MemoryUtil.*;

public class Main {

  private static long window;
  private static final int WIDTH = 800;
  private static final int HEIGHT = 600;
  private static final String TITLE = "OpenGL Window";

  public static void main(String[] args) {
    // CHECK
    if (!glfwInit()) {
      System.err.println("ERROR: GLFW IS NOT INSTALLED");
      System.exit(-1);
    }
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);

    glfwWindowHint(GLFW_VISIBLE, 0);
    glfwWindowHint(GLFW_RESIZABLE, 0);

    window = glfwCreateWindow(WIDTH, HEIGHT, TITLE, NULL, NULL);
    if (window == NULL) {
      System.err.println("ERROR: FAILED TO CREATE GLFW WINDOW");
    }

    glfwMakeContextCurrent(window);

    glfwShowWindow(window);
  }
}

My LWJGL version is 3.3.3. My JRE is 17.

I am sorry if the answer is obvious. Somehow the only kind-of answer I found on the internet is http://forum.lwjgl.org/index.php?topic=6994.0, which points to:

https://stackoverflow.com/questions/34413/why-am-i-getting-a-noclassdeffounderror-in-java

How to solve this entirely depends on how you invoke the java command, whether you use an IDE and which one you use, whether you use the Java 9+ Module System or the classpath, and whether you use a Java build system (like Maven, Gradle, Ant+Ivy).
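For this symptom (compiles under Maven, NoClassDefFoundError at runtime), the usual cause is that Maven puts LWJGL on the compile classpath but a plain `java` invocation knows nothing about it. Two standard ways to run with the dependencies on the classpath, sketched with the main class from the question (standard Maven plugins, nothing project-specific):

```shell
# 1) Let Maven run it with the project's own classpath:
mvn compile exec:java -Dexec.mainClass=com.OpenGLTest.app.Main

# 2) Or copy the dependency jars out and build the classpath yourself
#    (copy-dependencies defaults to target/dependency; use ';' instead
#    of ':' as the separator on Windows):
mvn dependency:copy-dependencies
java -cp "target/classes:target/dependency/*" com.OpenGLTest.app.Main
```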

  • How to hide a post-processed mesh outline when/where the mesh is hidden — nils

How to hide a post-processed mesh outline when/where the mesh is hidden

I'm working on setting up an active outline in my 3D engine: a highlight effect for selected 3D characters or scenery on the screen. After working with the stencil buffer and getting some unsatisfactory results (issues with concave shapes, outline thickness varying with distance from the camera, and inconsistencies between my desktop and laptop), I switched to edge detection and frame buffer sampling and got an outline I'm pretty satisfied with.

However, I am not able to hide the outline when the selected mesh is behind another mesh. This makes sense given my process, since I simply render the 2D outline from a frame buffer after rendering the rest of the scene.

Two screen captures of my results are below. The first is a "good" outline, the second is where the outline is seen over a mesh that blocks the outline source.

[screenshots: a correct outline, and an outline incorrectly visible over an occluding mesh]

The rendering process runs like this:

1) Draw only the alpha of the highlighted mesh, capturing a black silhouette in a frame buffer (framebuffer1).

2) Pass the texture from framebuffer1 to a second shader that performs the edge detection. Capture edge in framebuffer2.

3) Render the entire scene.

4) Render the texture from framebuffer2 on top of the scene.

I have a few ideas on how to accomplish this and am hoping to get feedback on their validity, or on simpler or better methods.

First, I've thought of rendering the entire scene to a frame buffer and storing the visible silhouette of the highlighted mesh in the alpha channel (all white save where the highlighted mesh is visible). I would then perform the edge detection on the alpha channel, render the scene frame buffer, and then render the edge on top, resulting in something like this:

[mock-up of the desired result]

To accomplish this, I thought of setting a define only during the render pass of the highlighted object that would draw all black in the alpha for any visible pixels.

My second idea is to use the current render process outlined above, but also store the X, Y and Z coordinates in the R, G and B channels of framebuffer1 when rendering the silhouette of the selected mesh. Edge detection would be performed and stored in framebuffer2, but I would pass on the RGB/XYZ values from the edges of the alpha to the silhouette. Then, when rendering the scene, I would test whether the coordinate is within the edge stored in framebuffer2. If so, I would test the depth of the current fragment to determine whether it is in front of or behind the coordinates extracted from the RGB channels (converted to camera space). If the fragment is in front of the depth coordinates, it would be rendered normally. If it is behind, it would be rendered as the solid outline color. This seems like a more convoluted and error-prone method. I haven't fully grasped packing and unpacking floats in OpenGL yet, but my feeling is I may run into floating-point precision issues when trying to store the XYZ coordinates in the RGB channels.

I'm using LibGDX for this project and would like to support WebGL and OpenGL ES, so none of the solutions involving geometry shaders or newer GLSL functions are available to me. If anyone could comment on my proposed approaches or propose something better I'd really appreciate it.
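For reference, the edge-detection step can be modeled on the CPU: a pixel belongs to the outline if it lies inside the silhouette and at least one neighbour does not. A minimal sketch (the function name is hypothetical, not from the question's code):

```cpp
#include <array>
#include <cstddef>

// CPU model of the edge-detection pass: a pixel is on the outline if it lies
// inside the silhouette mask but at least one 4-neighbour does not.
template <std::size_t W, std::size_t H>
std::array<bool, W * H> detectEdges(const std::array<bool, W * H>& mask) {
    std::array<bool, W * H> edge{};
    for (std::size_t y = 0; y < H; ++y) {
        for (std::size_t x = 0; x < W; ++x) {
            if (!mask[y * W + x]) continue;
            const int dx[4] = {1, -1, 0, 0};
            const int dy[4] = {0, 0, 1, -1};
            for (int k = 0; k < 4; ++k) {
                const int nx = static_cast<int>(x) + dx[k];
                const int ny = static_cast<int>(y) + dy[k];
                // Out-of-bounds neighbours count as "outside the silhouette".
                const bool inside =
                    nx >= 0 && ny >= 0 &&
                    nx < static_cast<int>(W) && ny < static_cast<int>(H) &&
                    mask[static_cast<std::size_t>(ny) * W + static_cast<std::size_t>(nx)];
                if (!inside) { edge[y * W + x] = true; break; }
            }
        }
    }
    return edge;
}
```

A GLSL 330 fragment shader performs the same test with `textureOffset` taps on framebuffer1's alpha channel, which works on both of the approaches described above once the silhouette (or visible silhouette) is in a texture.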

  • ✇Recent Questions - Game Development Stack Exchange
  • How to set matrices for different objects, in OpenGL? (Valtsuh)

How to set matrices for different objects, in OpenGL?

I'm not sure how to do it, since I'm only seeing one instance drawn with its matrix applied (only one of the objects has an energy bar):

image

glDrawElementsInstanced(GL_TRIANGLES, this->energy.indices, GL_UNSIGNED_INT, this->energy.indiceArray, this->energy.instances);

It works fine for objects of the same type, but in the image they are just placeholders.

I'm creating the matrix instance buffer object, along with VBO:

float* matrices = new float[this->energy.instances * 16];

for (int j = 0; j < this->energy.instances; j++) {
    drx::util::MAT4F mat;
    mat.LoadIdentity();
    for (int y = 0, c = 0; y < 4; y++) {
        for (int x = 0; x < 4; x++, c++) {
            matrices[j * 16 + c] = mat.matrix[x][y];
        }
    }
}

glGenBuffers(1, &this->energy.IBO);
glBindBuffer(GL_ARRAY_BUFFER, this->energy.IBO);
glBufferData(GL_ARRAY_BUFFER, this->energy.instances * 16 * sizeof(float), matrices, GL_DYNAMIC_DRAW);
delete[] matrices;
for (int i = 0; i < 4; i++) {
    glEnableVertexAttribArray(2 + i);
    glVertexAttribPointer(2 + i, 4, GL_FLOAT, GL_FALSE, 16 * sizeof(float), (void*)(i * 4 * sizeof(float)));
    glVertexAttribDivisor(2 + i, 1);
}

Updating:

glBindBuffer(GL_ARRAY_BUFFER, this->energy.IBO);
for (int i = 0; i < this->energy.instances; i++) {
    drx::util::MAT4F mat;
    mat.LoadIdentity();
    mat.Translate(this->lab->dob[i].pos.x, this->lab->dob[i].pos.y - 7.5f, 0.0f);
    for (int y = 0, c = 0; y < 4; y++) {
        for (int x = 0; x < 4; x++, c++) {
            this->matrix[c] = mat.matrix[x][y];
        }
    }
    glBufferSubData(GL_ARRAY_BUFFER, 16 * i * sizeof(float), 16 * sizeof(float), this->matrix);
}

And drawing:

glUseProgram(this->energy.Program);
this->energy.vertex.SetMVP(ortho);
glBindVertexArray(this->energy.VAO);
glDrawElements(GL_TRIANGLES, this->energy.indices, GL_UNSIGNED_INT, this->energy.indiceArray);
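One thing worth double-checking in the code above is the flattening order. With the matrix[column][row] convention implied by Translate writing into matrix[3][0..2], a column-major upload must iterate columns in the outer loop; the nested loops above iterate y outermost over matrix[x][y] and therefore emit the transpose. A minimal sketch of the intended flattening (Mat4 here is a hypothetical stand-in for drx::util::MAT4F, whose source isn't shown):

```cpp
#include <array>

// A 4x4 matrix stored as m[column][row], matching the MAT4F usage above
// (translation written into column 3).
struct Mat4 {
    float m[4][4] = {};
    static Mat4 identity() {
        Mat4 r;
        for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0f;
        return r;
    }
    void translate(float x, float y, float z) { m[3][0] = x; m[3][1] = y; m[3][2] = z; }
};

// Flatten to the memory order OpenGL expects for a column-major mat4
// attribute or uniform: column 0 first, each column as 4 consecutive floats.
std::array<float, 16> flattenColumnMajor(const Mat4& a) {
    std::array<float, 16> out{};
    int c = 0;
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            out[c++] = a.m[col][row];
    return out;
}
```

Note also that per-instance attributes set up with glVertexAttribDivisor only advance per instance during an instanced call such as glDrawElementsInstanced; a plain glDrawElements call draws everything with the first instance's matrix, which would match the "only one object has an energy bar" symptom.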
  • How to render frustum shape for debug visualization? (Josh)

How to render frustum shape for debug visualization?

I've been implementing frustum culling in an OpenGL application that appears to be working correctly.

The issue is that when I try to render the frustum shape from the camera, my debug lines appear to only outline the application window borders. When the application starts I see four yellow debug lines in a rectangular pattern around the OpenGL window, but nothing else related to the frustum.

I managed to get the frustum shape to render in a previous attempt, but it would not stay with the camera's movement.

GitHub link: https://github.com/JoshTyra/OpenGLFrustum_Culling

I have tried different methods that I could find online and in papers covering this topic. Bear in mind some of the code is experimental.

Is it possible to use a pre-existing texture buffer containing vertex data to initialise a vertex buffer for rendering in OpenGL v4.6?

I'm generating a heightmap in a compute shader in OpenGL v4.6 and storing it to a texture.

Lets say I actually store the full vertex data in that texture instead of just the height, which is a trivial change, and that I could easily also create an index buffer in a separate texture/SSBO at the same time.

Is there a way to use this pre-existing texture/SSBO data to create a vertex and index buffer directly if I made sure the memory layouts were correct?

It seems wasteful to pull the data back from GPU just to copy it to a new vertex array on CPU and then push back to GPU, when I could just get the CPU code to tell the GPU that this data is the vertex array instead and never have the data leave the GPU... But I have no idea how I'd tell OpenGL to map one to the other.

Development:

I've found info about copying buffer data from the one arbitrary buffer type to another, so I've given that a go. It's not as efficient as simply calling the texture buffer a vertex buffer, but this only needs to happen once, so it's a good enough solution. However, I'm getting a black screen...

This is my VAO setup code:


    const size_t num_vertices = _map_terrain_texture_shape.x * _map_terrain_texture_shape.y;
    const size_t total_vertex_position_bytes = num_vertices * sizeof(glm::vec4);
    const size_t total_vertex_colour_bytes = num_vertices * sizeof(glm::vec4);
    const size_t total_vertex_bytes = total_vertex_position_bytes + total_vertex_colour_bytes;

    std::vector<uint32_t> indices = _make_indices(_map_terrain_texture_shape);
    const size_t total_index_bites = indices.size() * sizeof(uint32_t);
    glGenVertexArrays(1, &_vao);
    glGenBuffers(1, &_vbo);
    glGenBuffers(1, &_ebo);

    glBindVertexArray(_vao);

    glBindBuffer(GL_ARRAY_BUFFER, _vbo);
    glBufferData(GL_ARRAY_BUFFER, total_vertex_bytes, nullptr, GL_STATIC_DRAW);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, total_index_bites, indices.data(), GL_STATIC_DRAW);

    glEnableVertexAttribArray(VERTEX_POSITION_ATTRIB_INDEX);
    glEnableVertexAttribArray(VERTEX_COLOUR_ATTRIB_INDEX);

    // vertex draw positions
    glVertexAttribPointer(VERTEX_POSITION_ATTRIB_INDEX, glm::vec4::length(), GL_FLOAT, GL_FALSE, sizeof(glm::vec4), (void*)0);
    // vertex colours
    glVertexAttribPointer(VERTEX_COLOUR_ATTRIB_INDEX, glm::vec4::length(), GL_FLOAT, GL_FALSE, sizeof(glm::vec4), (void*)total_vertex_position_bytes);

    glDisableVertexAttribArray(VERTEX_POSITION_ATTRIB_INDEX);
    glDisableVertexAttribArray(VERTEX_COLOUR_ATTRIB_INDEX);

    glBindVertexArray(0);

And the code running the compute shader that populates the texture buffers (image2Ds) that I copy into vertex buffer looks like this:

    _map_terrain_mesh_shader->use();

    _main_state.terrain_generator->map_terrain_heightmap_texture->bind_for_active_shader(_map_terrain_mesh_shader->id, 0, "i_heightmap_texture");
    _main_state.terrain_generator->map_terrain_vertex_position_texture->bind_for_active_shader(_map_terrain_mesh_shader->id, 1, "o_vertex_position_texture");
    _main_state.terrain_generator->map_terrain_vertex_colour_texture->bind_for_active_shader(_map_terrain_mesh_shader->id, 2, "o_vertex_colour_texture");

    _map_terrain_mesh_shader->dispatch(glm::uvec3{ _map_terrain_texture_shape, 1});

    const size_t num_vertices = _map_terrain_texture_shape.x * _map_terrain_texture_shape.y;
    const size_t total_vertex_position_bytes = num_vertices * sizeof(glm::vec4);
    const size_t total_vertex_colour_bytes = num_vertices * sizeof(glm::vec4);

    const auto position_texture_id = _main_state.terrain_generator->map_terrain_vertex_position_texture->id;
    const auto colour_texture_id = _main_state.terrain_generator->map_terrain_vertex_colour_texture->id;

    glBindBuffer(GL_COPY_WRITE_BUFFER, _vbo);

    glBindBuffer(GL_COPY_READ_BUFFER, position_texture_id);
    glCopyBufferSubData(position_texture_id, _vbo,
                        0, 0,
                        total_vertex_position_bytes);

    glBindBuffer(GL_COPY_READ_BUFFER, colour_texture_id);
    glCopyBufferSubData(colour_texture_id, _vbo,
                        0, total_vertex_position_bytes,
                        total_vertex_colour_bytes);

    glBindBuffer(GL_COPY_READ_BUFFER, 0);
    glBindBuffer(GL_COPY_WRITE_BUFFER, 0);

I have checked that this compute shader produces the correct results by using these buffers in a raytracing renderer I already had setup. That is now using this data instead of the original heightmap data.

I've gone for vec4 for each just to be sure I don't run into packing issues or whatever while I get it working, and I'm purposely not interlacing the position/colour data. I'm keeping it as a block of each.

Now, assuming my compute shader is doing its job correctly, can anyone tell me if I'm doing this right?
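As a side check, the index buffer for a W x H vertex grid laid out row by row (which is presumably what _make_indices produces; its source isn't shown, so this is an assumed reconstruction) can be built and verified like this:

```cpp
#include <cstdint>
#include <vector>

// Build triangle-list indices for a w x h grid of vertices stored row by
// row: two triangles per cell, (w-1)*(h-1)*6 indices total.
std::vector<std::uint32_t> makeGridIndices(std::uint32_t w, std::uint32_t h) {
    std::vector<std::uint32_t> idx;
    if (w < 2 || h < 2) return idx;
    idx.reserve(std::size_t(w - 1) * (h - 1) * 6);
    for (std::uint32_t y = 0; y + 1 < h; ++y) {
        for (std::uint32_t x = 0; x + 1 < w; ++x) {
            const std::uint32_t i = y * w + x; // top-left vertex of the cell
            // First triangle of the cell.
            idx.push_back(i); idx.push_back(i + w); idx.push_back(i + 1);
            // Second triangle.
            idx.push_back(i + 1); idx.push_back(i + w); idx.push_back(i + w + 1);
        }
    }
    return idx;
}
```

With the two vertex streams stored as one block of positions followed by one block of colours (as in the VAO setup above), the expected index count and the maximum index are cheap invariants to assert before blaming the buffer copy for a black screen.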

  • Matrices for OpenGL shaders (Valtsuh)

Matrices for OpenGL shaders

So I'm trying to figure out the model, view and projection matrices. I can, with some effort, find my drawings (3x3x3 structure of cubes) in 3D space and it looks like:

result

The problem is that the cubes seem to be offset from the center (where the lines meet at 0, 0, 0), which is roughly where they're initially positioned, and they don't stick to it when the camera moves. While moving the camera, the cube movement seems off and inverted.

The green area is the far end of the view frustum, and the near end is where the black lines meet.

So far, from what I've gathered, I'm passing to the shaders:

// The model matrix
model.position = { x * scale, y * scale, z * scale }; // 3 x 3 x 3, scale = 50.0
drx::util::MAT4F mModel;
drx::util::MAT4F mSize;
mModel.LoadIdentity();
mModel.Translate(model.position.x, model.position.y, model.position.z);
mSize.Scale(scale, scale, scale);
mModel = mModel.Add(mSize);

// from the MAT4F structure
void Translate(float x, float y, float z) {
    this->matrix[3][0] = x;
    this->matrix[3][1] = y;
    this->matrix[3][2] = z;
}

void Scale(float x, float y, float z) {
    this->matrix[0][0] = x;
    this->matrix[1][1] = y;
    this->matrix[2][2] = z;
}

// The view matrix
m.LoadIdentity();
m.matrix[0][0] = this->right.x;
m.matrix[1][0] = this->right.y;
m.matrix[2][0] = this->right.z;
m.matrix[0][1] = this->up.x;
m.matrix[1][1] = this->up.y;
m.matrix[2][1] = this->up.z;
m.matrix[0][2] = this->front.x; //
m.matrix[1][2] = this->front.y; // Direction vector
m.matrix[2][2] = this->front.z; //
m.matrix[3][0] = this->position.x; //
m.matrix[3][1] = this->position.y; // Eye or camera position vector
m.matrix[3][2] = this->position.z; //
// The projection matrix
float sf = tanf(fov / 2.0f);
this->matrix[0][0] = 1.0f / (ar * sf);
this->matrix[1][1] = 1.0f / sf;
this->matrix[2][2] = -((f + n) / (f - n));
this->matrix[2][3] = -1.0f;
this->matrix[3][2] = -((2.0f * f * n) / (f - n));
this->matrix[3][3] = 1.0f;

And on vertex shader, I have:

#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 inColor;
layout (location = 2) in vec2 texCoord;

out vec4 ourColor;
out vec2 texCoords;

uniform vec4 myColor;
uniform float scale;
uniform mat4 view;
uniform mat4 projection;
uniform mat4 model;

uniform vec3 myOtherColor;

void main()
{
    vec4 myPos = vec4(aPos, 1.0);
    gl_Position = projection * view * model * myPos;
    ourColor = vec4(inColor, 1.0);
    texCoords = texCoord;
} 
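For comparison, a standard OpenGL perspective matrix in the same [column][row] convention matches the snippet above except that element [3][3] is 0, not 1; setting it to 1 skews the perspective divide. A small self-contained sketch that can be checked numerically:

```cpp
#include <array>
#include <cmath>

struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<std::array<float, 4>, 4>; // [column][row]

// Standard right-handed OpenGL perspective projection. Note m[3][3] stays 0.
Mat4 perspective(float fovYRadians, float ar, float n, float f) {
    const float sf = std::tan(fovYRadians / 2.0f);
    Mat4 m{};
    m[0][0] = 1.0f / (ar * sf);
    m[1][1] = 1.0f / sf;
    m[2][2] = -(f + n) / (f - n);
    m[2][3] = -1.0f;                      // w' = -z_eye drives the divide
    m[3][2] = -(2.0f * f * n) / (f - n);
    return m;
}

Vec4 mul(const Mat4& m, Vec4 v) {
    return {
        m[0][0]*v.x + m[1][0]*v.y + m[2][0]*v.z + m[3][0]*v.w,
        m[0][1]*v.x + m[1][1]*v.y + m[2][1]*v.z + m[3][1]*v.w,
        m[0][2]*v.x + m[1][2]*v.y + m[2][2]*v.z + m[3][2]*v.w,
        m[0][3]*v.x + m[1][3]*v.y + m[2][3]*v.z + m[3][3]*v.w,
    };
}
```

With n = 1 and f = 100, a point on the near plane maps to NDC z = -1 and one on the far plane to z = +1. Separately, the view matrix in the question stores the camera position directly in m[3][*]; a view matrix is the inverse of the camera's transform, so that translation should be -R^T * eye rather than the raw eye position.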
  • The view matrix finally explained (Raouf)

The view matrix finally explained

I must say that I am really confused by how a view matrix is constructed and works.

First, there are 3 terms: view matrix, lookat matrix, and camera transformation matrix. Are those 3 the same, or different things? From what I understand, the camera transformation matrix is basically the model matrix of the camera, and the view matrix is the inverse of that. The lookat matrix is basically for going from world space to view space, and I think I understand how it works (doing dot products for projecting a point into another coordinate system).

I am also confused by the fact that sometimes it seems like the view matrix is built with translation and dot products, and at other times with translation and rotation (with cos and sin).

There are also quaternions. When you convert a quaternion to a matrix, what kind of matrix is this?

Can someone explain to me how it really works, or point me towards a good resource.

Thank you.
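To make the relationship concrete: the three terms name two distinct matrices. The camera transformation matrix places the camera in the world; the view matrix is its inverse; and a lookat matrix is simply one way of building that inverse directly. The dot products seen in lookat code are the transposed rotation applied to the eye position, and a quaternion converts to a plain rotation matrix that becomes the rotation part of whichever of the two is being built. A minimal sketch, assuming an orthonormal right/up/back camera basis:

```cpp
#include <array>

struct Vec3 { float x, y, z; };
using Mat4 = std::array<std::array<float, 4>, 4>; // [column][row], column-major

// View matrix = inverse of the camera's model matrix: transpose the rotation
// (its rows become the basis vectors) and translate by -R^T * eye.
Mat4 viewMatrix(Vec3 right, Vec3 up, Vec3 back, Vec3 eye) {
    Mat4 m{};
    m[0][0] = right.x; m[1][0] = right.y; m[2][0] = right.z;
    m[0][1] = up.x;    m[1][1] = up.y;    m[2][1] = up.z;
    m[0][2] = back.x;  m[1][2] = back.y;  m[2][2] = back.z;
    // These dot products are exactly the "translation and dot products" form.
    m[3][0] = -(right.x*eye.x + right.y*eye.y + right.z*eye.z);
    m[3][1] = -(up.x*eye.x    + up.y*eye.y    + up.z*eye.z);
    m[3][2] = -(back.x*eye.x  + back.y*eye.y  + back.z*eye.z);
    m[3][3] = 1.0f;
    return m;
}

Vec3 transformPoint(const Mat4& m, Vec3 p) {
    return {
        m[0][0]*p.x + m[1][0]*p.y + m[2][0]*p.z + m[3][0],
        m[0][1]*p.x + m[1][1]*p.y + m[2][1]*p.z + m[3][1],
        m[0][2]*p.x + m[1][2]*p.y + m[2][2]*p.z + m[3][2],
    };
}
```

Transforming the eye position by the result yields the origin, which is the defining property of a view matrix: the camera sits at the origin of view space.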

Custom matrix structure with OpenGL shaders

I have a MAT4 structure.

struct MAT4 {
    MAT4() {
        int c = 0;
        for (int x = 0; x < 4; x++) {
            for (int y = 0; y < 4; y++) {
                this->matrix[x][y] = 0.0;
                this->pointer[c] = this->matrix[x][y];
                c++;
            }
        }
    }

    double matrix[4][4];
    double pointer[16]; // for opengl

    void LoadIdentity() {
        this->matrix[0][0] = 1.0;
        this->matrix[1][1] = 1.0;
        this->matrix[2][2] = 1.0;
        this->matrix[3][3] = 1.0;
    }

    void RotateX(double x, bool rads = false) {
        if (rads) x *= drx::rad;
        this->matrix[1][1] = cos(x);
        this->matrix[2][1] = -sin(x);
        this->matrix[2][2] = cos(x);
        this->matrix[1][2] = sin(x);
    }
    void RotateY(double y, bool rads = false) {
        if (rads) y *= drx::rad;
        this->matrix[0][0] = cos(y);
        this->matrix[2][0] = sin(y);
        this->matrix[2][2] = cos(y);
        this->matrix[0][2] = -sin(y);
    }
    void RotateZ(double z, bool rads = false) {
        if (rads) z *= drx::rad;
        this->matrix[0][0] = cos(z);
        this->matrix[1][0] = -sin(z);
        this->matrix[1][1] = cos(z);
        this->matrix[0][1] = sin(z);
    }

    void Translate(double x, double y, double z) {
        this->matrix[3][0] = x;
        this->matrix[3][1] = y;
        this->matrix[3][2] = z;
    }

    void Scale(double x, double y, double z) {
        this->matrix[0][0] = x;
        this->matrix[1][1] = y;
        this->matrix[2][2] = z;
    }

    double* Pointer() {
        int c = 0;
        for (int x = 0; x < 4; x++) {
            for (int y = 0; y < 4; y++) {
                this->pointer[c] = this->matrix[x][y];
                c++;
            }
        }

        return this->pointer;
    }

    void Dump() {
        for (int x = 0; x < 4; x++) {
            for (int y = 0; y < 4; y++) {
                std::cout << "\n [" << x << ", " << y << "]: " << this->matrix[x][y];
            }
        }
    }
};

Which I'm then trying to pass onto OpenGL:

drx::util::MAT4 trans;
trans.LoadIdentity();
trans.RotateY(45.0, true);
trans.Dump(); // outputs values as should
glUseProgram(this->P);
glUniformMatrix4dv(glGetUniformLocation(this->P, "transform"), 1, GL_FALSE, trans.Pointer());
glUseProgram(0);

My shader looks like:

#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 inColor;

out vec3 ourColor;

uniform mat4 transform;

void main()
{
    gl_Position = transform * vec4(aPos, 1.0);
    ourColor = inColor;
} 

If I take out the transforms from shaders, my triangle draws fine. But if I use the transforms my triangle disappears, is it offscreen or what could be happening?

Trying to follow this tutorial on Youtube.

Update: glGetError() gives 1282

std::cout << "\n " << glGetError(); // 0
int loc = glGetUniformLocation(this->P, "transform");
std::cout << "\n " << glGetError(); // 0
glUniformMatrix4dv(loc, 1, GL_FALSE, trans.Pointer());
std::cout << "\n " << glGetError(); // 1282

Update 2: Tried with glm, same result, no drawing.

Update 3: location for uniform variable returns -1

int loc = glGetUniformLocation(this->P, "transform"); // -1

/* defs */
extern PFNGLGETUNIFORMLOCATIONPROC glGetUniformLocation;
glGetUniformLocation = (PFNGLGETUNIFORMLOCATIONPROC)wglGetProcAddress("glGetUniformLocation");  
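Two observations on the updates above, offered as likely rather than certain causes: error 1282 is GL_INVALID_OPERATION, and glUniformMatrix4dv is only valid for double-precision (dmat4) uniforms, which require GL 4.0. The shader's `uniform mat4 transform` is single precision, so the matching call is glUniformMatrix4fv with float data. A small adapter for the double-backed MAT4 above:

```cpp
#include <array>

// Narrow a double[4][4] (stored as matrix[column][row], as in MAT4 above)
// into the float, column-major array that glUniformMatrix4fv expects.
// Uploading with glUniformMatrix4dv to a float mat4 raises
// GL_INVALID_OPERATION (1282).
std::array<float, 16> toFloatColumnMajor(const double (&m)[4][4]) {
    std::array<float, 16> out{};
    int c = 0;
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            out[c++] = static_cast<float>(m[col][row]);
    return out;
}
// Usage (sketch):
// glUniformMatrix4fv(loc, 1, GL_FALSE, toFloatColumnMajor(trans.matrix).data());
```

The -1 location in Update 3 is also worth checking against the link status of this->P (glGetProgramiv with GL_LINK_STATUS), since a program that failed to link reports no uniform locations.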

How would I implement multithreading to load in textures while displaying a loading screen in OpenGL?

So basically I'm stumped trying to make it so that when I load in a texture my entire application doesn't freeze. I have tried using futures and running the whole loading process on another thread, both to no avail.

Using futures I couldn't figure out why my program was still hanging after a loadTexture call had been made so I tried another approach which was to run all of my load calls in my scene on a separate thread. That didn't work either because it didn't have the opengl context of my main thread.

Does anyone know of any practical examples of multithreaded loading and passing said data back to the main thread after it loads the image from ssd -> ram to be loaded to the gpu?

I have looked all around but haven't seen any practical examples of multithreaded loading for opengl.

Below is my resourcemanager code, if you have an idea of how I could implement multithreaded rendering to it it would be greatly appreciated but I am mainly asking this question to be pointed in the right direction of learning resources that could lead me to solve this problem on my own.

Thanks for any help.

#include "ResourceManager.h"

// Instantiate static variables
std::map<std::string, Texture>    ResourceManager::Textures;
std::map<std::string, Shader>       ResourceManager::Shaders;


Shader ResourceManager::LoadShader(const char *vShaderFile, const char *fShaderFile, std::string name)
{
    Shaders[name] = Shader(vShaderFile, fShaderFile);
    return Shaders[name];
}

Shader& ResourceManager::GetShader(std::string name)
{
    return Shaders[name];
}

Texture ResourceManager::LoadTexture(const char *file, bool alpha, std::string name)
{
    Textures[name] = loadTextureFromFile(file, alpha);
    return Textures[name];
}

Texture& ResourceManager::GetTexture(std::string name)
{
    return Textures[name];
}

void ResourceManager::Clear()
{
    // (properly) delete all shaders    
    for (auto iter : Shaders)
        iter.second.Delete();
    // (properly) delete all textures
    for (auto iter : Textures)
        glDeleteTextures(1, &iter.second.ID);
}

Texture ResourceManager::loadTextureFromFile(const char *file, bool alpha)
{
    // create texture object
    Texture texture;
    if (alpha)
    {
        texture.Internal_Format = GL_RGBA;
        texture.Image_Format = GL_RGBA;
    }
    // load image
    int width, height, nrChannels;
    unsigned char* data = stbi_load(file, &width, &height, &nrChannels, 0);
    // now generate texture
    texture.Generate(width, height, data); //Does all opengl calls to generate texture
    // and finally free image data
    stbi_image_free(data);
    return texture;
}
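The usual shape of the solution: keep every GL call on the thread that owns the context, and move only the file I/O and stbi decoding to a worker. The worker pushes decoded pixels into a thread-safe queue, and the render loop drains the queue each frame (while drawing the loading screen) and performs the GL upload there. A minimal sketch of the handoff (DecodedImage and UploadQueue are illustrative names, not part of the code above):

```cpp
#include <mutex>
#include <queue>
#include <string>
#include <vector>

// What the worker thread produces: decoded pixels, no GL objects.
struct DecodedImage {
    std::string name;
    int width = 0, height = 0;
    std::vector<unsigned char> pixels; // what stbi_load would return
};

// Thread-safe handoff between the loader thread and the GL thread.
class UploadQueue {
public:
    void push(DecodedImage img) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(std::move(img));
    }
    // Called on the main thread each frame; returns false when empty.
    bool tryPop(DecodedImage& out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty()) return false;
        out = std::move(queue_.front());
        queue_.pop();
        return true;
    }
private:
    std::mutex mutex_;
    std::queue<DecodedImage> queue_;
};
```

The worker thread runs stbi_load and push(); the render loop calls tryPop() once or a few times per frame and only then calls texture.Generate(...) with the pixels, so the GL context is only ever touched from the main thread. Sharing GL contexts across threads is possible but much harder to get right, which is why most engines use this decode-then-upload split.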
  • OpenGL strange depth testing (Valtsuh)

OpenGL strange depth testing

Depth testing isn't working for me (hopefully the images below illustrate the problem; I'll gladly describe more when asked), and I can't seem to figure out why.

GL initialization:

glEnable(GL_CULL_FACE);
//glFrontFace(GL_CW);               
glDisable(GL_LIGHTING);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glShadeModel(GL_FLAT);
glEnable(GL_DEPTH_TEST);
glDepthMask(true);
glDepthFunc(GL_LESS);
//glDepthRange(-100.0, 100.0);

Pre-drawing:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(this->camera.zoom, drx::gfx::ogl::ar, -100.0, 100.0);
this->camera.CameraLookAt();

/* Camera function */
drx::util::SPOT center = this->position.Add(this->front);
gluLookAt(this->position.x, this->position.y, this->position.z, center.x, center.y, center.z, this->up.x, this->up.y, this->up.z);

Some drawing:

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tx.id);
glBegin(GL_TRIANGLE_FAN);
//glColor3ub(tri.color.red, tri.color.green, tri.color.blue);

glTexCoord2f(u[0], v[0]);
glVertex3d(a.x, a.y, a.z);
glTexCoord2f(u[1], v[1]);
glVertex3d(b.x, b.y, b.z);
glTexCoord2f(u[2], v[2]);
glVertex3d(c.x, c.y, c.z);

glEnd();
glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);

Which then result to:

result1

result2

If I disable depth testing and look at from just the right angle, I get:

result3

Update 1: Tried to set glClearDepth, still same.

glClearDepth(200.0);
  • When compiling shaders in OpenGL, I get random error messages (terraquad)

When compiling shaders in OpenGL, I get random error messages

I am trying to follow LearnOpenGL while coding in the Zig language, and something very odd is that sometimes my shader compilation fails even though I changed nothing between runs of the app. I also don't understand the errors. By the way, I use mach-glfw and zigglgen.

Some errors I get:

error: [opengl] Failed to compile shader: assets/shaders/shader.frag
  0(8) : error C0000: syntax error, unexpected '=' at token "="

error: [opengl] Failed to link shader program:
  Fragment info
-------------
0(8) : error C0000: syntax error, unexpected '=' at token "="
(0) : error C2003: incompatible options for link
error: [opengl] Failed to compile shader: assets/shaders/shader.frag
  0(9) : error C0000: syntax error, unexpected '(', expecting ';' at token "("

error: [opengl] Failed to link shader program:
  Fragment info
-------------
0(9) : error C0000: syntax error, unexpected '(', expecting ';' at token "("
(0) : error C2003: incompatible options for link

error: [opengl] Failed to compile shaders/shader.vert:
0(6) : error C0000: syntax error, unexpected $undefined at token "<undefined>"

Here is the code:

// Vertex shader
#version 330 core
layout (location = 0) in vec3 aPos;

out vec4 vertexColor;

void main() {
    gl_Position = vec4(aPos.xyz, 1.0);
    vertexColor = vec4(0.5, 0.0, 0.0, 1.0);
}
// Fragment shader
#version 330 core
out vec4 FragColor;
  
in vec4 vertexColor;

void main() {
    FragColor = vertexColor;
}
// Shortened main code
const std = @import("std");
const builtin = @import("builtin");
const glfw = @import("glfw");
const gl = @import("gl");
const App = @import("App.zig");

var gl_procs: gl.ProcTable = undefined;

fn glfwErrorCallback(err: glfw.ErrorCode, desc: [:0]const u8) void {
    ...
}

fn glfwFramebufferSizeCallback(_: glfw.Window, w: u32, h: u32) void {
    gl.Viewport(0, 0, @intCast(w), @intCast(h));
}

pub fn main() !void {
    // GLFW initialization
    if (!glfw.init(.{})) {
        ...
    }
    defer glfw.terminate();

    // Window creation
    const window = glfw.Window.create(1280, 720, "example opengl app", null, null, .{
        ...
    }) orelse {
        ...
    };
    defer window.destroy();
    glfw.makeContextCurrent(window);

    // OpenGL preparation
    if (!gl_procs.init(glfw.getProcAddress)) {
        ...
    }
    gl.makeProcTableCurrent(&gl_procs);
    window.setFramebufferSizeCallback(glfwFramebufferSizeCallback);

    // App startup
    var app = App{
        .window = window,
    };
    app.run() catch |err| {
        ...
    };
}

// shortened App code
...

window: glfw.Window,
vertices: [12]f32 = [_]f32{
    0.5,  0.5,  0.0,
    0.5,  -0.5, 0.0,
    -0.5, -0.5, 0.0,
    -0.5, 0.5,  0.0,
},
indices: [6]gl.uint = [_]gl.uint{
    0, 1, 3,
    1, 2, 3,
},

fn createCompiledShader(file: []const u8, stype: Shader.ShaderType) !Shader {
    const shader = try Shader.fromFile(file, stype, std.heap.raw_c_allocator);
    if (shader.compile()) |msg| {
        std.log.err("[opengl] Failed to compile shader: {s}\n  {s}", .{ file, msg });
    }
    return shader;
}

pub fn run(this: App) !void {
    // == STARTUP

    // Create vertex array object
    ...

    // Create vertex buffer object
    ...

    // Create element buffer object
    ...

    // Vertex attributes
    ...

    // Create shaders
    const vertex_shader = try createCompiledShader("assets/shaders/shader.vert", .Vertex);
    const fragment_shader = try createCompiledShader("assets/shaders/shader.frag", .Fragment);

    // Create shader program
    const shader_program = ShaderProgram.init(&.{
        vertex_shader,
        fragment_shader,
    });
    if (shader_program.link()) |msg| {
        std.log.err("[opengl] Failed to link shader program:\n  {s}", .{msg});
    }
    // Activate program and delete shaders
    shader_program.use();
    vertex_shader.delete();
    fragment_shader.delete();

    // == RENDER LOOP

    while (!this.window.shouldClose()) {
        gl.ClearColor(0.5, 0.3, 0.1, 1.0);
        gl.Clear(gl.COLOR_BUFFER_BIT);

        shader_program.use();
        gl.BindVertexArray(vao);
        gl.DrawElements(gl.TRIANGLES, 6, gl.UNSIGNED_INT, 0);

        this.window.swapBuffers();
        glfw.pollEvents();
    }
}
// shortened Shader class
...

pub const ShaderType = enum {
    Vertex,
    Fragment,
};

/// The source code of the shader.
source: []const u8,
gl_object: gl.uint,

fn createOpenglShader(src: []const u8, shader_type: ShaderType) gl.uint {
    const stype: gl.uint = switch (shader_type) {
        .Vertex => gl.VERTEX_SHADER,
        .Fragment => gl.FRAGMENT_SHADER,
    };
    const object: gl.uint = gl.CreateShader(stype);
    gl.ShaderSource(object, 1, @ptrCast(&src.ptr), null);
    return object;
}

/// Creates a `Shader` object from the source string.
pub fn fromString(src: []const u8, shader_type: ShaderType) Shader {
    ...
}

/// Creates a `Shader` object from the file contents of the given `path`.
/// The file path has to be relative to the folder where the executable resides.
/// If you want to use a file outside of that folder, open the file yourself and pass its contents to `Shader.fromString`.
/// For some reason, you can't pass a `GeneralPurposeAllocator` since the program segfaults if you do.
pub fn fromFile(path: []const u8, shader_type: ShaderType, alloc: std.mem.Allocator) !Shader {
    var arena = std.heap.ArenaAllocator.init(alloc);
    defer arena.deinit();
    const allocator = arena.allocator();

    var root_dir = try std.fs.openDirAbsolute(try std.fs.selfExeDirPathAlloc(allocator), .{});
    defer root_dir.close();

    var file = try root_dir.openFile(path, .{});
    defer file.close();

    const buf = try allocator.alloc(u8, try file.getEndPos());
    _ = try file.readAll(buf);

    return .{
        .source = buf,
        .gl_object = createOpenglShader(buf, shader_type),
    };
}

/// Compiles the shader. If compilation succeeded, the return value will be null.
/// Otherwise, the return value will be the error message.
pub fn compile(self: Shader) ?[256]u8 {
    gl.CompileShader(self.gl_object);
    var success: gl.int = undefined;
    gl.GetShaderiv(self.gl_object, gl.COMPILE_STATUS, &success);
    if (success == 0) {
        var info_log = std.mem.zeroes([256]u8);
        gl.GetShaderInfoLog(self.gl_object, 256, null, &info_log);
        return info_log;
    }
    return null;
}

pub fn delete(self: Shader) void {
    ...
}
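One plausible cause of the nondeterminism, offered as a guess: `fromFile` reads the file into a buffer sized exactly to the file contents, so the source is not NUL-terminated, yet `gl.ShaderSource` is called with a null length array and therefore scans for a terminator that may or may not appear before garbage bytes. Passing explicit lengths removes the dependence on leftover heap contents. A sketch of the idea in C++, with a stub standing in for glShaderSource so it can be checked without a GL context:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Records what a glShaderSource call would receive; real GL copies the
// strings at call time in exactly this way.
struct SourceUpload {
    std::vector<std::string> strings;
};

SourceUpload glShaderSourceStub(int /*shader*/, int count,
                                const char* const* sources, const int* lengths) {
    SourceUpload u;
    for (int i = 0; i < count; ++i) {
        // With an explicit length we never scan for '\0', so a buffer read
        // straight from a file (no terminator) is handled safely.
        u.strings.emplace_back(sources[i], static_cast<std::size_t>(lengths[i]));
    }
    return u;
}
```

In the Zig code the equivalent fix would be to pass a pointer to an int holding `src.len` as the fourth argument of `gl.ShaderSource` instead of null (assuming zigglgen mirrors the C signature, where the fourth parameter is the array of string lengths).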

How do you handle shaders/graphics while remaining cross-platform?

I'm building a C++ based game engine, and I have my ECS complete as well as some basic components for stuff like graphics and audio. However, I'm currently using a custom interface on top of SFML with GLSL based shaders and OpenGL based graphics. I'd like to switch to a graphics solution where I can switch between OpenGL, Vulkan, Direct3D, and Metal without rewriting large portions of my code. The graphics API itself isn't a problem, since I can easily build an interface on top of it and reimplement it for each desired platform. My issue, however, is with the shaders.

I'm currently writing my test shaders in GLSL targeting OpenGL. I know I can use the SPIR-V translator to generate HLSL/MSL/Vulkan-Style GLSL from my OpenGL source code, but I'm not sure how that will work when I start having to set uniforms, handle shader buffers, and the like.

The big solution I've heard of is generating shaders at runtime, which is what Godot does. However, my engine is very performance-oriented, so I'd like to precompile all my shaders if possible. I've also seen that Unity uses an HLSL-to-GLSL translator and that SPIRV-Cross is very common. However, I'm worried about how these will interact with setting uniforms and whatnot, and I'm very concerned about their impact on performance.
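On the uniform-setting worry specifically: a common way to keep precompiled shaders portable is to avoid loose uniforms and put all parameters in uniform blocks with explicit std140 layout, since SPIRV-Cross generally preserves block layouts when emitting HLSL constant buffers or MSL, and the CPU side then just fills one byte buffer per block regardless of backend. The std140 rules are easy to get wrong (a vec3 pads to 16-byte alignment), so here is a simplified offset calculator covering scalars, vec3, vec4 and mat4 only (array and nested-struct rules omitted):

```cpp
#include <cstddef>

// Round an offset up to the next multiple of `alignment`.
std::size_t alignTo(std::size_t offset, std::size_t alignment) {
    return (offset + alignment - 1) / alignment * alignment;
}

enum class Std140Type { Float, Vec3, Vec4, Mat4 };

// Walks members in declaration order and reports each member's byte offset
// under (simplified) std140 rules.
struct Std140Layout {
    std::size_t cursor = 0;
    std::size_t add(Std140Type t) {
        std::size_t align = 0, size = 0;
        switch (t) {
            case Std140Type::Float: align = 4;  size = 4;  break;
            case Std140Type::Vec3:  align = 16; size = 12; break; // 16-byte base alignment
            case Std140Type::Vec4:  align = 16; size = 16; break;
            case Std140Type::Mat4:  align = 16; size = 64; break; // 4 column vectors
        }
        const std::size_t off = alignTo(cursor, align);
        cursor = off + size;
        return off;
    }
};
```

A scalar declared right after a vec3 fits in its padding (offset 28 after a vec3 at 16), which is exactly the kind of detail that differs from the naive tightly-packed layout and silently corrupts uniform data if the CPU struct doesn't match.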
