
Spacebar not working consistently in Unity

Sometimes it works but sometimes it just doesn't. Maybe it has something to do with gravity or drag?

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class BirdScript : MonoBehaviour
{
    public Rigidbody2D myRigidbody;
    public float flapStrength;

    // Start is called before the first frame update (plays only once)
    void Start()
    {
        gameObject.name = "Roger";
    }

    // Update is called once per frame
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            myRigidbody.velocity = Vector2.up * flapStrength;
        }
    }
}
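A common cause of intermittently missed presses in Unity is polling Input.GetKeyDown from FixedUpdate, which does not run every rendered frame; the code above already polls in Update, so another suspect would be something else writing myRigidbody.velocity in the same frame. For reference, a minimal sketch of the buffered-input pattern that avoids the FixedUpdate pitfall, assuming the same fields as above:

using UnityEngine;

public class BirdScriptBuffered : MonoBehaviour
{
    public Rigidbody2D myRigidbody;
    public float flapStrength;

    private bool flapQueued;

    // Update runs every rendered frame, so it never misses a key press.
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
            flapQueued = true;
    }

    // FixedUpdate runs on the physics timestep; consume the buffered press here.
    void FixedUpdate()
    {
        if (flapQueued)
        {
            flapQueued = false;
            myRigidbody.velocity = Vector2.up * flapStrength;
        }
    }
}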

RenderTexture doesn’t work in build

A GameObject's sprite, backed by a Texture2D that is written into using "Texture2D.ReadPixels", works and is visible in the Editor in Play Mode, but is invisible when built.

The Texture2D is written to from the RenderTexture (RT) at timed intervals. When the game is first built, before the first interval where the script writes to it, the RT seems to work; but as soon as the script writes to it, it disappears.

What's odd is, I've tried replacing the Sprite with an Image and instead of using the Texture2D using a material with the RenderTexture itself, but that is invisible when built as well.

The fact that the Texture2D sprite appears before being written to would make me think it's not an inability to render, but an error with the Texture2D being written to, but that doesn't explain why the RenderTexture itself doesn't appear when used as an Image.

Overall, I'm just really confused and don't know what's going on or what to do.

Part of the code where the RT is written into the Texture2D:

public Texture2D CameraFeed;
    
    [...]
    

IEnumerator RefreshCamera(float RefreshRate)
{
        
    yield return new WaitForSeconds(RefreshRate);
    yield return new WaitForEndOfFrame();
    
    CameraFeed.Reinitialize(1, 1);
    CameraFeed.SetPixel(1,1, Color.black);
    CameraFeed.Reinitialize(510, 492);

    if(SelectedCamera == DiningRoom && CameraPowers.DiningRoomPower > 0)
    {
        RenderTexture.active = DiningRoomRT;
        CameraFeed.ReadPixels(new Rect(0, 0, DiningRoomRT.width, DiningRoomRT.height), 0, 0);
        CameraFeed.Apply();
    }
    [...]
    //Terminal
    
    TerminalLinesVisible = 0;
    TerminalTimer = 0;
    
    //
    
    StartCoroutine(RefreshCamera(RefreshRate));
    
    
}
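Two Unity API details worth noting here (stated as general facts, not a diagnosis of this project): Texture2D.Reinitialize discards the existing pixel data, which stays undefined until the next write plus Apply(), and SetPixel coordinates are zero-based, so SetPixel(1, 1, ...) on a 1x1 texture addresses a pixel outside the only valid one at (0, 0). The usual ReadPixels pattern also saves and restores RenderTexture.active. A minimal sketch, reusing CameraFeed and DiningRoomRT from the code above:

RenderTexture previous = RenderTexture.active; // remember whatever was active

RenderTexture.active = DiningRoomRT;
CameraFeed.ReadPixels(new Rect(0, 0, DiningRoomRT.width, DiningRoomRT.height), 0, 0);
CameraFeed.Apply(); // upload the CPU-side pixels to the GPU copy of the texture

RenderTexture.active = previous; // restore so later rendering isn't redirected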

Combine 2D and 3D lighting in Unity

I am using URP and would like to have 2D and 3D objects in the scene affected by lights.

This is not supported by Unity, however the docs said that one can use "Camera Stacking" for this exact case.

While interoperability between the respective Lights and Renderers may be developed in the future, currently a combination of 2D and 3D Lights and 2D and 3D Renderers in a single Scene can be achieved by using the camera stacking technique.

It does not work. I added two cameras: the first as Base, the second as Overlay, both with the 3D renderer. Then I added the overlay camera to the base camera's stack and switched its renderer to the 2D renderer. Here is the message I get:

Only cameras with compatible renderer types can be stacked. The camera: Camera_1 are using the renderer Renderer2D, but the base camera: Camera are using UniversalRenderer. Will skip rendering

One option I have is to use 3D lights only.

Relation between an object and its renderer

Recently I decided to start optimizing code in the game development field.

I found this pacman project on github a good starting point: https://github.com/LucaFeggi/PacMan_SDL/tree/main

I found a lot of things that deserve optimization. One of the things that most caught my attention is that a lot of classes have their Draw (render) method inside them, such as the Pac, Ghost, and Fruit classes...

So I decided to make a Renderer class and move into it the Draw method along with the fields it works on.

Let us take the Pac class inside the PacRenderer.hpp as an example, to clarify things:

class Pac : public Entity{
    public:
        Pac();
        ~Pac();
        void UpdatePos(std::vector<unsigned char> &mover, unsigned char ActualMap[]);
        unsigned char FoodCollision(unsigned char ActualMap[]);
        bool IsEnergized();
        void ChangeEnergyStatus(bool NewEnergyStatus);
        void SetFacing(unsigned char mover);
        bool IsDeadAnimationEnded();
        void ModDeadAnimationStatement(bool NewDeadAnimationStatement);
        void UpdateCurrentLivingPacFrame();
        void ResetCurrentLivingFrame();
        void WallCollisionFrame();
        void Draw();
    private:
        LTexture LivingPac;
        LTexture DeathPac;
        SDL_Rect LivingPacSpriteClips[LivingPacFrames];
        SDL_Rect DeathPacSpriteClips[DeathPacFrames];
        unsigned char CurrLivingPacFrame;
        unsigned char CurrDeathPacFrame;
        bool EnergyStatus;
        bool DeadAnimationStatement;
};

So according to what I described above, the PacRenderer class will be:

class PacRenderer {
public:
        PacRenderer(std::shared_ptr<Pac> pac);
        void Draw();
private:
        std::shared_ptr<Pac> pac; // the entity this renderer draws (assigned in the constructor)
        LTexture LivingPac;
        LTexture DeathPac;
        SDL_Rect LivingPacSpriteClips[LivingPacFrames];
        SDL_Rect DeathPacSpriteClips[DeathPacFrames];
};

PacRenderer::PacRenderer(std::shared_ptr<Pac> pac)
{
    this->pac = pac;

    //loading textures here instead of in the Fruit class
    LivingPac.loadFromFile("Textures/PacMan32.png");
    DeathPac.loadFromFile("Textures/GameOver32.png");
    InitFrames(LivingPacFrames, LivingPacSpriteClips);
    InitFrames(DeathPacFrames, DeathPacSpriteClips);
}

This is after moving Draw() and four fields out of the Pac class.

At first, I thought this was a very good optimization and a clean separation between game logic and graphics rendering.

Until I read this question and its accepted answer: How should renderer class relate to the class it renders?

I found that what the accepted answer describes as

you're just taking a baby step away from "objects render themselves" rather than moving all the way to an approach where something else renders the objects

applies exactly to my solution.

So I want to verify whether this description really applies to my solution.

Is my solution good, did I actually separate game logic from graphics rendering?

If not, what is the best thing to do in this case?

Advice on the assets made by this creator?

I was considering trying the inventory, attributes, and dialogue assets made by this creator. Before I download them, I wanted to look at any tutorials or documentation for their assets. I didn't find much: their GitHub only has a simple readme file in the repos for the three assets I wanted to check out. The only other thing I found was the page listed as their documentation on their publisher page. It was flagged as suspicious by McAfee before I went there, and when I went ahead, it looked odd, so I left immediately. Does anyone know if their assets or website are safe? Would you recommend using their assets, or should I find others to use? For example, are their assets easy to use, do they work without many errors or changes to their code, and are they high quality?

Why does this function cause memory leaks?

I'm trying to make a pickup and drop script by myself, and every time the game calls the pickup function, it seems to cause memory leaks and freezes the game for a few seconds (screenshots: 200 -> 130 FPS). After those few seconds the item does get picked up, but the FPS drops significantly; the drop function seems to work fine.

[screenshot: hierarchy after getting picked up]

*Attached to the Item holder

using UnityEngine;

public class PickupController : MonoBehaviour
{
    public GameObject itemHolder;

    public bool slotFull;

    private GameObject currentObject;

    private IPickable pickupScript;

    public void PickUp(GameObject pickupObject)
    {

        if (pickupObject == null)
        {
            Debug.LogError("Pickup Object is null");
            return;
        }

        slotFull = true;
        currentObject = pickupObject;
        pickupScript = pickupObject.GetComponentInChildren<IPickable>(); //I'm trying to get the WheelRange.cs script

        Debug.Log(pickupObject.gameObject.name);

        if (pickupScript == null)
        {
            Debug.LogError("pickupScript is null.");
            return;
        }


        currentObject.transform.SetParent(itemHolder.transform);

        pickupScript.OnItemPickup(); // note: WheelRange.OnItemPickup (below) calls back into PickUp

        currentObject.transform.localPosition = Vector3.zero;
        currentObject.transform.localRotation = Quaternion.identity;
    }

    public void Drop()
    {
        slotFull = false;
        currentObject.transform.SetParent(null); //Removes the parent
        pickupScript.OnItemDrop();
    }
}

*Attached to WheelRange gameobject

using UnityEngine;

public class WheelRange : MonoBehaviour, IPickable
{
    
    [SerializeField] private Canvas canvas;
    [SerializeField] private BoxCollider boxCollider;
    [SerializeField] private PickupController pickupScript;
    private GameObject parentObject;
    private bool beingUsed;
    private float distToGround;


    void Start()
    {
        beingUsed = false;
        canvas.gameObject.SetActive(false);
        parentObject = this.transform.parent.gameObject; //Gets the actual wheel gameobject
        distToGround = boxCollider.bounds.extents.y;
    }
    public void OnItemPickup()
    {
        beingUsed = true;
        pickupScript.PickUp(parentObject); // calls PickupController.PickUp, which calls OnItemPickup again

        boxCollider.gameObject.SetActive(false);
        canvas.gameObject.SetActive(false);
        
    }

    public void OnItemDrop()
    {
        beingUsed = false;
        boxCollider.gameObject.SetActive(true);
        
    }



    //Rest of these functions probably doesn't apply to the error
    bool IsGrounded()
    {
        return Physics.Raycast(parentObject.transform.position, Vector3.down, distToGround + 0.1f);

    }

    void OnTriggerEnter(Collider other)
    {
        if (other.gameObject.tag == "Player") canvas.gameObject.SetActive(true);
    }

    void OnTriggerExit(Collider other)
    {
        if (other.gameObject.tag == "Player") canvas.gameObject.SetActive(false);
    }
}

The interface

public interface IPickable
{
    void OnItemPickup();

    void OnItemDrop();
}
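As the inline comments above note, PickupController.PickUp calls pickupScript.OnItemPickup(), and WheelRange.OnItemPickup calls pickupScript.PickUp(parentObject), so as written each call re-enters the other method. A minimal sketch of breaking such a cycle with the existing beingUsed flag as a guard (one possible shape, not a verified fix for this project):

public void OnItemPickup()
{
    if (beingUsed) return; // guard: already picked up, don't re-enter

    beingUsed = true;

    // Treat OnItemPickup purely as a notification callback; let the
    // controller drive the actual pickup instead of calling back into it.
    boxCollider.gameObject.SetActive(false);
    canvas.gameObject.SetActive(false);
}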

How is it possible that Minecraft save files are so small?

I want to make a game almost like Minecraft using Three.js, but something that stumps me is how you have over 100,000 blocks in a world and the save files are not that big in size. How does Mojang make Minecraft world save files? (Any version of Minecraft works for me)

For instance, there is this unicorn world my little sister downloaded a long, long time ago for Bedrock edition, and it's just a pink world with tons of ice cream statues, rainbows, castles, Unicorns, and other stuff like that. It goes on for pretty much 2000 blocks in every direction (not including the Y axis) and that world is only 6 MB. How is this possible?
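The short version, as far as Mojang's formats are publicly documented: worlds are split into chunks; chunks the player never generated or modified aren't stored at all (they can be regenerated from the seed); block data within a chunk is stored as small indices into a per-chunk palette; and everything is compressed on disk (Java edition region files use zlib, while Bedrock stores chunks in a LevelDB database). A world that is mostly repeated blocks compresses extremely well. A toy sketch of the core idea, run-length encoding a flat array of palette indices (a hypothetical helper, not Mojang's actual format):

using System.Collections.Generic;

static class ChunkCodec
{
    // Run-length encode palette indices into (value, count) pairs.
    public static List<(byte value, int count)> Encode(byte[] blocks)
    {
        var runs = new List<(byte value, int count)>();
        int i = 0;
        while (i < blocks.Length)
        {
            int j = i;
            while (j < blocks.Length && blocks[j] == blocks[i]) j++;
            runs.Add((blocks[i], j - i));
            i = j;
        }
        return runs;
    }
}

A 16x16x256 chunk that is just air above stone is 65,536 raw block entries but only two runs after encoding, which is why even a 2000-block-radius decorated world can fit in a few megabytes.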

How can I perform hot reload in Godot?

In the documentation for versions 3.5 and 4.3, it mentions that hot reloading is possible:

3.5: https://docs.godotengine.org/en/3.5/getting_started/introduction/godot_design_philosophy.html

4.3: https://docs.godotengine.org/en/4.3/getting_started/introduction/godot_design_philosophy.html

Godot tries to provide its own tools to answer most common needs. It has a dedicated scripting workspace, an animation editor, a tilemap editor, a shader editor, a debugger, a profiler, the ability to hot-reload locally and on remote devices, etc.

The Godot editor runs on the game engine. It uses the engine's own UI system, it can hot-reload code and scenes when you test your projects, or run game code in the editor. This means you can use the same code and scenes for your games, or build plugins and extend the editor.

But how exactly do you perform it?

For example, in Flutter, if you type r or shift + r in the terminal after running $ flutter run, it performs a hot reload. How do you do something similar in Godot? Does simply saving a GDScript file trigger hot reload? (It doesn't seem to work that way for me...)

(More specifically, in Flutter, r is hot reload and shift + r is hot restart.)

I am editing GDScript using either VS Code or the editor included with Godot, and I am running the project using the play button in Godot (screenshot below).


Since this is a large project, providing a minimal code example might be time-consuming or even impossible. Could you simply explain the steps for performing a hot reload? Is it possible to do this using only the inspector? Is it not possible to directly modify the GDScript file for this? Also, is the hot reload triggered automatically, or do I need to press a button to initiate it?

If I can understand the correct patterns and limitations for performing hot reloads accurately, I think I’ll be able to experiment on my own. (Or, I can give up on the patterns that don’t work.)

[screenshot: the play button in the Godot editor]

Unity XR Build and Run no longer working

I have connected the meta quest 3 to my computer via USB cable and allowed debugging on the device.

I have opened the default VR template from Unity Hub, without any alterations.

In Unity I select "build and run" for the default scene. This gives the following messages:

  • Application installed to device "2G0Y...ZK [Quest 3]".
  • Build completed with a result of 'Succeeded' in 11 seconds (10622 ms).

The problem is that the app does not appear on the Meta Quest at all. The Quest does not respond, and there are no error messages.

Post process behavior artifacts on new Unity versions

I have an old project that I need to run on Unity 2022.3.40. It uses post process behavior and post process works absolutely fine in the editor, just as before migrating to new Unity. But when I make an Android build and run it, bloom makes bright surfaces black or bluish. From what I found, post process behavior shows artifacts on Unity 2018 and newer. I can't switch to post process stack because I'm not able to recreate the effects in there.

Here is how it is in the build:

[screenshots of the build]

and here in the editor:

[screenshots of the editor]

The settings for bloom in the post processing profile:

[screenshot of the bloom settings]

I'm using post process behavior 1.0.4.

The artifacts only happen on mobile. According to RenderDoc, the uber shader causes this.

Why palette and grid cell sizes differ

I created a tilemap with the default cell size of 1,1. Then I created a palette with the same cell size, but they are not the same: in the palette the tiles do not fill the entire cell, and changing the pixels per unit does not affect the palette. My problem is that the floor tiles are not the same size as the tiles that should go under them, so I tried resizing them in the palette to fill the whole cell, but in the game they were way bigger than one cell. [screenshot]

When translating controls into German, do I assume a German keyboard?

I am translating a video game into German, and I am wondering about the keys listed for the controls in the settings. On an American keyboard we have QWE, AD, and ZXC forming a square that the player can easily use with their left hand, so those are default controls. But on a German keyboard the Z and the Y have switched places, so do I need to put a Y in place of the Z for the translation? Can I assume that German-speaking people are not using the American keyboard setup?

How can I find valid starting positions for my puzzle game?

Here is an animation I made of how the game plays. https://i.imgur.com/KRXCOyD.mp4 It's an empty board with two tiles (T) showing how they interact with each other. There can also be walls (W) that act like the tiles but they can't be moved — they only stop the moveable tiles.

You can only move one tile at a time. When you move it, it slides until it stops. Tiles could theoretically be placed anywhere on the board, but I'm trying to find what I call starting positions. In graph theory terms, they create a cycle, and once you get into a cycle, you can't escape. I don't want to start outside this cycle. So if I start the puzzle in an invalid state (i.e. not on starting positions), I would eventually get into a valid state, but I don't want it to ever be invalid.

For the animation, there are 28 valid starting positions. If it were just one tile, there would be only 4.

Here's how I think it could be done. Let's just focus on one for now.

[image: the example board]

Here we have a moveable tile in the top left corner and a wall on the bottom row. [image]

From the start, I could move down or right. From down, I could only move back up. From the right, I could move down, left, and then up. From the final up position, I could move left or right but notice these are one-way.

If I number the cells of the board 1 through 25 like this:

[image: the numbered board]

Then the directed graph for all the starting positions would look like this:

[image: the directed graph]

Notice how I can reach any node from any other node. I might have to take a detour to get back to 3 but it's possible.

If I started in the middle which is a non-valid position:

[image: the board with a tile in the middle]

The graph ends up looking like this:

[image: the resulting graph]

Here, 11, 13, and 15 are invalid starting positions. It's difficult to see, but if you start on any of them and then move to a valid position, you can never get back to an invalid position.

I'm trying to make an algorithm that can get all the valid starting positions. Here's how I think I could do it. I'm hoping somebody can tell me if I'm right or wrong, or if there's a better way. I'd start with a depth-first or breadth-first search. This could generate a list of candidates. Then for each cell that wasn't visited, do another search until I have a list of lists of candidates. Then I would union all the lists together. Maybe that would work, I'm not sure.

I don't know if I could use something like Tarjan's Algorithm for strongly connected components, or if I just need a cycle-detection algorithm.

I'm thinking my solution might not work, because it's possible to have more than 1 list of valid starting positions. Consider the following:

[image: two tiles, each forming its own cycle]

Each tile forms its own cycle. For these examples I'm only considering 1 or 2 tiles, but in reality there can be anywhere from 1 to (empty cells - 1) tiles; there needs to be at least one empty cell to move into. And I understand these graphs can get quite large.
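If I'm reading the setup right, the valid starting positions are exactly the nodes of the move graph's closed strongly connected components (components with no edges leaving them): within such a component every node can reach every other, and once entered it can't be escaped. So Tarjan's algorithm is a natural fit, and it also handles several independent cycles, since each comes out as its own component. A minimal sketch, assuming the move graph is already built as adjacency lists over whatever states you choose (single cells in the one-tile example):

using System;
using System.Collections.Generic;

static class SccFinder
{
    // adj[v] = states reachable from v in one slide. Returns all SCCs (Tarjan).
    public static List<List<int>> FindSccs(List<int>[] adj)
    {
        int n = adj.Length, counter = 0;
        var index = new int[n];
        var low = new int[n];
        Array.Fill(index, -1);
        var onStack = new bool[n];
        var stack = new Stack<int>();
        var sccs = new List<List<int>>();

        void StrongConnect(int v)
        {
            index[v] = low[v] = counter++;
            stack.Push(v);
            onStack[v] = true;

            foreach (int w in adj[v])
            {
                if (index[w] == -1) { StrongConnect(w); low[v] = Math.Min(low[v], low[w]); }
                else if (onStack[w]) { low[v] = Math.Min(low[v], index[w]); }
            }

            if (low[v] == index[v]) // v is the root of a component
            {
                var scc = new List<int>();
                int w;
                do { w = stack.Pop(); onStack[w] = false; scc.Add(w); } while (w != v);
                sccs.Add(scc);
            }
        }

        for (int v = 0; v < n; v++)
            if (index[v] == -1) StrongConnect(v);

        return sccs;
    }
}

A component is then a set of valid starting positions when none of its nodes has an edge into a different component; that check is one pass over each component's edges.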

Are unsafe attacks necessary in combat systems?

I have been studying a lot of combat systems across various genres of games, and from what I have seen, unsafe attacks seem to be quite a common staple. But I am not sure why. I think I understand the concept/purpose of unsafe attacks as a mechanic in a combat system; it essentially boils down to a risk-vs-reward system, right? To balance an attack and prevent it from being overpowered, the risk of it being unsafe is introduced: if you get it right you reap a huge reward, but if not you are heavily punished. Conversely, a less risky attack reaps a correspondingly smaller reward.

I suppose another way to pose the same question is: what effect would it have on a combat system if all attacks were made safe? For example, would this make the combat more offense- or defense-focused as opposed to an equal balance of both? Do games with such combat systems even exist?

I feel that adding a sprinkle of reality to this concept may shed some more light on why I find this mechanic slightly confusing. I am happy to be proven wrong, but my understanding is that in real life a skilled combatant (of any discipline) would never intentionally attack with a move they know is unsafe, yet from what I have seen many games feature player characters with a plethora of unsafe attacks. Doesn't this go against the narrative that this is a skilled combatant? Another example is the basic jab; again, happy to be proven wrong, but my understanding is that its purpose is to create momentum for the attacker, hunt for an opening, and serve as the fastest and safest attack in their arsenal. Yet so many games have jabs be unsafe on block. Why?

Hopefully I have explained my question in enough detail but if not please let me know what is missing and I'll be happy to add it. Thank you and looking forward to some insight on this part of combat systems.

Unity NullReferenceException even though I see it's not in the debugger

This is the full error:

NullReferenceException: Object reference not set to an instance of an object.

The variable activeWeapon is set to an instance in the Unity editor. The problem occurs when setting bullet.transform.position. I can follow along in the debugger and see that activeWeapon is not null when I step onto and past that line, but when I step out of the function, I get the error above. Even when I use activeWeapon.transform I get the same error.

Using the player's transform shoots the bullet from the player, and not from the active weapon's position.

public void weaponAttack()
{
    GameObject bullet = ObjPoolWeapon.Instance.GetPooledObj();
    //Vector3 spawnLocation = activeWeapon.transform.position + activeWeapon.spawnLocationWM;
    bullet.transform.position = activeWeapon.spawnLocationGO.transform.position; // <-- the line the error points at
    bullet.transform.rotation = Quaternion.identity;
    bullet.SetActive(true);
}

[screenshot of the debugger]
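Two things worth checking (general Unity facts, not a diagnosis of this project): destroyed UnityEngine.Object instances compare equal to null through Unity's overloaded == operator even though the debugger still shows a live-looking reference, and a chained expression like activeWeapon.spawnLocationGO.transform.position can throw on any link in the chain. A sketch that splits the chain to pinpoint the null link, reusing the names from the question:

public void weaponAttack()
{
    GameObject bullet = ObjPoolWeapon.Instance.GetPooledObj();
    if (bullet == null) { Debug.LogError("Pool returned null"); return; }

    if (activeWeapon == null) { Debug.LogError("activeWeapon is null (or destroyed)"); return; }
    if (activeWeapon.spawnLocationGO == null) { Debug.LogError("spawnLocationGO is not assigned"); return; }

    bullet.transform.position = activeWeapon.spawnLocationGO.transform.position;
    bullet.transform.rotation = Quaternion.identity;
    bullet.SetActive(true);
}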

What exactly is component-based architecture and how do I get it to work?

My problems with OOP:

For a while now, I have been attempting to make full games in Godot, but I keep running into issues around poor organization and planning with my classes and OOP architecture. After days of working on a project, I would run into a devastating issue with the way I had organized my class hierarchy, forcing me to restart or give up on the project rather than waste time fixing every class in the hierarchy. I have recently become way too timid to get anything done on my project, fearing that one mistake in my organization could force me to restart again.

What I think the functionality of Component-based architecture is:

When I came across component-based architecture, I thought I could be able to make a base class and then integrate component classes that serve separate functionalities to the base class, so I wouldn't have to make large class hierarchies and better manage and organize my code.

Why I need help with implementing component-based architecture:

While trying to implement a component-based design, I found that I had no idea how to make one work at all. I don't understand how I could integrate the functionality of one class into another without having prebuilt code made directly for each possible component, which defeats the purpose of component-based architecture. I don't know how to get the base class to properly run component code without directly and consciously addressing each type of component. I keep trying to do research on my own, but I just end up getting more confused as I look it up.
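The piece that usually unlocks this is that the owner only ever talks to a component interface, never to concrete component types; new behavior is added by attaching a new component, not by subclassing, so the base class runs component code without addressing each type. A minimal engine-agnostic sketch in C# (names are illustrative, not Godot API; in Godot itself, child nodes typically play the component role):

using System.Collections.Generic;

// The owner only knows this contract, never the concrete component types.
public interface IComponent
{
    void Update(Entity owner, float delta);
}

public class Entity
{
    private readonly List<IComponent> components = new List<IComponent>();

    public void Add(IComponent c) => components.Add(c);

    public void Update(float delta)
    {
        // Runs every attached behavior without knowing what any of them are.
        foreach (var c in components)
            c.Update(this, delta);
    }
}

public class HealthComponent : IComponent
{
    public int Hp = 10;
    public void Update(Entity owner, float delta) { /* regen, death checks... */ }
}

// Usage: var player = new Entity(); player.Add(new HealthComponent());
// Adding a MovementComponent later requires no change to Entity at all.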

Octree Query - Frustum Search and Recursive Vector Inserts

Brief

I have spent probably the last year thinking about implementing an Octree data structure into my C++ game engine project for scene management and frustum culling of lights and meshes. Right now my goal is to beat the performance of my current iterative brute force approach to frustum testing every single light and mesh in the scene.

I finally decided to attack this head on and have over the past week implemented a templated Octree class which allows me to store data within my octree such as UUID (uint32_t in my case). I also plan to be able to repurpose this structure for other features in the game engine, but for now, frustum culling is my primary goal for this system.

Now down to brass tacks, I have a performance issue with std::vector::insert() and the recursive nature of my current design.

Structure

  1. Octree<typename DataType>, this is the base class which manages all API calls from the user such as insert, remove, update, query (AABB, Sphere, or Frustum), etc. When I create the Octree, the constructor takes an OctreeConfig struct which holds basic information on what properties the Octree should take, e.g., MinNodeSize, PreferredMaxDataSourcesPerNode, etc.
  2. OctreeDataSource<typename DataType>, this is a simple struct that holds an AABB bounding box that represents the data in 3D space, and the value of the DataType, e.g., a UUID. I plan to also extend this so I can have bounding spheres or points for the data types as well.
  3. OctreeNode<typename DataType>, this is a private struct within the Octree class, as I do not want the user to access the nodes directly; however, each node has a std::array<OctreeNode<DataType>, 8> for its children, and it also holds a std::vector<std::shared_ptr<OctreeDataSource<DataType>>> which holds a vector of smart pointers to the data source.

Problem

My current issue is the performance impact of std::vector::insert() that is called recursively through the OctreeNode's when I call my Octree::Query(CameraFrustum) method.

As seen above in my structure, each OctreeNode holds an std::vector of data sources and when I query the Octree, it range inserts all of these vectors into a single pre-allocated vector that is passed down the Octree by reference.

When I query the Octree, it takes the following basic steps:

Query Method

  1. Octree::Query
    1. Create a static std::vector and ensure that on creation it has reserved space for the query (currently I am just hard coding this to 1024 as this sufficiently holds all the mesh objects in my current octree test scene, so there are no reallocations when performing an std::vector range insert).
    2. Clear the static vector.
    3. Call OctreeNode::Query and pass the vector as reference.
  2. OctreeNode::Query
    1. Check Count of data sources in current node and children; if we have no data sources in this node and its children, we return - simples :)
    2. Conduct a frustum check on the current node AABB bounds. Result is either Contains, Intersects, or DoesNotContain.
      • Contains: (PERFORMANCE IMPACT HERE) If the current node is fully contained within the frustum, we will simply include all DataSources into the query from the current and all child nodes recursively. We call OctreeNode::GatherAllDataSources, and pass the static vector created in Octree::Query() by reference.
      • Intersects: We individually frustum check each OctreeDataSource::AABB within this node's data source vector, then we recursively call OctreeNode::Query on each of the children to perform this function recursively.

OctreeNode::GatherAllDataSources (the problem child)

I have used profiling macros to measure the accumulated amount of time this function takes each frame. If I call Query once in my main engine game loop, the GatherAllDataSources() takes roughly 60% if not more of the entire Query method time.

[screenshot: Octree profiler results]

You can also see from these profile results that the Octree Query takes twice as long as "Forward Plus - Frustum Culling (MESHES)", which is the brute-force approach of frustum checking every mesh within the scene (the scene has 948 meshes with AABBs).

I've narrowed the issue down to the line of code with the comment below:

void GatherAllDataSources(std::vector<OctreeData>& out_data) {
    
    L_PROFILE_SCOPE_ACCUMULATIVE_TIMER("Octree Query - GatherAllDataSources"); // Accumulates a profile timer results each time this method is called. Profiler starts time on construction and stops timer and accumulates result within a ProfilerResults class.
    if (Count() == 0) {
        CheckShouldDeleteNode();
        return;
    }

    if (!m_DataSources.empty()) {
        // This is the line of code which takes most of the query's search time.
        // As you can see below as well, the cost grows because this function is
        // called recursively for all children, effectively gathering every data
        // source within this node and all of its children.
        out_data.insert(out_data.end(), m_DataSources.begin(), m_DataSources.end());
    }

    if (!IsNodeSplit()) 
        return;
        
    // Recursively gather data from child nodes
    for (const auto& child : m_ChildrenNodes) {
        if (child) {
            child->GatherAllDataSources(out_data); // Pass the same vector to avoid memory allocations
        }
    }       
}

Question Time

How can I significantly improve the efficiency of gathering data sources recursively from my child nodes?

I am open to entirely changing the approach of how data sources are stored within the Octree, and how the overall structure of the Octree is designed, but this is where I get stuck.

I'm very inexperienced when it comes to algorithm optimisation or C++ optimisation, and as this is a new algorithm I have attempted to implement, I'm finding it very difficult to find a solution to this problem.

Any tips/tricks are welcome!

You can find the full version of my current Octree implementation code here (please note I am not finished yet with other functionality, and I will probably be back if I can't find solutions for Insert and Remove optimisation!).

Here are some resources I have reviewed:

If you're also interested in the rest of my code base it can be found on GitHub through this link. I mostly operate in the Development branch. These changes haven't been pushed yet, but I've faced a lot of challenges during this project's journey so if you have any further insights to my code or have any questions about how I've implemented different features, please give me a shout!

Character controller that can handle sloped terrain and boxy ledge traversal

I am working on a character controller for a 3D platformer in Unity. I cannot find an approach that satisfies me.

I have experimented with these approaches in order to learn about their virtues and pitfalls:

  1. Rigidbody + CapsuleCollider + native physics system (gives you something like Fall Guys)
  2. Rigidbody + CapsuleCollider + custom velocity handling, only using physics system to resolve collisions (this method is illustrated in Catlike Coding tutorial here)
  3. Built-in CharacterController
  4. Custom character controller that uses Unity methods to detect geometric collisions, but does its own collision resolution via depenetration (this method is illustrated in Roystan Ross tutorial here)

See also this video by iHeartGameDev summarizing different approaches.


For my particular use case, each one of these has been better than the last.

After following Roystan's tutorial, I am a big fan of the pushback method of handling collision. Rather than use casts to catch collision before you move your object, you move your object, then find collisions, then resolve them using depenetration.

Roystan's method represents the character as a stack of three spheres for the same reason people favor capsule colliders in 3D: it makes handling slopes much easier (and also because depenetration is easier when you think in terms of spheres).

But the thing I am struggling with is that I don't want the player to be able to slide up or down ledges when traversing them.

Basically, when jumping up or walking off a ledge, I want my character to be treated as a box.

So I am struggling to find a way to accommodate both of the following:

  • I want to support sloped MeshCollider ground (not too noisy, but will definitely be possible to have 4 collision points at a time)
  • I want ledge traversal (up and down) to treat my player as a box

Here are diagrams illustrating what you normally get with a capsule, versus what I want.

Down ledge: enter image description here

Up ledge: enter image description here

My thinking is that I have two options:

  1. Represent the character as a box and use box depenetration techniques to move him along sloped ground (for example, using Unity's ComputePenetration())
  2. Represent the character as a capsule (or stack of three spheres like in Roystan's tutorial) and add special case logic to get the boxy ledge traversal I want

One problem I can foresee with approach 1 is properly doing the depenetration on noisy sloped ground, and one problem I can foresee with approach 2 is properly writing the special cases. (My game is relatively low-poly and retro-styled, so I wouldn't mind the player not appearing perfectly flush with slopes, which comes with the box representation of approach 1.)

In any event, I am just looking for advice on how to proceed with this problem. How can I get the boxy handling of ledges while also getting traversal on sloped MeshCollider terrain?

Is either of these approaches better than the other for what I am after, and is there an alternative approach I haven't considered?
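On option 1: Unity's Physics.ComputePenetration returns a direction and distance that separate two overlapping colliders, so a move-then-resolve loop is straightforward to prototype. A sketch assuming a BoxCollider player and uniform scale (the iteration count and overlap buffer size are arbitrary choices for the sketch, not a tuned implementation):

using UnityEngine;

public class BoxDepenetration : MonoBehaviour
{
    public BoxCollider box; // the player's collider
    private readonly Collider[] overlaps = new Collider[16];

    // Call after moving the transform: push the box out of anything it overlaps.
    public void ResolveOverlaps()
    {
        for (int iteration = 0; iteration < 4; iteration++) // a few passes for multiple contacts
        {
            int count = Physics.OverlapBoxNonAlloc(
                transform.TransformPoint(box.center),
                Vector3.Scale(box.size * 0.5f, transform.lossyScale),
                overlaps, transform.rotation);

            bool moved = false;
            for (int i = 0; i < count; i++)
            {
                Collider other = overlaps[i];
                if (other == box) continue; // skip self

                if (Physics.ComputePenetration(
                        box, transform.position, transform.rotation,
                        other, other.transform.position, other.transform.rotation,
                        out Vector3 direction, out float distance))
                {
                    transform.position += direction * distance; // push out along the separation vector
                    moved = true;
                }
            }

            if (!moved) break; // no overlaps left to resolve
        }
    }
}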

Rectangular shape NavMeshAgent for Unity?

I have an issue with a tank character. Unity's NavMeshAgent only has a cylinder shape, so when enemies are nearby it causes unwanted results. If the agent radius is too big (covering the entire tank body), the enemies can't approach the sides of the vehicle. If it is too small (only covering the center), the enemies pass through the front and back of the vehicle. Adding a collider doesn't change anything; the NavMeshAgent ignores colliders. How do I make the NavMeshAgent's shape fit the mesh?

Stop an object from rotating past a certain rotation value in Unity

I'm learning how to program in Unity, so bear with me. I'm making a Flappy Bird-style game and I'm having issues with my z-rotation boundaries. Let's say I have a gameObject (call it Bird) that rises and falls along the y-axis. When it falls, I want its z-axis rotation to go clockwise (negative rotation), ramping from 0 toward -90 in float values. The bird should not keep spinning, but stay fixed at the limit until I start flying again. When I perform my fly action, the bird should reset the z-rotation back to 0 gradually, not jump from -45 to 0 immediately.

So far I've had no luck stopping the bird's spin; it just keeps rotating past the range I want. My range is 0 to -45 on the z-axis. [screenshots]

I have tried playing around with the transformation of my z values to get an idea, but nothing. From what I have gathered and tried, I was playing around with the eulerAngles values, Rigidbody.freezeRotation, the transform.Rotate() method, and even the Quaternion.Euler() method.

here is the code function example I'm making:

public float zTest;
public Vector3 movementDirection;

private void FallSpeed()
{
    movementDirection.y +=  my_gravity * Time.deltaTime; //my_gravity is set to -9.81f
    transform.position += movementDirection * Time.deltaTime;
    zTest += 1 * movementDirection.y;
    transform.rotation = Quaternion.Euler(0, 0, zTest);
    if ((transform.rotation.z >= -45.0f && transform.rotation.z <= 0.0f))
    {
        transform.Rotate(0, 0, zTest); //I have a feeling this is completely bad, but I was trying to reset my rotation values.
        // transform.rotation = Quaternion.Euler(0, 0, zTest); //Another way I was trying it
        // currentEuler = new Vector3(transform.rotation.x, transform.rotation.y, -69); //Another way I was trying it
    }
}

To be honest, a lot of reading documentation has made me more confused about how this interaction works, and I'm not thinking straight at this point. Does anyone have suggestions for tackling this problem, or can you point me in the right direction? If anything, I will make more edits as needed for clarification and documentation for myself and others.
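One likely culprit, hedged since only this fragment is shown: transform.rotation is a quaternion, so its .z component is not an angle in degrees, and the range check above will not behave as intended. The usual pattern is to keep your own float angle, clamp it with Mathf.Clamp, and write it back via Quaternion.Euler. A sketch reusing zTest and movementDirection from the question (resetSpeed is a hypothetical field for the gradual reset):

private void FallSpeed()
{
    movementDirection.y += my_gravity * Time.deltaTime;
    transform.position += movementDirection * Time.deltaTime;

    // Accumulate our own angle and clamp it to the allowed range; never read
    // angles back from transform.rotation.z, which is a quaternion component.
    zTest += movementDirection.y;
    zTest = Mathf.Clamp(zTest, -45f, 0f);
    transform.rotation = Quaternion.Euler(0f, 0f, zTest);
}

// On a flap, ease the angle back toward 0 instead of snapping:
// zTest = Mathf.MoveTowards(zTest, 0f, resetSpeed * Time.deltaTime);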

How to keep a camera confined inside a 3d Collider

I am trying to create a confiner for my camera using the bounds of a collider. The issue is that when I hit the wall of the confiner I disable the camera movement, but once it's disabled I cannot move the camera at all.

I've tried everything I can think of, like storing the last "valid" position and restoring it if the confiner is hit, but that does not seem to work.

void HandleInput()
    {
        if (inputDisabled)
            return;

        //Speed controls
        if (Input.GetKey(KeyCode.LeftShift))
        {
            movementSpeed = fastSpeed;
        }
        else
        {
            movementSpeed = normalSpeed;
        }

        // Adjust movement speed based on camera zoom
        movementSpeed *= (cameraTransform.localPosition.y / zoomSpeedFactor);

        Vector3 adjustedForward = transform.forward;
        adjustedForward.y = 0;

        //Movement controls
        if (Input.GetKey(KeyCode.W) || Input.GetKey(KeyCode.UpArrow))
        {
            newPosition += (adjustedForward * movementSpeed);
        }
        if (Input.GetKey(KeyCode.S) || Input.GetKey(KeyCode.DownArrow))
        {
            newPosition += (adjustedForward * -movementSpeed);
        }
        if (Input.GetKey(KeyCode.D) || Input.GetKey(KeyCode.RightArrow))
        {
            newPosition += (transform.right * movementSpeed);
        }
        if (Input.GetKey(KeyCode.A) || Input.GetKey(KeyCode.LeftArrow))
        {
            newPosition += (transform.right * -movementSpeed);
        }

        //Zoom controls
        if (Input.mouseScrollDelta.y != 0 && !EventSystem.current.IsPointerOverGameObject())
        {
            newZoom -= Input.mouseScrollDelta.y * zoomAmount;
            newZoom.y = ClampValue(newZoom.y, zoomClamp.x, zoomClamp.y);
        }

        if (!collider.bounds.Contains(newPosition)) //DISABLE MOVEMENT
            return;

        transform.position = Vector3.Lerp(transform.position, newPosition, Time.unscaledDeltaTime * acceleration);
        cameraTransform.localPosition = Vector3.Lerp(cameraTransform.localPosition, newZoom, Time.unscaledDeltaTime * acceleration);
    }
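An alternative to rejecting the move outright is to clamp the target position back inside the volume, so the camera can still slide along the wall instead of freezing; Bounds.ClosestPoint does exactly that. A sketch against the code above:

// Instead of: if (!collider.bounds.Contains(newPosition)) return;
// clamp the target into the confiner so movement along the walls still works.
newPosition = collider.bounds.ClosestPoint(newPosition);

transform.position = Vector3.Lerp(transform.position, newPosition, Time.unscaledDeltaTime * acceleration);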

Custom inspector values in Prefab Mode not saved to prefab

I have a prefab with a custom GridModel component which has a custom editor. For some reason whenever I edit a value in Prefab Mode, the value doesn't save in the prefab asset and vice versa. How can I fix this issue?

Here is my code:

public class GridModel : MonoBehaviour, IGridModel
{
    public Vector2Int dimensions;
}
[CustomEditor(typeof(GridModel))]
public class GridModelEditor : Editor
{
    private GridModel gridModel;

    void OnEnable()
    {
        gridModel = (GridModel)target;
    }

    public override void OnInspectorGUI()
    {
        gridModel.dimensions = EditorGUILayout.Vector2IntField("Size", gridModel.dimensions);
    }
}
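Writing straight to the target in OnInspectorGUI bypasses Unity's serialization, Undo, and dirty-flag machinery, which is the usual reason edits in Prefab Mode don't persist. The standard pattern goes through serializedObject instead; a sketch with the same class names as above (the string passed to FindProperty must match the serialized field name):

using UnityEditor;
using UnityEngine;

[CustomEditor(typeof(GridModel))]
public class GridModelEditor : Editor
{
    private SerializedProperty dimensions;

    void OnEnable()
    {
        // Bind to the serialized field rather than the live object.
        dimensions = serializedObject.FindProperty("dimensions");
    }

    public override void OnInspectorGUI()
    {
        serializedObject.Update();
        EditorGUILayout.PropertyField(dimensions, new GUIContent("Size"));
        // Records undo and marks the object (and prefab) dirty so changes save.
        serializedObject.ApplyModifiedProperties();
    }
}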

How to stop the slide on slopes?

I'm struggling with a wee issue where if my character walks up a slope he slides back down when at rest, and bounces down when running down the slope. I've followed a few videos but none seem to address the issue. I've posted my movement code so far and I'm not opposed to fundamentally changing this, however with the other aspects of my game, the rigid body and collider setup seems to be working quite well. Any ideas?

 //inputs
    if (Input.GetKey(buttonKey["Left"]))
    {
        inputHorizontal = -1;
    }
    else if (Input.GetKey(buttonKey["Right"]))
    {
        inputHorizontal = 1;
    }
    else
    {
        inputHorizontal = 0;
    }

    //jump
    if (Input.GetKey(buttonKey["Jump"]) && isgrounded.Grounded && canJump)
    {
        jump();
        jumpTimerCurrent = 0;
        canJump = false;
    }

    if (jumpTimerCurrent <= jumpTimerReset)
    {
        jumpTimerCurrent += Time.fixedDeltaTime;
    }
    else
    {
        canJump = true;
    }

 void FixedUpdate()
{
    rb.velocity = new Vector2(inputHorizontal * Time.fixedDeltaTime * runSpeed, rb.velocity.y);
}

    void jump()
{
    rb.velocity = new Vector2(rb.velocity.x, 0.0f);
    rb.AddForce(Vector2.up * jumpForce, ForceMode.Force);
}
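For what it's worth, one common cause (an assumption here, since the collider and ground-check setup aren't shown): writing a purely horizontal velocity every FixedUpdate fights gravity on slopes, so the body creeps down-slope at rest and hops when running downhill. A widespread fix is to move along the ground's surface direction while grounded; a sketch assuming a Rigidbody2D and that groundNormal is filled in by your ground check (e.g. a raycast hit normal):

Vector2 groundNormal = Vector2.up; // updated by the ground check

void FixedUpdate()
{
    float targetSpeed = inputHorizontal * Time.fixedDeltaTime * runSpeed;

    if (isgrounded.Grounded)
    {
        // Direction along the slope surface: the ground normal rotated 90 degrees.
        Vector2 alongSlope = new Vector2(groundNormal.y, -groundNormal.x);
        rb.velocity = alongSlope * targetSpeed; // zero input => zero velocity, so no sliding
    }
    else
    {
        rb.velocity = new Vector2(targetSpeed, rb.velocity.y);
    }
}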

Why does reading my depth texture in GLSL return less than one?

I've created a depth texture in OpenGL (using C#) as follows:

// Create the framebuffer.
var framebuffer = 0u;

glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

// Create the depth texture.
var depthTexture = 0u;

glGenTextures(1, &depthTexture);
glBindTexture(GL_TEXTURE_2D, depthTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 800, 600, 0, GL_DEPTH_COMPONENT, GL_FLOAT, null);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0);

Later, I sample from the depth texture as follows:

float depth = texture(depthTexture, texCoords).r;

But even when no geometry has been rendered to that pixel, the depth value coming back is less than 1 (it seems to be very slightly above 0.5). This is confusing to me since, per the documentation on glClearDepth, the default clear value is 1. Note that this is not a problem of linearizing depth, since I'm attempting to compare depth directly (using the same near and far planes), not convert that depth back to world space.

Why is my depth texture sample returning <1 when no geometry has been rendered?
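One thing worth ruling out (an assumption, since the render loop isn't shown): a framebuffer whose only attachment is a depth texture is incomplete until its draw and read buffers are set to GL_NONE, and sampling a texture that is incomplete, not cleared, or still attached to the currently bound framebuffer yields undefined values. A sketch of the checks, in the same C-style bindings as the code above:

// With only a depth attachment, tell GL there is no color buffer.
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

// Verify completeness before rendering the depth pass.
var status = glCheckFramebufferStatus(GL_FRAMEBUFFER);

if (status != GL_FRAMEBUFFER_COMPLETE)
{
    Console.WriteLine($"Framebuffer incomplete: 0x{status:X}");
}

// Clear the depth buffer each frame while this framebuffer is bound,
// and unbind the framebuffer before sampling its depth texture.
glClear(GL_DEPTH_BUFFER_BIT);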

Emotiv Epoc integration with Unity

I'm coding a small game in Unity to be controlled with the Emotiv Epoc headset. However, as I'm new to Unity, I'm having some trouble setting everything up to connect the device to my game.

I downloaded the SDK from GitHub and tried the sample project, and it worked fine, but I can't translate that to my own project. I opened an issue on that GitHub repository and was advised to use this new plugin.

What I see is that there's a file called EdkDll.cs that contains all the functions I'll need in my project. I suppose I have to build it to create the .dll that will be used in Unity, but when I try to do so in Visual Studio or MonoDevelop (after adding the reference to UnityEngine.dll), a big part of EdkDll.cs is grayed out and I get an error on the calls to the grayed-out functions ("... doesn't exist in the current context"). I tried changing the .NET Framework from 3.5 to another version, but that didn't work either.

I think it can't be so complicated and there must be something I'm doing wrong, or maybe there's another way of getting the EdkDll.cs functions to be available from the scripts in my project.

EDIT:

In the example provided in the SDK (here), there's a script that controls the connection to the device (EmotivCtrl.cs), another to control the player (playerCtrl.cs), and others to control the camera and a text window, but I think I should be able to make my project work without those. The thing is, I don't understand where or how the connection script is called, because if I include it in the assets folder of my project and put a print inside the Awake method, it never shows.

My guess is that it is called here (movm.cs): EmoEngine.Instance.EmoStateUpdated += new EmoEngine.EmoStateUpdatedEventHandler(engine_EmoStateUpdated);, but the function engine_EmoStateUpdated doesn't seem to be reached either.

These are the scripts to control the device and the player of my project:

movm.cs (to control the player):

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class movm : MonoBehaviour
{
    private Rigidbody rb;
    EmoEngine engine;

    void engine_EmoStateUpdated(object sender, EmoStateUpdatedEventArgs e)
    {
        Debug.Log("empieza engine");
        EmoState es = e.emoState;
        Debug.Log("despues");
        /*if (e.userId != 0) 
            return;*/
        Debug.Log("Corrent action: " + es.MentalCommandGetCurrentAction().ToString());
        if (es.MentalCommandGetCurrentAction() == EdkDll.IEE_MentalCommandAction_t.MC_PUSH)
        {
            //Vector3 movement = new Vector3(cam.transform.forward.x, cam.transform.forward.y, cam.transform.forward.z);
            //rb.AddForce(movement * speed);
            rb.AddForce(Vector3.up);
            Debug.Log("Push");
        }
    }
    // Start is called before the first frame update
    void Start()
    {
        Debug.Log("empieza movm");
        rb = GetComponent<Rigidbody>();
        Debug.Log("rigid");
        EmoEngine.Instance.EmoStateUpdated += new EmoEngine.EmoStateUpdatedEventHandler(engine_EmoStateUpdated);
        Debug.Log("asdasd");
    }

    void FixedUpdate()
    {
        if (Input.GetKey("w"))
        {
            gameObject.transform.Translate(2f * Time.deltaTime, 0, 0);
        }
    }

}

EmotivCtrl.cs (to connect to the Emotiv Epoc device):

using UnityEngine;
using UnityEngine.UI;
using System.Collections;
using System.Collections.Generic;

public class EmotivCtrl : MonoBehaviour {
    public GameObject modal;
    public Text message_box;
    public InputField userName;
    public InputField password;
    public InputField profileName;

    public static EmoEngine engine;
    public static int engineUserID = -1;
    public static int userCloudID = 0;
    static int version  = -1; 

    /*
     * Create instance of EmoEngine and set up his handlers for 
     * user events, connection events and mental command training events.
     * Init the connection
    */
    void Awake () 
    {
        Debug.Log("awake");
        engine = EmoEngine.Instance;
        engine.UserAdded                      += new EmoEngine.UserAddedEventHandler (UserAddedEvent);
        engine.UserRemoved                    += new EmoEngine.UserRemovedEventHandler (UserRemovedEvent);
        engine.EmoEngineConnected             += new EmoEngine.EmoEngineConnectedEventHandler (EmotivConnected);
        engine.EmoEngineDisconnected          += new EmoEngine.EmoEngineDisconnectedEventHandler (EmotivDisconnected);
        engine.MentalCommandTrainingStarted   += new EmoEngine.MentalCommandTrainingStartedEventEventHandler (TrainingStarted);
        engine.MentalCommandTrainingSucceeded += new EmoEngine.MentalCommandTrainingSucceededEventHandler (TrainingSucceeded);
        engine.MentalCommandTrainingCompleted += new EmoEngine.MentalCommandTrainingCompletedEventHandler (TrainingCompleted);
        engine.MentalCommandTrainingRejected  += new EmoEngine.MentalCommandTrainingRejectedEventHandler (TrainingRejected);
        engine.MentalCommandTrainingReset     += new EmoEngine.MentalCommandTrainingResetEventHandler (TrainingReset);
        engine.Connect ();
        Debug.Log("fini");
    }

    /*
     * Init the user, password and profile name if you want it
    */
    void Start(){
        Debug.Log("START");
        userName.text = "";
        password.text = "";
        profileName.text = "";
    }

    /*
     * Call the ProcessEvents() method in Update once per frame
    */
    void Update () {
        engine.ProcessEvents ();
    }

    /*
     * Close the connection on application exit
    */
    void OnApplicationQuit() {
        Debug.Log("Application ending after " + Time.time + " seconds");
        engine.Disconnect();
    }

    /*
     * Several methods for handling the EmoEngine events.
     * They are self explanatory.
    */
    void UserAddedEvent(object sender, EmoEngineEventArgs e)
    {
        message_box.text = "User Added";
        engineUserID = (int)e.userId;
    }

    void UserRemovedEvent(object sender, EmoEngineEventArgs e)
    {
        message_box.text = "User Removed";  
    }

    void EmotivConnected(object sender, EmoEngineEventArgs e)
    {
        Debug.Log ("conectado");
        message_box.text = "Connected!!";
    }

    void EmotivDisconnected(object sender, EmoEngineEventArgs e)
    {
        message_box.text = "Disconnected :(";
    }

    public bool CloudConnected()
    {
        if (EmotivCloudClient.EC_Connect () == EdkDll.EDK_OK) {
            message_box.text = "Connection to server OK";
            if (EmotivCloudClient.EC_Login (userName.text, password.text)== EdkDll.EDK_OK) {
                message_box.text = "Login as " + userName.text;
                if (EmotivCloudClient.EC_GetUserDetail (ref userCloudID) == EdkDll.EDK_OK) {
                    message_box.text = "CloudID: " + userCloudID;
                    return true;
                }
            } 
            else 
            {
                message_box.text = "Cant login as "+userName.text+", check password is correct";
            }
        } 
        else 
        {
            message_box.text = "Cant connect to server";
        }
        return false;
    }

    public void SaveProfile(){
        if (CloudConnected ()) {
            int profileId = -1;
            EmotivCloudClient.EC_GetProfileId(userCloudID, profileName.text);
            if (profileId >= 0) {
                if (EmotivCloudClient.EC_UpdateUserProfile (userCloudID, (int)engineUserID, profileId) == EdkDll.EDK_OK) {
                    message_box.text = "Profile updated";
                } else {
                    message_box.text = "Error saving profile, aborting";
                }
            } else {
                if (EmotivCloudClient.EC_SaveUserProfile (
                    userCloudID, engineUserID, profileName.text, 
                    EmotivCloudClient.profileFileType.TRAINING) == EdkDll.EDK_OK) {
                    message_box.text = "Profiled saved successfully";
                } else {
                    message_box.text = "Error saving profile, aborting";
                }
            }
        }

    }

    public void LoadProfile(){
        if (CloudConnected ()) {
            int profileId = -1;
            EmotivCloudClient.EC_GetProfileId(userCloudID, profileName.text);

            if (EmotivCloudClient.EC_LoadUserProfile (
                userCloudID, (int)engineUserID, 
                profileId, 
                (int)version) == EdkDll.EDK_OK) {
                message_box.text = "Load finished";
            } 
            else {
                message_box.text = "Problem loading";
            }
        }
    }

    public void TrainPush(){
        engine.MentalCommandSetTrainingAction((uint)engineUserID, EdkDll.IEE_MentalCommandAction_t.MC_PUSH);
        engine.MentalCommandSetTrainingControl((uint)engineUserID, EdkDll.IEE_MentalCommandTrainingControl_t.MC_START);
    }

    public void TrainNeutral(){
        engine.MentalCommandSetTrainingAction ((uint)engineUserID, EdkDll.IEE_MentalCommandAction_t.MC_NEUTRAL);
        engine.MentalCommandSetTrainingControl((uint)engineUserID, EdkDll.IEE_MentalCommandTrainingControl_t.MC_START);
    }

    public void TrainingStarted(object sender, EmoEngineEventArgs e){
        message_box.text = "Trainig started";
    }

    public void TrainingCompleted(object sender, EmoEngineEventArgs e){
        message_box.text = "Training completed!!";
    }

    public void TrainingRejected(object sender, EmoEngineEventArgs e){
        message_box.text = "Trainig rejected";
    }

    public void TrainingSucceeded(object sender, EmoEngineEventArgs e){
        message_box.text = "Training Succeeded!!";
        //modal.GetComponent<MessageBox> ().init ("Training Succeeded!!", "Do you want to use this session?", new Decision (AceptTrainig));
    }

    public void AceptTrainig(bool accept){
        if (accept) {
            engine.MentalCommandSetTrainingControl ((uint)engineUserID, EdkDll.IEE_MentalCommandTrainingControl_t.MC_ACCEPT);
        } else {
            engine.MentalCommandSetTrainingControl ((uint)engineUserID, EdkDll.IEE_MentalCommandTrainingControl_t.MC_REJECT);
        }
    }

    public void TrainingReset(object sender, EmoEngineEventArgs e){
        message_box.text = "Command reseted";
    }

    public void Close(){
        Application.Quit ();
    }
}

Particles not rendering over projectors

I am using projectors for shadows. When I use particles for the bike's speed-up (i.e., nitro), the particles get cut out by those shadows.

Here is a screenshot of it:

[screenshot]

Here is my projector shader code:

Shader "Projector/Projector Multiply Black"
{
    Properties
    {
        _ShadowTex("Cookie", 2D) = "gray" { TexGen ObjectLinear }
    _ShadowStrength("Strength",float) = 1
    }

        Subshader
    {
        Tags{ "RenderType" = "Transparent"  "Queue" = "Transparent+100" }
        Pass
    {
        ZWrite Off

        //Fog { Mode Off }

        Blend DstColor Zero

        CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma fragmentoption ARB_fog_exp2
#pragma fragmentoption ARB_precision_hint_fastest
#include "UnityCG.cginc"


        struct v2f
    {
        float4 pos : SV_POSITION;
        float2 uv_Main     : TEXCOORD0;
    };

    sampler2D _ShadowTex;
    float4x4 unity_Projector;
    float _ShadowStrength;

    v2f vert(appdata_tan v)
    {
        v2f o;


        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);

        o.uv_Main = mul(unity_Projector, v.vertex).xy;


        return o;
    }

    half4 frag(v2f i) : COLOR
    {
        half4 tex = tex2D(_ShadowTex, i.uv_Main);
        half strength = (1 - tex.a*_ShadowStrength);
        tex = (strength,strength,strength,strength);
        return tex;
    }
        ENDCG

    }
    }
}

Here is my particle shader code:

// Simple additive particle shader.

Shader "Custom/Particle additive"
{
Properties
{
    _MainTexture ("Particle Texture (Alpha8)", 2D) = "white" {}
}

Category
{
    Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" }
    Blend SrcAlpha One
    Cull Off Lighting Off ZWrite Off Fog {Color (0,0,0,0)}

    BindChannels
    {
        Bind "Color", color
        Bind "Vertex", vertex
        Bind "TexCoord", texcoord
    }

    SubShader
    {
        Pass
        {
            SetTexture [_MainTexture]
            {
                combine primary, texture * primary
            }
        }
    }
}
}

NVIDIA Warp Defomable Mesh using add_soft_mesh

I am trying to simulate a deformable/soft mesh falling onto the ground plane using the NVIDIA Warp framework, but I get the message "inverted tetrahedral element". I downloaded the Stanford bunny as bunny.obj, so I am assuming the mesh data is fine? Could it be because of how the points and indices are loaded in the code?

The code I am using is:

import os
import numpy as np
import warp as wp
import warp.examples
import warp.sim
import openmesh
import meshio
import warp.sim.render
from SimulationDataConfig import *
from pxr import Usd, UsdGeom

class Example:
    def __init__(self, stage_path="bunny.usd"):
        self.sim_width = 8
        self.sim_height = 8



        fps = 60
        self.frame_dt = 1.0 / fps
        self.sim_substeps = 32
        self.sim_dt = self.frame_dt / self.sim_substeps
        self.sim_time = 0.0
        self.sim_iterations = 1
        self.sim_relaxation = 1.0
        self.profiler = {}

        builder = wp.sim.ModelBuilder()

        #m = openmesh.read_trimesh("cylinder.obj")
        mesh_points, mesh_indices = wp.sim.utils.load_mesh(filename="bunny.obj", method="meshio")
        print(mesh_points)        

        mesh_p = np.array(mesh_points, dtype=np.int32).reshape(-1, 3)  # note that the vertex positions are cast to int32 here
        mesh_ind = np.array(mesh_indices, dtype=np.int32).flatten()
        print(mesh_ind)

        #correct_indices = preprocess_tetrahedra(mesh_points, mesh_indices)
        
        #mesh = wp.sim.Mesh(mesh_points, mesh_indices)
        builder.default_particle_radius = 0.01

        builder.add_soft_mesh(
            pos=wp.vec3(0.0, 10.0, 0.0), 
            rot=wp.quat_identity(),
            scale=1.0,
            vel=wp.vec3(0.0, 0.0, 0.0), 
            vertices=mesh_p, 
            indices=mesh_ind, 
            density=100.0,
            k_mu=500.0,
            k_lambda=200.0,
            k_damp=0.0)

        self.model = builder.finalize()
        self.model.ground = True
        self.model.soft_contact_ke = 1.0e3
        self.model.soft_contact_kd = 0.0
        self.model.soft_contact_kf = 1.0e3

        self.integrator = wp.sim.SemiImplicitIntegrator()

        output_dir_root = "example_sims/output"
        output = os.path.join(output_dir_root,"h5_f_{:010d}.h5")


        output_dir = os.path.dirname(output)
        config_file = os.path.join(output_dir, 'config.h5')
        #config = SimulationConfig(self.model, self.sim_dt)
        #self.config = config
        #self.config.write_to_file(config_file)

        self.state_0 = self.model.state()
        self.state_1 = self.model.state()

        if stage_path:
            self.renderer = wp.sim.render.SimRenderer(self.model, stage_path, scaling=1.0)
        else:
            self.renderer = None

        self.use_cuda_graph = wp.get_device().is_cuda
        if self.use_cuda_graph:
            with wp.ScopedCapture() as capture:
                self.simulate()
            self.graph = capture.graph


    def simulate(self):
        for _s in range(self.sim_substeps):
            wp.sim.collide(self.model, self.state_0)

            self.state_0.clear_forces()
            self.state_1.clear_forces()

            self.integrator.simulate(self.model, self.state_0, self.state_1, self.sim_dt)

            # swap states
            (self.state_0, self.state_1) = (self.state_1, self.state_0)

    def step(self):
        with wp.ScopedTimer("step", dict=self.profiler):
            if self.use_cuda_graph:
                wp.capture_launch(self.graph)
            else:
                self.simulate()
            self.sim_time += self.frame_dt

    def render(self):
        if self.renderer is None:
            return

        with wp.ScopedTimer("render"):
            self.renderer.begin_frame(self.sim_time)
            self.renderer.render(self.state_0)
            self.renderer.end_frame()

if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument("--device", type=str, default=None, help="Override the default Warp device.")
    parser.add_argument(
        "--stage_path",
        type=lambda x: None if x == "None" else str(x),
        default="bunny.usd",
        help="Path to the output USD file.",
    )
    parser.add_argument("--num_frames", type=int, default=300, help="Total number of frames.")

    args = parser.parse_known_args()[0]

    with wp.ScopedDevice(args.device):
        example = Example(stage_path=args.stage_path)

        for _ in range(args.num_frames):
            example.step()
            example.render()

        if example.renderer:
            example.renderer.save()

What could be the reason for this message? Does anyone know a fix, or how I should approach it?

Inconsistent android build size

I have built a simple 2D game with Unity. When building for Android, the build size is around 32 MB.

Build size

However, when I install and run the app, it takes up about double that.

App size

(App - 60.22 MB, Data - 106 KB, Cache - 172 KB, Total - 60.49 MB)

I am using a Samsung Galaxy A40, but the same symptom shows up on other devices as well.

Where does the extra 30 MB come from? And why does Unity create 106 KB of data for seemingly no reason? (I know this isn't much, but I can't find where those files are located.)

LWJGL and JOML rotation issues

I'm making a scene builder in LWJGL. Objects in the scene have positions and rotations. Like most scene builders/modelers, I have colored handles to show an object's orientation. I have a typical setup: red points in the positive x direction, blue in the positive z.

The problem is that the handles don't point in the correct direction. I have attached a screenshot showing the issue. The cube on the right has a rotation of 0, 0, 0, and its handles are correct. The cube on the left has a rotation of 0, 30, 0. What confuses me is: why is the blue handle rotated 30 degrees clockwise while the mesh is rotated 30 degrees counter-clockwise?

Screenshot of the two cubes and their rotation handles

I compute the cube's rotation with

public Matrix4f getLocalMatrix() {
    // I update the position, rotation and scale directly, so I recalculate the matrix every time.
    return this.localMatrix.translationRotateScale(this.position, this.rotation, this.scale);
}

And to draw the handles I use

Matrix4f m = gameObject.getLocalMatrix();
gizmos.drawRay(gameObject.position, m.positiveZ(new Vector3f()));

...

public void drawRay(Vector3f start, Vector3f direction) {
    glBegin(GL_LINES);
    glVertex3f(start.x, start.y, start.z);
    glVertex3f(start.x + direction.x, start.y + direction.y, start.z + direction.z);
    glEnd();
}

I use a simple vertex shader, I don't think that's the issue,

layout (location=0) in vec3 inPosition;
layout (location=1) in vec2 texCoord;

out vec2 outTextCoord;

uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;

void main() {
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(inPosition, 1.0);
    outTextCoord = texCoord;
}

The uniforms are being set correctly (I'm assuming). modelMatrix is set to gameObject.getLocalMatrix().

The only thing I can think of is that some of my code uses right-handed coordinates and some left-handed?

Google play console warning about Content labeling and Touch target size Issue

I'm trying to publish my app to production through the Google Play Console, but it is stuck in review. The app is a game built with the Unity3D engine. I'm facing warnings about Content labeling and Touch target size, and maybe because of them the app has been neither approved nor declined by Google, since the console says:

Warnings found. We recommend fixing before releasing to production.

I've searched about this, and it seems to be an issue that can only be fixed inside Android Studio, in an XML file under res > layout. However, there is no layout folder inside res. Is there anything I can do to resolve this issue in Unity?

I'm currently using Unity 2022.3.12f1 LTS.

What I've done

  • Exported the Unity project to Android Studio.
  • Tried to add some code to the layout, but couldn't, since there is no layout folder.

Preview

  • The warnings from Google Play console

Content Labelling

Touch target size

  • Project structure from Android Studio

Android studio project structure

Libgdx Bullet Physics not applying gravity to model instance

I create a ModelInstance in libGDX that I call yellowInstance. I need it to fall under the force of gravity, but it just stays in the air! The ModelInstance is defined as follows:

ModelBuilder yellowBuilder = new ModelBuilder();
yellowBuilder.begin();
Node nod = yellowBuilder.node();
nod.id = "yellowboxy";
Material yellowMat = new Material();
yellowMat.set(PBRColorAttribute.createBaseColorFactor(Color.YELLOW));
MeshPartBuilder yellowPartBuilder = yellowBuilder.part("yellowboxy", GL20.GL_TRIANGLES, VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal, yellowMat);
BoxShapeBuilder.build(yellowPartBuilder, 10f, 20f, 10f, 2f, 2f, 2f);
yellowInstance = new ModelInstance(yellowBuilder.end());

I create the physics for it using Bullet Physics library as such:

btCollisionShape btBox = new btBoxShape(new Vector3(1, 1, 1)); //notice we take halves!
Vector3 localInertia = new Vector3();
btBox.calculateLocalInertia(5f, localInertia);

//MotionStateForPhys msphys = new MotionStateForPhys(yellowInstance.transform);

btRigidBody.btRigidBodyConstructionInfo info = new btRigidBody.btRigidBodyConstructionInfo(5f, null, btBox, localInertia);
btRigidBody btYellowBody = new btRigidBody(info);

/*btYellowBody.setMotionState(msphys);*/

btYellowBody.setWorldTransform(yellowInstance.transform);

dynamicsWorld.addRigidBody(btYellowBody);

btYellowBody.activate(true);

here are the definitions for the important physics variables needed by the library

private btCollisionConfiguration collisionConfiguration;
private com.badlogic.gdx.physics.bullet.collision.btDispatcher btDispatcher;
private btDbvtBroadphase btInterface;
private btDiscreteDynamicsWorld dynamicsWorld;
private btSequentialImpulseConstraintSolver solver;

collisionConfiguration = new btDefaultCollisionConfiguration();
btDispatcher = new btCollisionDispatcher(collisionConfiguration);
btInterface = new btDbvtBroadphase();
solver = new btSequentialImpulseConstraintSolver();
dynamicsWorld = new btDiscreteDynamicsWorld(btDispatcher, btInterface, solver, collisionConfiguration);
dynamicsWorld.setGravity(new Vector3(0, -10f, 0));

Here is how I update the timeStep for the Physics library:

private void update(float deltatime){
    btYellowBody.activate(true);
    dynamicsWorld.stepSimulation(deltatime, 5, 1/60f);
}

Here is my render method that calls the update method where my Physics variables are:

inputHandler.UpdateAfterKeyPress(Gdx.graphics.getDeltaTime(), "levelone");

worldBuilder.update(delta); //will update the physics in levelone!

Gdx.gl.glClearColor(0, .25f, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

ScreenUtils.clear(BACKGROUND_COLOUR, true);

managerScenes.update(Gdx.graphics.getDeltaTime());
managerScenes.render();

if (Gdx.input.isKeyJustPressed(Input.Keys.ESCAPE))
    Gdx.app.exit();

EDIT: If I remove the floor from the physics simulation, the yellow box falls under the force of gravity! Why would removing the floor make the box fall?

This is how the floor is added; there is something I'm missing here:

ModelBuilder mBuilder = new ModelBuilder();
mBuilder.begin();

// Start a new node with a specific name
Node node = mBuilder.node();
node.id = "floory"; // Set the node's id

Material mat = new Material();
mat.set(PBRColorAttribute.createBaseColorFactor(Color.BLACK));
MeshPartBuilder mpartbuilder = mBuilder.part("floory", GL20.GL_TRIANGLES, VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal, mat);
BoxShapeBuilder.build(mpartbuilder, 0, -0.5f, 0, 300f, 1f, 400f);
ModelInstance mInstance = new ModelInstance(mBuilder.end());

sManager.addScene(new Scene(mInstance));

//create the physics id and body/shape properties

btCollisionShape shape = Bullet.obtainStaticNodeShape(mInstance.nodes);

btBoxShape/*btCollisionShape*/ btBox = new btBoxShape(new Vector3(150, 0.5f, 200)); //notice we take halves!
btRigidBody.btRigidBodyConstructionInfo info = new btRigidBody.btRigidBodyConstructionInfo(0, null, btBox, Vector3.Zero);
btRigidBody btBody = new btRigidBody(info);
btBody.setWorldTransform(mInstance.transform);

dynamicsWorld.addCollisionObject(btBody);

Images that I load from Resources folder are not included into the build

I am new to Unity.

What I do: on button click, I load an image from the "Resources" folder with the Resources.Load(<file path>) function. The file path is composed dynamically, depending on conditions. It works in the Editor, but the images are not shown in the build.

It looks like Unity optimizes the assets folder while building, and if there is no explicit reference to an image, it excludes it from the bundle? The files themselves are small, about 10 KB each, so it's not the file size that's the issue.

If my guess is right, what can I do to include those images in the build?

If not, what could be the reason for them to be absent from the build?
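For reference, this is roughly how I compose the path and load the sprite (a minimal sketch; the Icons folder, the id parameter and the IconLoader name are placeholders for my actual setup):

using UnityEngine;
using UnityEngine.UI;

public class IconLoader : MonoBehaviour
{
    public Image target;

    public void ShowIcon(string id)
    {
        // Path is relative to a Resources folder and must omit the extension.
        Sprite sprite = Resources.Load<Sprite>("Icons/" + id);
        if (sprite == null)
        {
            Debug.LogWarning("No sprite found at Resources/Icons/" + id);
            return;
        }
        target.sprite = sprite;
    }
}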

Get Points Inside Staticmesh

I am working with UE5 and want to run simulations using the Lidar Point Clouds plugin. I am building a plugin of my own but am pretty new to C++. I want to detect the points of a PointCloudActor that overlap with StaticMeshActors and then color them. I want to build a node that accepts those two types of inputs, but I can't manage to set up the input for the PointCloudActor. Is there any way to see the C++ code of the UE5 plugin so I can use its methods?

I hope my English is understandable.

How to modify a prefab Explosion effect with your own particle sprite?

My main goal is to create an explosion effect when I click an object. So I searched the Unity Asset Store and found a cool framework called CartoonFX, which is really OK for me. Cartoon FX - Unity Store

I am using the prefab CFXR4 Firework 1 Cyan-Purple. However, the particles scattered around are the default particles provided by the framework itself. I want to replace them with my own particle sprites, so the explosion will look more realistic, as if the object itself is really exploding.

What I tried?

  1. I opened the prefab and its particle system. There, I enabled Texture Sheet Animation and changed the mode from "Grid" to "Sprites". I did not add a sprite here, because I want to assign the particle sprite in the script itself, since an object could be of different types, like Red, Blue, etc.
  2. I tried to use Claude for this, because I could not write it myself. I am sorry, but I just needed to see a working prototype that I could fix later. However, it only gave me invalid pieces of code (for example, it claimed ParticleSystem attributes that don't exist).

I already have the particle sprites, which I load in the script into a Sprite array called particleSprites. However, I could not change the explosion prefab's default particle sprite to my own. I am not very familiar with the ParticleSystem or the framework itself, so I could not guess what to do at this point. I want to keep everything else the same (how particles scatter around, how they move, etc.) and only change the particle sprite.

If you can help me, show me a way, or point me to a resource, I would be glad! This kind of thing really takes so much time, and I have been searching for a while. Other than that, I can understand the code part itself, it's OK. Thanks in advance, love you guys.
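For reference, this is the kind of thing I've been trying (a minimal sketch; ExplosionSpawner, explosionPrefab and spriteIndex are placeholder names, and I'm assuming the Texture Sheet Animation module's AddSprite/SetSprite calls are the right way to swap the sprite):

using UnityEngine;

public class ExplosionSpawner : MonoBehaviour
{
    public ParticleSystem explosionPrefab; // the CFXR4 effect
    public Sprite[] particleSprites;       // my own sprites

    public void Explode(Vector3 position, int spriteIndex)
    {
        ParticleSystem ps = Instantiate(explosionPrefab, position, Quaternion.identity);

        // Texture Sheet Animation must be enabled and set to Sprites mode.
        var tsa = ps.textureSheetAnimation;
        tsa.mode = ParticleSystemAnimationMode.Sprites;

        if (tsa.spriteCount == 0)
            tsa.AddSprite(particleSprites[spriteIndex]);
        else
            tsa.SetSprite(0, particleSprites[spriteIndex]);

        // CFXR prefabs can contain child particle systems; those would
        // presumably need the same treatment.
        ps.Play();
    }
}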

Mesh normals create square pattern on surface

Whether I import a smooth-shaded mesh from Blender or generate a mesh in Unity manually using Unity's built-in normal calculation function, I get a square grid pattern in the shading of my mesh (the red line highlights a couple of the squares).

Each square outline is where a quad is located.

Is this normal? Is there any way I can make this appear smoother?

Edit:

Each point on the triangle exists only once. Points are shared between triangles. They are created with a simple loop:

        List<Vector3> vertexList = new List<Vector3>();

        for (int z = 0; z < zSize; z++)
        {
            for (int x = 0; x < xSize; x++)
            {
                Vector3 v = new Vector3(0, 0, 0);
                v.x = (x * vertexSpacing) - vertexSpacing;
                v.z = (z * vertexSpacing) - vertexSpacing;
                v.y = Mathf.PerlinNoise(v.x + g.transform.position.x, v.z + g.transform.position.z);
                vertexList.Add(v);
            }
        }
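For completeness, this is roughly how the rest of the mesh is assembled from that vertex list (a sketch: two triangles per grid quad reusing the shared vertices, then Unity's built-in normal calculation):

// Two triangles per grid quad, reusing the shared vertices created above.
List<int> triangles = new List<int>();
for (int z = 0; z < zSize - 1; z++)
{
    for (int x = 0; x < xSize - 1; x++)
    {
        int i = z * xSize + x;
        triangles.Add(i);
        triangles.Add(i + xSize);
        triangles.Add(i + 1);
        triangles.Add(i + 1);
        triangles.Add(i + xSize);
        triangles.Add(i + xSize + 1);
    }
}

Mesh mesh = new Mesh();
mesh.SetVertices(vertexList);
mesh.SetTriangles(triangles, 0);
mesh.RecalculateNormals(); // Unity's built-in normal calculation
GetComponent<MeshFilter>().mesh = mesh;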

Grid pattern with no wireframe

Wireframe edges visible

For a unity 3D multiplayer game, how to spawn and despawn multiple gameobjects for specific client?

I am working on a Unity 3D multiplayer game. There are 3 gameobjects (table, chair, pen), and apart from the host, I have two clients, student and teacher; consider them two roles. I know how to instantiate a prefab and sync it across clients. But if I want to spawn and despawn one or two gameobjects for client 1 and not for client 2, how can I achieve that?

private void Update()
{
    if (!IsOwner) return;
    if (Input.GetKeyDown(KeyCode.T))
    {
        spawnedObjectTransform = Instantiate(spawnedObjectPrefab);
        spawnedObjectTransform.GetComponent<NetworkObject>().Spawn(true);

    }
    if (Input.GetKeyDown(KeyCode.Y))
    {
        Destroy(spawnedObjectTransform.gameObject);

    }
    Vector3 moveDir = new Vector3(0, 0, 0);
    if (Input.GetKey(KeyCode.W)) moveDir.z = +1f;
    if (Input.GetKey(KeyCode.S)) moveDir.z = -1f;
    if (Input.GetKey(KeyCode.A)) moveDir.x = -1f;
    if (Input.GetKey(KeyCode.D)) moveDir.x = +1f;

    float moveSpeed = 3f;
    transform.position += moveDir * moveSpeed * Time.deltaTime;

}

Or is there another alternative, like defining roles for clients and then setting permissions in the scene based on those roles? And again, how can I do that?
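To make it concrete, this is the kind of per-client visibility I'm hoping exists (a sketch assuming Netcode for GameObjects, which is what my code above uses; RoleSpawner, tablePrefab and studentClientId are placeholder names):

using Unity.Netcode;
using UnityEngine;

public class RoleSpawner : NetworkBehaviour
{
    public GameObject tablePrefab;

    public void SpawnForClient(ulong studentClientId)
    {
        if (!IsServer) return;

        GameObject instance = Instantiate(tablePrefab);
        NetworkObject netObj = instance.GetComponent<NetworkObject>();

        // Only clients for which this delegate returns true observe the
        // object; it has to be assigned before Spawn().
        netObj.CheckObjectVisibility = clientId => clientId == studentClientId;
        netObj.Spawn(true);

        // Visibility can also be changed after spawning:
        // netObj.NetworkHide(someClientId);
        // netObj.NetworkShow(someClientId);
    }
}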

Thanks in advance!

Collision of two rigid spheres with spin

I'm trying to create a particle simulation (solar-system kind). Until now my particles have had no spin, so the collision is rather simple:

    void collide(const Particle& b) {
        const Vector3d normal = math::normal(position(), b.position());
        const Vector3d relativeVelocity = velocity() - b.velocity();
        const double dot = math::dot(relativeVelocity, normal);
        const Vector3d work = normal * dot;

        _velocity = _velocity - work;
    }

Since I've read that particle spin plays a huge part in such simulations, I'm trying to implement angular momentum, but unfortunately this exceeds my math skills. I've searched the internet for quite a while now for a source I can understand, but I'm at the brink of giving up.

How can I integrate particle spin into my code? I can work with any kind of (pseudo)code as long as the variables are somewhat clear. Particles are rigid spheres with mass = volume.

Edit: Don't get hung up on what the simulation is trying to achieve. The task can be simplified to: two rigid spheres collide in space; calculate their motion and spin after the collision.
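For reference, this is the impulse-based model I have pieced together so far and would like sanity-checked (my notation; $e$ is the restitution, $\mu$ a friction coefficient, $R$ the radius, and $I = \frac{2}{5} m R^2$ the moment of inertia of a solid sphere). With $n$ the unit normal from sphere $a$ to sphere $b$, and contact offsets $r_a = R_a n$ and $r_b = -R_b n$, the relative velocity at the contact point is

$$u = (v_a + \omega_a \times r_a) - (v_b + \omega_b \times r_b).$$

For spheres $r_a \times n = 0$, so spin never affects the normal impulse:

$$j_n = \frac{(1 + e)\,(u \cdot n)}{1/m_a + 1/m_b}.$$

Friction acts against the tangential sliding direction $t = u_t / \lVert u_t \rVert$ with $u_t = u - (u \cdot n)\,n$, clamped Coulomb-style:

$$j_t = \min\!\left(\mu\, j_n,\ \frac{\lVert u_t \rVert}{1/m_a + 1/m_b + R_a^2/I_a + R_b^2/I_b}\right),$$

where with $I = \frac{2}{5} m R^2$ the second denominator reduces to $\frac{7}{2}(1/m_a + 1/m_b)$. The total impulse on $a$ is $J = -j_n n - j_t t$ (and $b$ receives $-J$), giving

$$v_a' = v_a + J/m_a, \qquad \omega_a' = \omega_a + I_a^{-1}(r_a \times J),$$
$$v_b' = v_b - J/m_b, \qquad \omega_b' = \omega_b - I_b^{-1}(r_b \times J).$$

As a sanity check, with $e = 0$, $\mu = 0$ and $m_b \to \infty$ this collapses to the velocity projection my collide() already does.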

How to set the hair_mesh to follow the animation?

How do I set the hair mesh to follow the animation? This is what I'm trying:

.cpp
Hair = CreateDefaultSubobject<USkeletalMeshComponent>("Hair");
Hair->SetupAttachment(GetMesh());
Hair->AttachToComponent(MyCharacter::GetMesh(), FAttachmentTransformRules::KeepWorldTransform, TEXT("head"));

.h
UPROPERTY(EditAnywhere, Category = "Components")
    class USkeletalMeshComponent* Hair;

RESULT

I have attached the hair mesh to the head socket, but the hair mesh is not following the animation. The hair mesh is called Hair_Mesh and has been attached in the editor, while the Hair skeletal mesh component is already declared with UPROPERTY(EditAnywhere, Category = "Components").

Is it a bad idea to store functions inside components in ECS?

Say I have three entities: Player, Spikes, and Zombie. All of them are just rectangles and they can collide with each other. All of them have the BoxCollision component.

So, the BoxCollision system would look something like this:

function detectCollisions () {
  // for each entity with box collision
    // check if they collide
      // then do something
}

The issue is that the sole purpose of the BoxCollision component is to detect collisions, and that's it. Where should I put the game rules, such as "if the Player collided with Spikes, diminish its health" or "if the Zombie collided with Spikes, instantly kill the Zombie"?

I came up with the idea that each Entity should have its onCollision function.

Programming languages such as Javascript and F# have high-order functions, so I can easily pass functions around. So when assembling my Player entity, I could do something like:

function onPlayerCollision (player) {
  return function (entity) {
    if (entity.tag === 'Zombie') {
      player.getComponent('Health').hp -= 1
    } else if (entity.tag === 'Spikes') {
      player.getComponent('Health').hp -= 5
    }
  }
}

const player = new Entity()
player.addComponent('Health', { hp: 100 })
player.addComponent('BoxCollision', { onCollision: onPlayerCollision(player) })
// notice I store a reference to a function here, so now the BoxCollision system
// will execute it, passing the entity the player has collided with

function detectCollisions () {
  // for each entity with box collision
    // check if they collide
      entity.onCollision(otherEntity)
}

onPlayerCollision is a curried/closure function that receives a player and then returns a new function that expects another Entity.

Are there any flaws with this? Is it okay for components to store references to functions? What are other ways of avoiding game rules in components? Events?
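To make the events idea concrete, here is the kind of decoupling I'm imagining, sketched in C# just because it's easy to read (all names are made up for illustration):

using System;

// Minimal stand-in types for the sketch.
public class Health { public int Hp = 100; }

public class Entity
{
    public string Tag;
    public Health Health = new Health();
}

// The collision system knows nothing about game rules; it only raises events.
public static class CollisionEvents
{
    public static event Action<Entity, Entity> Collided;
    public static void Raise(Entity a, Entity b) => Collided?.Invoke(a, b);
}

// Game rules live in their own system that subscribes to the event.
public class DamageRules
{
    public DamageRules()
    {
        CollisionEvents.Collided += (a, b) =>
        {
            if (a.Tag == "Player" && b.Tag == "Spikes") a.Health.Hp -= 5;
            if (a.Tag == "Zombie" && b.Tag == "Spikes") a.Health.Hp = 0;
        };
    }
}

The BoxCollision system would then call CollisionEvents.Raise(a, b) for every overlapping pair, and no component would hold a function reference at all.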

Thanks!

How do I use the Minecraft Coder Pack on Linux

I downloaded the Minecraft Coder Pack to mod the game and decompile it, but how do I use it on Linux? Some sources give specific directions, but they are not clear, and they involve executing the .bat files; my Ubuntu does not recognize .bat files. How do I get at the class files by renaming a Minecraft version jar to a zip and unzipping it, and how do I then decompile one of those class files using the Minecraft Coder Pack? I have been wondering what at least some of the Minecraft source code looks like.

In Unity/UNet: How do you properly spawn a `NetworkPlayer`?

In Unity/UNet: How do you properly spawn a NetworkPlayer? Right now, I'm doing it like this from inside a NetworkManager-derived class:

public override void OnServerAddPlayer(NetworkConnection conn, short playerControllerId) {
    NetworkPlayer newPlayer = Instantiate<NetworkPlayer>(m_NetworkPlayerPrefab);
    DontDestroyOnLoad(newPlayer);
    NetworkServer.AddPlayerForConnection(conn, newPlayer.gameObject, playerControllerId);
}

This code snippet works pretty well and both clients can communicate with each other. However, there are a few little issues that arise only on the host:

  1. In Unity's hierarchy-view on the host, there are only two NetworkPlayer instances. Shouldn't there be four NetworkPlayer instances on the host? Two client instances and two server instances? If so, do you have any ideas what could cause the missing NetworkPlayer instances?
  2. The two NetworkPlayer instances have both their isClient and isServer flags set to true, but only one of them has its isLocalPlayer flag set. I wonder if this behavior is intended? And if so, how do you distinguish between the client and the server instance of a NetworkPlayer?
  3. Two player behavior: If the remote client sends a [Command] that changes a [SyncVar] on the server, then on the host, the [SyncVar]-hook is called only on the NetworkPlayer instance that represents the remote NetworkPlayer. The [SyncVar]-hook is not called on the host's "isLocalPlayer-NetworkPlayer" instance. Shouldn't the [SyncVar]-hook be called on both NetworkPlayer instances?
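For what it's worth, this is how I've been inspecting the flags on the host (a throwaway diagnostic script for illustration, not my actual NetworkPlayer):

using UnityEngine;
using UnityEngine.Networking;

// Hypothetical probe component; attach to the player prefab.
public class NetworkPlayerProbe : NetworkBehaviour
{
    void Start()
    {
        Debug.Log("isClient=" + isClient + " isServer=" + isServer +
                  " isLocalPlayer=" + isLocalPlayer + " hasAuthority=" + hasAuthority);
    }
}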

Any advice is welcome. Thank you!

How to implement D&D 4e's line of sight algorithm?

D&D 4th Edition (the tabletop game) has combat on a 2D map with square tiles. A creature occupies an entire single tile.

The attacker has clear sight on the defender if lines can be drawn from one corner of the attacker's square to all four corners of the defender's square and none of these lines are blocked.

The rules are as follows:

To determine if a target has cover, choose a corner of your square and trace imaginary lines from that corner to every corner of the target's square. If one or two of those lines are blocked by an obstacle, the target has cover. (A line isn’t blocked if it runs along the edge of an obstacle’s or an enemy’s square.) If three or four of those lines are blocked but you have line of effect, the target has superior cover.

So, in the following situation:

Map of a D&D situation

  • A can fully see B, but C has superior cover from A (the unblocked line runs from the top-right corner of A to the top-right corner of C), and A cannot see D at all.
  • B can fully see A, C and D.

How can I implement this?

Over the years, I have tried several solutions: some forms of Bresenham's line, testing for walls pixel by pixel, giving some tolerance around corners, and even dividing the map into line segments and comparing rays from the attacker to these line segments using a line-intersection formula. But everything either wasn't sufficiently rules-accurate or was too computationally expensive.

Can this line-of-sight algorithm be implemented efficiently (enough so that hundreds of checks may be performed for maps of 100x100 tiles per second) and accurately, and if so, how?
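To make the question concrete, here is a sketch of the most promising direction I've found: treat each corner-to-corner line as a segment and test it against the interior of every blocking cell with a Liang-Barsky clip, shrinking the cell by an epsilon so that lines grazing an edge or corner don't count as blocked. The names and the epsilon treatment are my own choices; I'd love to know whether this is rules-accurate enough and whether something faster exists.

using System.Collections.Generic;

static class LineOfSight
{
    // Liang-Barsky clip of one slab; p/q follow the usual convention.
    static bool Clip(double p, double q, ref double tMin, ref double tMax)
    {
        if (p == 0) return q >= 0;               // segment parallel to this slab
        double t = q / p;
        if (p < 0) { if (t > tMax) return false; if (t > tMin) tMin = t; }
        else       { if (t < tMin) return false; if (t < tMax) tMax = t; }
        return true;
    }

    // True if the segment passes through the interior of the 1x1 cell at
    // (cx, cy). The cell is shrunk by eps so that a line running exactly
    // along an edge or through a corner does NOT count as blocked.
    public static bool Blocked(double x0, double y0, double x1, double y1, int cx, int cy)
    {
        const double eps = 1e-9;
        double tMin = 0, tMax = 1, dx = x1 - x0, dy = y1 - y0;
        return Clip(-dx, x0 - (cx + eps), ref tMin, ref tMax)
            && Clip( dx, (cx + 1 - eps) - x0, ref tMin, ref tMax)
            && Clip(-dy, y0 - (cy + eps), ref tMin, ref tMax)
            && Clip( dy, (cy + 1 - eps) - y0, ref tMin, ref tMax)
            && tMax > tMin;
    }

    // Fewest blocked lines over the attacker's four corners, since the rules
    // let the attacker choose the best corner.
    public static int BestCornerBlockedCount(int ax, int ay, int dx, int dy,
                                             ICollection<(int x, int y)> blockers)
    {
        int best = 4;
        for (int acx = ax; acx <= ax + 1; acx++)
        for (int acy = ay; acy <= ay + 1; acy++)
        {
            int blocked = 0;
            for (int dcx = dx; dcx <= dx + 1; dcx++)
            for (int dcy = dy; dcy <= dy + 1; dcy++)
                foreach (var b in blockers)
                    if (Blocked(acx, acy, dcx, dcy, b.x, b.y)) { blocked++; break; }
            if (blocked < best) best = blocked;
        }
        return best;
    }
}

With that, 0 blocked lines means clear sight, 1-2 means cover, and 3-4 means superior cover. For the 100x100 performance goal I would only test blockers whose cells intersect the segment's bounding box, rather than the whole map.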

Use Stage's draw() method to invoke Actor's draw method in Libgdx

Question: In the following snippet, I wish to draw different shapes such as rectangles, circles, triangles, etc. I will be creating a separate class file for each kind of object, as I did for the Rectangle class here.

I've been trying to invoke the draw method of the Rectangle object from the Stage object, but I am unable to do so. If I make a plain call to the Rectangle object's draw method, I can draw the object.

Can someone suggest how this can be done? I have also put several other questions in the code comments. Please have a look at them and help me figure out the concepts beneath them.

Disclaimer: I am new to game programming, started with Libgdx.

Following is the Scan class :

package com.mygdx.scan.game;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.glutils.ShapeRenderer;
import com.badlogic.gdx.graphics.glutils.ShapeRenderer.ShapeType;
import com.badlogic.gdx.scenes.scene2d.Stage;

public class Scan extends ApplicationAdapter {
private Stage stage;
public static ShapeRenderer shapeRenderer;

@Override
public void create () {
    stage = new Stage();
    shapeRenderer = new ShapeRenderer();
    Rectangle rect1 = new Rectangle(50, 50, 100, 100);
    stage.addActor(rect1);

}

@Override
public void render () {
    Gdx.gl.glClearColor(1, 1, 1, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    shapeRenderer.setProjectionMatrix(stage.getCamera().combined);
    shapeRenderer.begin(ShapeType.Filled);
    shapeRenderer.setColor(Color.BLACK);
    //new Rectangle(50, 50, 100, 100).draw();
    shapeRenderer.end();
    // This should call Rectangle's draw method. I want a ShapeType.Filled rectangle,
    // but I'm unable to invoke that method when I add the actor to the stage object and invoke its draw method.
    stage.draw(); 
}

public void dispose() {
    stage.dispose();
}
}

Following is the Rectangle Actor class :

package com.mygdx.scan.game;

import com.badlogic.gdx.scenes.scene2d.Actor;

public class Rectangle extends Actor {

float xcord, ycord, width, height;
public Rectangle(float x , float y , float width, float height) {
    this.xcord = x;
    this.ycord = y;
    this.width = width;
    this.height = height;
}

// This needs to be called in the Scan class. I just want the draw method to be invoked.
// Also, I wish to draw many such rectangles. What is the best practice to get hold of the ShapeRenderer object from the Scan class?
// Should I use one instance of ShapeRenderer in the Scan class, or should I create a ShapeRenderer object for each Rectangle object?
// I also plan to repeat the activity for other objects such as circles, so I would like to know the best practice here.
public void draw() {
    Scan.shapeRenderer.rect(this.xcord, this.ycord, this.width, this.height);
}

}

keyPressed is not working after adding ActionListener to JButton

I have a serious problem while trying to build a menu for my game. I've added two JButtons to a main JPanel and added an ActionListener to each of them. The main JPanel also contains the game JPanel, which has the keyPressed method inside KeyController.

That's how it looks -
Main ->
      JPanel ->
        JButton, JButton,
        JPanel which contains the game and the keyPressed function inside the KeyController class; this worked fine before I added the ActionListeners to the JButtons.

For some reason, after I added an ActionListener to each of the buttons, the game JPanel is not getting any keyPressed or keyReleased events.

Does anyone know the solution for my situation?
Thank you very much!

Main window -

  Scanner in = new Scanner(System.in);

    JFrame f = new JFrame("Square V.S Circles");
    f.setUndecorated(true);
    f.setResizable(false);
    f.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);


    f.add(new JPanelHandler());
    f.pack();

    f.setVisible(true);
    f.setLocationRelativeTo(null);

JPanelHandler(main JPanel) -

  super.setFocusable(true);


        JButton mybutton = new JButton("Quit");
        JButton sayhi = new JButton("Say hi");

        sayhi.addActionListener(new ActionListener() {
        @Override
        public void actionPerformed(ActionEvent e)
        {
            System.out.println("Hi");
        }
        });      

         mybutton.addActionListener(new ActionListener() {
        @Override
        public void actionPerformed(ActionEvent e)
        {
           System.exit(0);
        }
        });      
        add(mybutton);
        add(sayhi);
        add(new Board(2));

Board KeyController(The code inside is working so it's unnecessary to put it here) -

private class KeyController extends KeyAdapter {


    public KeyController()
    {
        ..Code
    }


    @Override
    public void keyPressed(KeyEvent e) {

    ...Code

    }

    @Override
    public void keyReleased(KeyEvent e){

     ...Code

    }


}

Is there any method to perform this action automatically or all at once? [closed]

I would like to know if there is a way to build a JS script for the following page: https://www.chunkbase.com/apps/seed-map#seed=999&platform=bedrock_1_21&dimension=overworld&x=1214&z=-353&zoom=0.82 so that with a single key combination, or just by pressing a button, all the generated coordinates can be selected and displayed on the screen. If possible, it should work with Tampermonkey.

Setting a meshcollider's sharedmesh to a mesh which has been generated directly on the GPU gives "Failed extracting collision mesh"

I've been attempting to modify this example project https://github.com/keijiro/ComputeMarchingCubes

I'm trying to repurpose it to build terrain. After the Update() method in Assets/NoiseField/NoiseFieldVisualizer.cs, I want to set a MeshCollider's sharedMesh to the mesh that's been generated.

All I've done is add a line after the mesh is set:

GetComponent<MeshCollider>().sharedMesh = GetComponent<MeshFilter>().sharedMesh;

Currently I get an error:

Failed extracting collision mesh because vertex at index 2817 contains a non-finite value (0.000000, -nan, 1.000000). Mesh asset path "" Mesh name ""

When I iterate over sharedMesh.vertices and log them to the console, I get either (0, 0, 0) or (-431602100.00, -431602100.00, -431602100.00) for each vertex value, presumably because the values haven't been sent back to the CPU?

I have mesh cleaning enabled for the MeshCollider.

Is it possible to generate a mesh collider with a GPU-only mesh? Preferably without transferring the points back to the CPU.
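For context, this is the fallback I'm trying to avoid: an asynchronous readback of the generated vertices so PhysX has CPU-side data to cook (a sketch; _vertexBuffer and _indices are hypothetical stand-ins for the project's actual buffers, and I'm assuming the buffer holds tightly packed float3 positions):

using UnityEngine;
using UnityEngine.Rendering;

public class ColliderReadback : MonoBehaviour
{
    // Hypothetical: a ComputeBuffer of float3 positions filled by the
    // compute shader that builds the terrain, plus CPU-side indices.
    ComputeBuffer _vertexBuffer;
    int[] _indices;

    void BakeCollider()
    {
        AsyncGPUReadback.Request(_vertexBuffer, request =>
        {
            if (request.hasError) return;

            Vector3[] verts = request.GetData<Vector3>().ToArray();

            // Non-finite vertices like the one in the error would still
            // need to be cleaned before cooking.
            var cpuMesh = new Mesh();
            cpuMesh.indexFormat = IndexFormat.UInt32;
            cpuMesh.vertices = verts;
            cpuMesh.SetIndices(_indices, MeshTopology.Triangles, 0);

            GetComponent<MeshCollider>().sharedMesh = cpuMesh;
        });
    }
}

That said, since MeshCollider cooking happens on the CPU, I suspect some transfer is unavoidable; the question is whether there's any way around it.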
