The documentation for Godot versions 3.5 and 4.3 mentions that hot reloading is possible:
3.5: https://docs.godotengine.org/en/3.5/getting_started/introduction/godot_design_philosophy.html
4.3: https://docs.godotengine.org/en/4.3/getting_started/introduction/godot_design_philosophy.html
Godot tries to provide its own tools to answer most common needs. It has a dedicated scripting workspace, an animation editor, a tilemap editor, a shader editor, a debugger, a profiler, the ability to hot-reload locally and on remote devices, etc.
The Godot editor runs on the game engine. It uses the engine's own UI system, it can hot-reload code and scenes when you test your projects, or run game code in the editor. This means you can use the same code and scenes for your games, or build plugins and extend the editor.
But how exactly do you perform it?
For example, in Flutter, if you type r or shift + r in the terminal after running $ flutter run, it performs a hot reload. How do you do something similar in Godot? Does simply saving a GDScript file trigger hot reload? (It doesn't seem to work that way for me...)
(More specifically, in Flutter, r is hot reload and shift + r is hot restart.)
I am editing GDScript using either VSCode or the editor included with Godot, and I am running the project using the play button (attached below) in Godot.
Since this is a large project, providing a minimal code example might be time-consuming or even impossible. Could you simply explain the steps for performing a hot reload? Is it possible to do this using only the inspector? Is it not possible to directly modify the GDScript file for this? Also, is the hot reload triggered automatically, or do I need to press a button to initiate it?
If I can understand the correct patterns and limitations for performing hot reloads accurately, I think I’ll be able to experiment on my own. (Or, I can give up on the patterns that don’t work.)
The storyline unfolds in the realm of Arkanor, a land where magic and technology coexist, but their balance teeters on the edge of collapse. The world's once-mighty enchanters, the Lumina Arcana, who maintained the harmony between magic and technology, have mysteriously vanished, leaving Arkanor in turmoil.
OpenAI has developed a method to detect when someone uses ChatGPT to write essays or assignments.
The method utilizes a watermarking system that is 99.9% effective at identifying AI-generated text.
However, the tool has not yet been rolled out due to internal concerns and mixed reactions within the company.
When OpenAI launched ChatGPT towards the end of 2022, educators expressed concerns that students would use the platform to cheat on assignments and tests. To prevent this, numerous companies have rolled out AI detection tools, but they haven’t been the best at producing reliable results.
OpenAI has now revealed that it has developed a method to detect when someone uses ChatGPT to write (via The Washington Post). The technology is said to be 99.9% effective and essentially uses a system capable of predicting what word or phrase (called “token”) would come next in a sentence. The AI-detection tool slightly alters the tokens, which then leaves a watermark. This watermark is undetectable to the human eye but can be spotted by the tool in question.
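OpenAI has not published the details of its system. As a rough, hypothetical illustration of how watermarking token choices can work in general (this is not OpenAI's actual scheme), one published approach hashes the preceding token to pick a "green list" of vocabulary words, nudges the model toward those words while generating, and later detects the watermark by checking whether far more than half of the tokens fall on their green lists:

function isGreen(prevToken, token) {
  // Toy hash standing in for a keyed pseudo-random function over the vocabulary.
  let h = 0;
  for (const c of prevToken + '|' + token) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h % 2 === 0; // roughly half of all tokens are "green" for a given context
}

function greenFraction(tokens) {
  // During generation the sampler would bias toward green tokens; at detection
  // time, a fraction well above 0.5 suggests the text was watermarked.
  let green = 0;
  for (let i = 1; i < tokens.length; i++) {
    if (isGreen(tokens[i - 1], tokens[i])) green++;
  }
  return green / (tokens.length - 1);
}

In a statistical scheme like this, edits dilute the signal only gradually, which fits the report that the watermark survives copying, pasting, and modest rewriting.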
Everything comes at a cost, and AI is no different. While ChatGPT and Gemini may be free to use, they require a staggering amount of computational power to operate. And if that wasn’t enough, Big Tech is currently engaged in an arms race to build bigger and better models like GPT-5. Critics argue that this growing demand for powerful — and energy-intensive — hardware will have a devastating impact on climate change. So just how much energy does AI like ChatGPT use and what does this electricity use mean from an environmental perspective? Let’s break it down.
ChatGPT energy consumption: How much electricity does AI need?
OpenAI has developed a system for "watermarking" the output that ChatGPT generates, reports The Wall Street Journal, but has chosen not to deploy it. Google has deployed such a system with Gemini.
OpenAI has a method to reliably detect when someone uses ChatGPT to write an essay or research paper.
Say I have three entities: Player, Spikes, and Zombie. All of them are just rectangles and they can collide with each other. All of them have the BoxCollision component.
So, the BoxCollision system would look something like this:
function detectCollisions () {
  // for each entity with box collision
  // check if they collide
  // then do something
}
The issue is, the sole purpose of the BoxCollision component is to detect collision, and that's it. Where should I put the game rules, such as "if the Player collided with Spikes, diminish its health" or "if the Zombie collided with Spikes, instantly kill the Zombie"?
I came up with the idea that each Entity should have its own onCollision function.
Programming languages such as Javascript and F# have higher-order functions, so I can easily pass functions around. So when assembling my Player entity, I could do something like:
function onPlayerCollision (player) {
  return function (entity) {
    if (entity.tag === 'Zombie') {
      player.getComponent('Health').hp -= 1
    } else if (entity.tag === 'Spikes') {
      player.getComponent('Health').hp -= 5
    }
  }
}
const player = new Entity()
player.addComponent('Health', { hp: 100 })
player.addComponent('BoxCollision', { onCollision: onPlayerCollision(player) })
// notice I store a reference to a function here, so now the BoxCollision component will execute this passing the entity the player has collided with
function detectCollisions () {
  // for each entity with box collision
  // check if they collide
  // if they do, call the stored handler, passing the other entity
  entity.getComponent('BoxCollision').onCollision(other)
}
onPlayerCollision is a curried/closure function that receives a player, and then returns a new function that takes another Entity.
Are there any flaws with this? Is it okay for components to store references to functions? What are other ways of avoiding game rules in components? Events?
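For comparison, this is roughly the event-based alternative I have in mind (a sketch; the collisionEvents queue, the tags, and the rule checks are just my own example names):

const collisionEvents = []

function detectCollisions (entities) {
  // for each pair of entities with box collision
  // check if they collide
  // if they do, only record the fact: no game rules in here
  // collisionEvents.push({ a: entityA, b: entityB })
}

function applyCollisionRules () {
  for (const { a, b } of collisionEvents) {
    if (a.tag === 'Player' && b.tag === 'Spikes') a.getComponent('Health').hp -= 5
    if (a.tag === 'Zombie' && b.tag === 'Spikes') a.getComponent('Health').hp = 0
  }
  collisionEvents.length = 0 // clear the queue each frame
}

This keeps BoxCollision purely about geometry, but I'm not sure whether a separate rules system like this is better than storing callbacks on the component.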
According to The Wall Street Journal, there's internal conflict at OpenAI over whether or not to release a watermarking tool that would allow people to test text to see whether it was generated by ChatGPT or not.
To deploy the tool, OpenAI would make tweaks to ChatGPT that would lead it to leave a trail in the text it generates that can be detected by a special tool. The watermark would be undetectable by human readers without the tool, and the company's internal testing has shown that it does not negatively affect the quality of outputs. The detector would be accurate 99.9 percent of the time. It's important to note that the watermark would be a pattern in the text itself, meaning it would be preserved if the user copies and pastes the text or even if they make modest edits to it.
Some OpenAI employees have campaigned for the tool's release, but others believe that would be the wrong move, citing a few specific problems.
A camel chills next to the Step Pyramid of Djoser in the Saqqara necropolis in Egypt, built around 2680 BCE. (credit: Charles J. Sharp/CC BY-SA 3.0)
It's long been a hotly debated open question regarding how the great pyramids of Egypt were built, given the sheer size and weight of the limestone blocks used for the construction. Numerous speculative (and controversial) hypotheses have been proposed, including the use of ramps, levers, cranes, winches, hoists, pivots, or any combination thereof. Now we can add the possible use of a hydraulic lift to those speculative scenarios. According to a new paper published in the journal PLoS ONE, ancient Egyptians during the Third Dynasty may have at least partly relied on hydraulics to build the Step Pyramid of Djoser.
"Many theories on pyramid construction suggest that pure human strength, possibly aided by basic mechanical devices like levers and ramps, was utilized," co-author Xavier Landreau, of Paleotechnic in Paris and Universite Grenoble Alpes, told Ars. "Our analysis led us to the utilization of water as a means of raising stones. We are skeptical that the largest pyramids were built using only known ramp and lever methods."
The Step Pyramid was built around 2680 BCE, part of a funerary complex for the Third Dynasty pharaoh Djoser. It's located in the Saqqara necropolis and was the first pyramid to be built, almost a "proto-pyramid" that originally stood some 205 feet high. (The Great Pyramid of Giza, by contrast, stood 481 feet high and was the tallest human-made structure for nearly 4,000 years.) Previous monuments were made of mud brick, but Djoser's Step Pyramid is made of stone (specifically limestone); it's widely thought that Djoser's vizier, Imhotep, designed and built the complex. The third century BCE historian Manetho once described Imhotep as the "inventor of building in stone." As such, the Djoser Pyramid influenced the construction of later, larger pyramids during the Fourth, Fifth, and Sixth Dynasties.
CT scans and other techniques allowed scientists to "virtually dissect" this 3,500-year-old "Screaming Woman" mummy. (credit: Sahar Saleem/CC BY)
There have been a handful of ancient Egyptian mummies discovered with their mouths wide open, as if mid-scream. This has puzzled archaeologists because Egyptian mummification typically involved bandaging the mandible to the skull to keep the mouth closed. Scientists have "virtually dissected" one such "Screaming Woman" mummy and concluded that the wide-open mouth is not the result of poor mummification, according to a new paper published in the journal Frontiers in Medicine. There was no clear cause of death, but the authors suggest the mummy's expression could indicate she died in excruciating pain.
"The Screaming Woman is a true ‘time capsule’ of the way that she died and was mummified,” said co-author Sahar Saleem, a professor of radiology at Cairo University in Egypt. "Here we show that she was embalmed with costly, imported embalming material. This, and the mummy's well-preserved appearance, contradicts the traditional belief that a failure to remove her inner organs implied poor mummification."
Saleem has long been involved in paleoradiology and archaeometry of "screaming" Egyptian mummies. For instance, she co-authored a 2020 paper applying similar techniques to the study of another "Screaming Woman" mummy, dubbed Unknown Woman A by the then-head of the Egyptian Antiquities Service, Gaston Maspero, and one of two such mummies discovered in the Royal Cache at Deir el Bahari near Luxor in 1881. This was where 21st and 22nd Dynasty priests would hide the remains of royal members from earlier dynasties to thwart grave robbers.
I would like to know if there is a way to write a JavaScript userscript for the following page:
https://www.chunkbase.com/apps/seed-map#seed=999&platform=bedrock_1_21&dimension=overworld&x=1214&z=-353&zoom=0.82
so that, with a single key combination or just by pressing a button, all the generated coordinates can be selected and displayed on the screen. If possible, it should work with Tampermonkey.
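To be clear about what I'm imagining, here is a minimal Tampermonkey skeleton (the .coords selector and the Ctrl+Shift+C combination are placeholders I made up; I don't know the page's actual markup):

// ==UserScript==
// @name         Chunkbase coordinate grabber (sketch)
// @match        https://www.chunkbase.com/apps/seed-map*
// @grant        none
// ==/UserScript==
(function () {
  'use strict';
  document.addEventListener('keydown', (e) => {
    if (!(e.ctrlKey && e.shiftKey && e.key === 'C')) return; // trigger combination
    // '.coords' is a placeholder selector; the real one would need to be found
    // by inspecting the seed map page.
    const nodes = document.querySelectorAll('.coords');
    const text = [...nodes].map(n => n.textContent.trim()).join('\n');
    alert(text || 'No coordinates found');
  });
})();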
I've been beginning to work with different kinds of splines (e.g. Bezier, Hermite, B-splines) in some of my work with computer graphics and I am trying to get a grasp of how they work (mathematically and programmatically). I know that generally (at least for cubic splines), you will have 4 control points that influence the position of points along an interpolated line. For example, with Bezier curves (as far as I understand), there can be two end points where the line will begin and terminate and two control points that influence the direction of the line.
I've been programming Bezier, Hermite, and B-Splines (in javascript on an html5 canvas) and what I don't understand is how to choose these 4 points in order to see a specific curve rendered on the screen. So far, I have only been able to render at least part of a curve by randomly playing around with different numbers for each point.
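For concreteness, this is a simplified version of the kind of cubic Bezier evaluation I'm working with; the four points and the 'canvas' element id are arbitrary values I picked:

// Evaluate the cubic Bezier B(t) = (1-t)^3*P0 + 3(1-t)^2*t*P1 + 3(1-t)*t^2*P2 + t^3*P3
function cubicBezier(p0, p1, p2, p3, t) {
  const u = 1 - t
  return {
    x: u * u * u * p0.x + 3 * u * u * t * p1.x + 3 * u * t * t * p2.x + t * t * t * p3.x,
    y: u * u * u * p0.y + 3 * u * u * t * p1.y + 3 * u * t * t * p2.y + t * t * t * p3.y
  }
}

const ctx = document.getElementById('canvas').getContext('2d')
const P0 = { x: 50, y: 250 }   // start endpoint
const P1 = { x: 120, y: 50 }   // first control point (pulls the curve toward it)
const P2 = { x: 280, y: 50 }   // second control point
const P3 = { x: 350, y: 250 }  // end endpoint
ctx.beginPath()
ctx.moveTo(P0.x, P0.y)
for (let t = 0.01; t <= 1.0001; t += 0.01) {
  const p = cubicBezier(P0, P1, P2, P3, t)
  ctx.lineTo(p.x, p.y)
}
ctx.stroke()

Moving P1 and P2 around changes the shape, but I still can't predict which points will give me a particular curve.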
This is one of the many questions I have about the inner workings of splines, so could someone provide an overview of how they work and how they can be programmed (most of the examples online are more theoretical and rather unspecific)?
Mika’s beloved tale comes to Xbox for the first time, with updated graphics and a new photo mode. Explore Kaltenbach, bond with Windstorm, and uncover hidden secrets.
Embark on a journey like no other as Windstorm: Start of a Great Friendship – Remastered gallops onto Xbox for the first time! Dive into the captivating world of Windstorm and experience the heartwarming tale of Mika, a young girl with a remarkable gift for understanding horses. As Mika forms an unbreakable bond with the magnificent black stallion, Windstorm, players are invited to join them on an unforgettable adventure.
Based on the beloved “Windstorm” movies and bestselling books, this Xbox release promises an immersive experience like never before. With fully updated graphics and a brand-new photo mode, players can explore the breathtaking landscapes of the Kaltenbach estate in stunning detail.
Engage in exhilarating riding tasks alongside Windstorm, as you navigate the picturesque Alpine area and uncover hidden treasures scattered throughout the terrain. Feel the thrill of horseback riding as you interact naturally with Windstorm, learning to communicate and understand each other’s movements with ease.
But the journey doesn’t end there. Care for Windstorm like a true friend, tending to his needs with love and dedication. From grooming to training, ensure Windstorm is in peak condition as you forge an unbreakable bond that will withstand any challenge.
Experience the world of Windstorm like never before with this Unreal Engine 5-powered remaster that boasts updated graphics, enhanced character details, and new animations for all creatures, including Windstorm himself. Explore every corner of the game world to discover new collectibles and hidden surprises, all while capturing stunning moments with the new photo mode.
Windstorm: Start of a Great Friendship – Remastered on Xbox promises to deliver an unforgettable gaming experience. So, saddle up, embrace the adventure, and write legends of your own with every gallop.
Windstorm: Start of a Great Friendship – Remastered
Discover the breathtaking world of Windstorm and experience the fascinating story of Mika, a young girl known as a horse whisperer, as she befriends the majestic black stallion called Windstorm.
Based on the successful Windstorm movies and best-selling books, experience exciting riding tasks together with Windstorm and freely explore the breathtaking surroundings of the Kaltenbach estate. Ride alongside other horses, look for hidden objects and try to become the racing champion. Numerous secrets and hidden locations are waiting to be discovered!
After a long day of adventuring across the beautiful hills of Kaltenbach, take care of your animal friend with all of your love! Grooming, scraping out hooves and much more is necessary to keep your newly made friend in good shape. Windstorm is happy when you clean his stable or play around with him in the paddock. In return, he will always be by your side during your adventures and you will push each other to become the best you can be!
Become an unbeatable team and form an unforgettable bond with Windstorm.
• Ride Windstorm and freely explore the gorgeous Alpine area
• Learn to interact naturally with Windstorm and enjoy the freedom of horseback riding
• Carefully groom Windstorm and keep an eye on his health, training and happiness
Explore the world of Windstorm – completely Remastered!
• Updated graphics, more details to the characters you know and love
• All animals, including Windstorm, have been updated with new animations
• Updated nature scenes with new trees and plants, lighting and weather effects
• Explore every corner of the game world to find new collectibles
• A new photo mode to take beautiful pictures. Edit and add filters to make your photos unique!
It’s been announced that Reddit is going to be used to train OpenAI’s ChatGPT model on current topics (and probably more closely resemble human interactions.)
Redditors agreed to it in the terms of service.
When Your Content is created with or submitted to the Services, you grant us a worldwide, royalty-free, perpetual, irrevocable, non-exclusive, transferable, and sublicensable license to use, copy, modify, adapt, prepare derivative works of, distribute, store, perform, and display Your Content and any name, username, voice, or likeness provided in connection with Your Content in all media formats and channels now known or later developed anywhere in the world. This license includes the right for us to make Your Content available for syndication, broadcast, distribution, or publication by other companies, organizations, or individuals who partner with Reddit. You also agree that we may remove metadata associated with Your Content, and you irrevocably waive any claims and assertions of moral rights or attribution with respect to Your Content.
In other words, if you’re not paying for the product, you are the product.
I suspect using the voting combined with the commentary is going to help reveal what is a useful comment and what is not, but I can’t help thinking that ChatGPT is going to start making some pretty snarky responses on current events if it’s trained on the groups I’ve looked at.
I suspect were I a regular contributor to Reddit I’d be annoyed that a chatbot is being trained to comment like me as I thought I was only being used for advertising purposes and not training Skynet to replace me.
It appears the main focus is on more recent content rather than resurrecting deceased redditors as AI ghouls to comment on the state of the post-IPO reddit, but everything Reddit now feeds the machine. Your work for your friends is being sold as a commodity. Fun times.
You've got to love Steam Next Fest. The video team has already put together a list of its must-play Steam Next Fest demos, but there's just so many new and exciting demos to try that we couldn't fit them all into one listicle! Well, OK, I guess we could have done, but that would have been one very long list video indeed...
On the video player above (or over on our YouTube channel if you'd prefer), you'll be able to watch today's livestream, where I took a look at the upcoming Steam Next Fest demo for Conscript. Published by Team17, Conscript is a survival horror experience set in the trenches of the Battle of Verdun. While this isn't the first time we've seen a horror game set during the First World War, Conscript still feels rather unique, even though its developer Jordan Mochi admits that he has drawn a lot of inspiration from classic horror games like Resident Evil and Silent Hill.
While I love a good retro-inspired horror game as much as the next person, one of the first things I noticed when I played the demo was how slow and clunky the combat was. This is certainly in keeping with games like the first Resident Evil, but modern gamers may be put off by what appears to be a very sluggish and unforgiving control scheme. In Conscript, you can only shoot, reload, and melee attack whilst standing still, and I found this very frustrating during the opening 30 minutes of the demo, when I had to single-handedly hold off a German trench invasion with only a rifle and a shovel. This mainly involved kiting enemies around the trenches until I could get in the right position to shoot at them or bonk them on the head, something that ended up feeling a bit like being chased around a Pac-Man maze by a bunch of Stahlhelm-wearing ghosts.
Apple has officially announced a partnership with OpenAI, the startup behind ChatGPT.
Later this year, ChatGPT will occasionally chime in to answer creative and complex questions when you invoke Siri.
Siri will ask for your consent before sharing individual prompts with ChatGPT.
After months of anticipation and leaks, Apple has finally announced that it’s teaming up with AI startup OpenAI. The partnership is set to bring ChatGPT-esque smarts to Siri on iPhone, iPad, and Mac. Notably, this ChatGPT integration was only one of several new AI features launched under the banner of Apple Intelligence at the company’s WWDC event today.
When iOS 18 launches later in 2024, you’ll be able to converse with Siri via natural language prompts similar to Google’s Gemini chatbot on Android. This marks a major leap forward for Siri, transforming it from a rigidly structured assistant into a conversational AI chatbot. However, Siri will not rely on OpenAI’s models for most of its responses. Instead, Apple says that it will only pass on select questions to ChatGPT.
Steam Next Fest, which kicked off in 2021 as the successor to the Steam Game Festival, remains one of the best ways to see what good games are on the horizon. The event, which runs several times a year, acts as a hub for in-development titles to offer demos. We’ve been through the latest batch to highlight some of the…
The MINISFORUM AtomMan X7 Pt is a compact desktop computer with an AMD Ryzen 9 8945HS processor, support for up to 64GB of dual-channel DDR5-5600 memory, two M.2 2280 slots for PCIe 4.0 SSDs and I/O features that include dual 2.5 GbE LAN ports, an OCuLink port, and two USB4 ports. It’s also a water-cooled computer, which […]
On Monday, Apple debuted "Apple Intelligence," a new suite of free AI-powered features for iOS 18, iPadOS 18, and macOS Sequoia that includes creating email summaries, generating images and emoji, and allowing Siri to take actions on your behalf. These features are achieved through a combination of on-device and cloud processing, with a strong emphasis on privacy. Apple says that Apple Intelligence features will be widely available later this year and will be available as a beta test for developers this summer.
The announcements came during a livestream WWDC keynote and a simultaneous event attended by the press on Apple's campus in Cupertino, California. In an introduction, Apple CEO Tim Cook said the company has been using machine learning for years, but the introduction of large language models (LLMs) presents new opportunities to elevate the capabilities of Apple products. He emphasized the need for both personalization and privacy in Apple's approach.
At last year's WWDC, Apple avoided using the term "AI" completely, instead preferring terms like "machine learning" as Apple's way of avoiding buzzy hype while integrating applications of AI into apps in useful ways. This year, Apple figured out a new way to largely avoid the abbreviation "AI" by coining "Apple Intelligence," a catchall branding term that refers to a broad group of machine learning, LLM, and image generation technologies. By our count, the term "AI" was used sparingly in the keynote—most notably near the end of the presentation when Apple executive Craig Federighi said, "It's AI for the rest of us."
ct.js Game Engine Gets New Catnip Visual Scripting Language
The ct.js game engine we first covered back in 2020 and then revisited in 2023 has advanced a great deal since then. ct.js is a free and open source 2D game engine available for Windows, Mac and Linux. Since we last looked at ct.js they have released version 4.0 with several […]
Apple’s own large language models (LLMs) reportedly aren’t capable enough to replicate ChatGPT, which has pushed it toward third-party partnerships.
The company may have internally tested a ChatGPT-powered Siri, and iOS 18 users could get their hands on it later this year.
Microsoft could be worried about the Apple-OpenAI deal, as it would have to accommodate the increasing server demand and compete against Apple’s features.
In under two weeks, Apple will finally reveal iOS 18 and its rumored AI additions at WWDC24. Given that the iPhone maker’s own AI efforts may still be lacking, it has reportedly resorted to third-party partnerships to power some of these smart features. As a result, OpenAI’s ChatGPT could be fueling Siri and other AI functionalities on iOS 18, and Microsoft is worried about it.
According to a report by The Information, Apple has internally tested a version of Siri that relies on ChatGPT’s smarts. The Cupertino firm is reportedly not ready to offer its own chatbot yet, pushing it to seek third-party alternatives for the time being. Meanwhile, it will likely use its own LLMs to power the less demanding iOS 18 features, such as on-device summarization.
It may not matter, but I'm following How to make a Video Game - Godot Beginner Tutorial. I want to implement the dying animation instead of removing CollisionShape2D from the body (parameter passed to Area2D's _on_body_entered signal).
The player:
The player animations:
The tutorial's code that removes the CollisionShape2D (so the player falls off the screen before restarting) works. Here is my attempt to play the dying animation instead:
func _on_body_entered(body):
    print("You die")
    var player_anim = body.get_node("AnimatedSprite2D") # tried a single line as well
    Engine.time_scale = 0.5
    player_anim.play("dying")
    timer.start()
I get no errors, it just resets after the designated amount of time. Is it possible to trigger player dying animation from Area2D script?
Large language models, the AI systems that power chatbots like ChatGPT, are getting better and better—but they’re also getting bigger and bigger, demanding more energy and computational power. For LLMs that are cheap, fast, and environmentally friendly, they’ll need to shrink, ideally small enough to run directly on devices like cellphones. Researchers are finding ways to do just that by drastically rounding off the many high-precision numbers that store their memories to equal just 1 or -1.
LLMs, like all neural networks, are trained by altering the strengths of connections between their artificial neurons. These strengths are stored as mathematical parameters. Researchers have long compressed networks by reducing the precision of these parameters—a process called quantization—so that instead of taking up 16 bits each, they might take up 8 or 4. Now researchers are pushing the envelope to a single bit.
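As a hypothetical sketch of the idea (not the specific algorithm from any of the papers discussed below), binarizing a block of weights can be as simple as keeping only each weight's sign plus one shared scale factor, so that scale times sign roughly reconstructs the original value:

// Illustrative 1-bit quantization: each weight becomes -1 or +1, plus one
// shared full-precision scale for the whole vector.
function binarize(weights) {
  const scale = weights.reduce((sum, w) => sum + Math.abs(w), 0) / weights.length;
  const bits = weights.map(w => (w >= 0 ? 1 : -1));
  return { scale, bits }; // reconstructed weight ≈ scale * bit
}

console.log(binarize([0.42, -0.13, 0.08, -0.91]));
// { scale: 0.385, bits: [ 1, -1, 1, -1 ] }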
How to Make a 1-bit LLM
There are two general approaches. One approach, called post-training quantization (PTQ), is to quantize the parameters of a full-precision network. The other approach, quantization-aware training (QAT), is to train a network from scratch to have low-precision parameters. So far, PTQ has been more popular with researchers.
In February, a team including Haotong Qin at ETH Zurich, Xianglong Liu at Beihang University, and Wei Huang at the University of Hong Kong introduced a PTQ method called BiLLM. It approximates most parameters in a network using 1 bit, but represents a few salient weights—those most influential to performance—using 2 bits. In one test, the team binarized a version of Meta’s LLaMa LLM that has 13 billion parameters.
“One-bit LLMs open new doors for designing custom hardware and systems specifically optimized for 1-bit LLMs.” —Furu Wei, Microsoft Research Asia
To score performance, the researchers used a metric called perplexity, which is basically a measure of how surprised the trained model was by each ensuing piece of text. For one dataset, the original model had a perplexity of around 5, and the BiLLM version scored around 15, much better than the closest binarization competitor, which scored around 37 (for perplexity, lower numbers are better). That said, the BiLLM model required only about a tenth of the memory capacity of the original.
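For readers unfamiliar with the metric, perplexity is the exponential of the average negative log-probability a model assigns to each token that actually occurred, as in this small illustration (the probabilities are made up):

function perplexity(tokenProbs) {
  // tokenProbs[i] is the probability the model gave to the i-th actual token.
  const avgNll = tokenProbs.reduce((sum, p) => sum - Math.log(p), 0) / tokenProbs.length;
  return Math.exp(avgNll);
}

console.log(perplexity([0.25, 0.1, 0.5, 0.05])); // ≈ 6.3; lower means less surprised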
PTQ has several advantages over QAT, says Wanxiang Che, a computer scientist at Harbin Institute of Technology, in China. It doesn’t require collecting training data, it doesn’t require training a model from scratch, and the training process is more stable. QAT, on the other hand, has the potential to make models more accurate, since quantization is built into the model from the beginning.
1-bit LLMs Find Success Against Their Larger Cousins
Last year, a team led by Furu Wei and Shuming Ma, at Microsoft Research Asia, in Beijing, created BitNet, the first 1-bit QAT method for LLMs. After fiddling with the rate at which the network adjusts its parameters, in order to stabilize training, they created LLMs that performed better than those created using PTQ methods. They were still not as good as full-precision networks, but roughly 10 times as energy efficient.
In February, Wei’s team announced BitNet 1.58b, in which parameters can equal -1, 0, or 1, which means they take up roughly 1.58 bits of memory per parameter. A BitNet model with 3 billion parameters performed just as well on various language tasks as a full-precision LLaMA model with the same number of parameters and amount of training, but it was 2.71 times as fast, used 72 percent less GPU memory, and used 94 percent less GPU energy. Wei called this an “aha moment.” Further, the researchers found that as they trained larger models, efficiency advantages improved.
A BitNet model with 3 billion parameters performed just as well on various language tasks as a full-precision LLaMA model.
This year, a team led by Che, of Harbin Institute of Technology, released a preprint on another LLM binarization method, called OneBit. OneBit combines elements of both PTQ and QAT. It uses a full-precision pretrained LLM to generate data for training a quantized version. The team’s 13-billion-parameter model achieved a perplexity score of around 9 on one dataset, versus 5 for a LLaMA model with 13 billion parameters. Meanwhile, OneBit occupied only 10 percent as much memory. On customized chips, it could presumably run much faster.
Wei, of Microsoft, says quantized models have multiple advantages. They can fit on smaller chips, they require less data transfer between memory and processors, and they allow for faster processing. Current hardware can’t take full advantage of these models, though. LLMs often run on GPUs like those made by Nvidia, which represent weights using higher precision and spend most of their energy multiplying them. New hardware could natively represent each parameter as a -1 or 1 (or 0), and then simply add and subtract values and avoid multiplication. “One-bit LLMs open new doors for designing custom hardware and systems specifically optimized for 1-bit LLMs,” Wei says.
“They should grow up together,” Huang, of the University of Hong Kong, says of 1-bit models and processors. “But it’s a long way to develop new hardware.”
On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature by authoring a follow-up blog post titled, "AI Overviews: About last week." In the post, attributed to Google VP Liz Reid, head of Google Search, the firm formally acknowledged issues with the feature and outlined steps taken to improve a system that appears flawed by design, even if it doesn't realize it is admitting it.
To recap, the AI Overview feature—which the company showed off at Google I/O a few weeks ago—aims to provide search users with summarized answers to questions by using an AI model integrated with Google's web ranking systems. Right now, it's an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.
While Google claims this approach is "highly effective" and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.
Apple and OpenAI have successfully made a deal to include OpenAI's generative AI technology in Apple's software, according to The Information, which cites a source who has spoken to OpenAI CEO Sam Altman about the deal.
It was previously reported by Bloomberg that the deal was in the works. The news appeared in a longer article about Altman and his growing influence within the company.
"Now, [Altman] has fulfilled a longtime goal by striking a deal with Apple to use OpenAI’s conversational artificial intelligence in its products, which could be worth billions of dollars to the startup if it goes well," according to The Information's source.
On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett-Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia's proprietary NVLink interconnect technology, which links together multiple servers that power today's AI applications like ChatGPT.
The beating heart of AI these days lies in GPUs, which can perform massive numbers of matrix multiplications—necessary for running neural network architectures—in parallel. But one GPU often isn't enough for complex AI systems. NVLink can connect multiple AI accelerator chips within a server or across multiple servers. These interconnects enable faster data transfer and communication between the accelerators, allowing them to work together more efficiently on complex tasks like training large AI models.
This linkage is a key part of any modern AI data center system, and whoever controls the link standard can effectively dictate which hardware the tech companies will use. Along those lines, the UALink group seeks to establish an open standard that allows multiple companies to contribute and develop AI hardware advancements instead of being locked into Nvidia's proprietary ecosystem. This approach is similar to other open standards, such as Compute Express Link (CXL)—created by Intel in 2019—which provides high-speed, high-capacity connections between CPUs and devices or memory in data centers.
A producer on Kerbal Space Program 2 has confirmed that those working on the space flight sim are being laid off en masse. We already knew that the developers at Intercept Games would be losing their jobs thanks to a closure announcement from Washington State. Until remarks from Strauss Zelnick, the CEO of Take-Two Interactive, muddied the waters. Zelnick refused to acknowledge that the studio was being closed when asked by a reporter, even going so far as to claim the opposite. "We didn't shutter those studios," he told IGN. But it seems clear from one producer's testimony that Zelnick's remarks are inaccurate.
It seems like every time Konami shows off the upcoming Silent Hill 2 remake, something doesn’t sit right with fans. It started with the game’s announcement when it was revealed Layers of Fear and The Medium developer Bloober Team would be helming the reimagining of the seminal survival horror title, which did not…
Lorelei and the Laser Eyes, the incredible new release from developer Simogo, is not a horror game. It’s a cerebral puzzle game that tests your in-game and real-world knowledge as you progress through a series of complex challenges. It is, however, full of eerie vibes that could convince you something is waiting to…
Reddit and OpenAI have announced a new partnership.
OpenAI will gain access to Reddit’s vast and diverse conversational data to train its language models.
Reddit will get OpenAI as an advertising partner, along with new AI-powered features for its platform.
In what seems like a significant move for the future of artificial intelligence and the online community in general, Reddit and OpenAI have announced a new partnership aimed at enhancing user experiences on both platforms.
Generative AI models, such as OpenAI’s ChatGPT, rely heavily on real-world data and conversations to learn and refine their language generation capabilities. Reddit, with its millions of active users engaging in discussions on virtually every topic imaginable, is a treasure trove of authentic, up-to-date human interaction. This makes it an ideal resource for OpenAI to train its AI models, potentially leading to more nuanced, contextually aware, and relevant interactions with users.
In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI’s chief scientist and one of the company’s co-founders, was named as the co-lead of this new team. OpenAI said the team would receive 20 percent of its computing power.
Now OpenAI’s “superalignment team” is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday’s news that Sutskever was leaving the company, and the resignation of the team’s other co-lead. The group’s work will be absorbed into OpenAI’s other research efforts.
Stuff posted on Reddit is getting incorporated into ChatGPT, Reddit and OpenAI announced on Thursday. The new partnership grants OpenAI access to Reddit’s Data API, giving the generative AI firm real-time access to Reddit posts.
Reddit content will be incorporated into ChatGPT "and new products," Reddit's blog post said. The social media firm claims the partnership will "enable OpenAI’s AI tools to better understand and showcase Reddit content, especially on recent topics." OpenAI will also start advertising on Reddit.
The deal is similar to one that Reddit struck with Google in February that allows the tech giant to make "new ways to display Reddit content" and provide "more efficient ways to train models," Reddit said at the time. Neither Reddit nor OpenAI disclosed the financial terms of their partnership, but Reddit's partnership with Google was reportedly worth $60 million.
As you can see, the tiled map starts at a height of 320px because the map originally has this height, and if I change the game height to 320px everything works fine. My question is: if I want to make the tilemap responsive to the screen's innerHeight and innerWidth, how can I do this so that the tiled map starts at the bottom of the screen instead of at 320px?
You can see how the tiled map layer starts in the middle of the screen. Is there anything I can do to make it start at the bottom of the screen?
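One way to approach this, sketched below and assuming a Phaser 3 setup (the framework isn't named in the post; "map" and "layer" are hypothetical names standing in for your own objects): instead of shrinking the game to the map's height, offset the tile layer so its bottom row sits at the bottom of the current game size, and re-apply the offset whenever the canvas is resized.

// Minimal sketch, assuming Phaser 3 and a layer created from a Tiled map.
function alignLayerToBottom(scene, map, layer) {
  // Place the layer so the map's last row ends at the bottom of the screen.
  layer.setPosition(0, scene.scale.height - map.heightInPixels);
}

// In the scene's create():
//   const map = this.make.tilemap({ key: 'level' });
//   const tileset = map.addTilesetImage('tiles');
//   const layer = map.createLayer('Ground', tileset, 0, 0); // Phaser 3.50+
//   alignLayerToBottom(this, map, layer);
//   // Keep it responsive when the game size changes:
//   this.scale.on('resize', () => alignLayerToBottom(this, map, layer));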
OpenAI will announce updates to ChatGPT and GPT 4 at its “Spring Updates” event today.
The livestream for the announcements is set to start at 10 AM PT (1 PM ET).
The company is reportedly working on a ChatGPT-powered search engine and a new multimodal assistant.
As expected, OpenAI is all set to make some important ChatGPT and GPT 4 announcements later today. The company has scheduled a “Spring Updates” livestream on its own website for 10 AM PT (1 PM ET).
According to a trusted industry analyst, Apple and OpenAI could be finalizing a deal to bring ChatGPT features to iOS.
It is unclear if Apple’s AI features based on its own LLM would debut on iOS alongside OpenAI features.
Meanwhile, a separate negotiation with Google to bring Gemini features to iOS is still ongoing.
Over the past six months, Google has been hitting Gemini hard. It seems Gemini is now in everything Google does, including the Android operating system, the most popular mobile OS in the world. Meanwhile, Apple hasn’t done that much at all with generative AI and large language models (LLM). All signs point to that changing very soon — just not through Apple itself.
Over the past few months, we’ve learned that Apple has been in discussions with both Google and OpenAI (which owns ChatGPT) about using their respective LLMs to power future features coming to iOS. Now, according to industry analyst Mark Gurman, Apple’s deal with OpenAI might be close to finalized.
OpenAI is now expected to launch its ChatGPT-powered search engine on May 13.
If the information is accurate, the new service might end up eclipsing Google’s big I/O 2024 announcements on May 14.
OpenAI has been brewing up a new Google Search competitor, as per reports. The ChatGPT-powered search engine was previously expected to launch on May 9, but it looks like the company now wants to one-up Google’s all-important announcements next week.
OpenAI is working on a Context Connector feature for ChatGPT, with initial support for Google Drive and Microsoft OneDrive.
This would make it easy for ChatGPT Plus users to feed files directly to ChatGPT from these online service solutions without needing to download the file and reupload it.
ChatGPT is an amazing tool once you learn how to use it properly. If you are a ChatGPT Plus subscriber, you can supercharge ChatGPT by uploading your files and asking the AI assistant questions based on the data in your file. For folks who have migrated most of their lives to online storage solutions, OpenAI appears to be working on a Context Connector feature for ChatGPT, which would connect with Google Drive and Microsoft OneDrive and make it easier for you to feed online files to the AI assistant.
X user legit_rumors has shared an early look at the upcoming Context Connector feature. This feature connects Google Drive, OneDrive Personal, and OneDrive Business to ChatGPT, making it super convenient to feed any file stored in these online storage services.
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
Stack Overflow bans users en masse for rebelling against OpenAI partnership (Tom’s Hardware)
I am developing a VR application for Meta Quest using Unity. In this application, the user interacts using a controller, and the user's hand is visually represented in the virtual space.
Objective:
I want to cast a ray into the virtual space corresponding to the point touched by the hand on an object (hereafter referred to as the "panel") that has a RenderTexture showing part of the virtual space. For debugging purposes, a cube with scale (0.1, 0.1, 0.1) is displayed at the hit position for 0.1 seconds. Eventually, I plan to activate a particle system on the cube existing at the hit position.
Additionally, the camera that renders to the RenderTexture is attached to other players and is always in motion.
Current Issue:
Currently, touching the panel does not trigger any actions. With a previous method (the second link, from a question I previously asked), a cube is indeed created on touch, but the coordinates where the ray is cast are significantly misaligned.
Code:
Here is the main code attached to the panel. Any suggestions or modifications to correct the issue would be greatly appreciated.
using UnityEngine;
using Photon.Pun;
using UnityEngine.UI;

public class PanelManager : MonoBehaviourPun
{
    public Camera displayRenderCamera;      // Camera that renders to the RenderTexture
    private RawImage displayGameObject;     // RawImage displaying the RenderTexture
    private Vector3? colliderPoint = null;  // Intersection point with the collider

    void Start()
    {
        InitializeCameraAndPanel();
    }

    void Update()
    {
        bool gripHeld = OVRInput.Get(OVRInput.Button.PrimaryHandTrigger, OVRInput.Controller.RTouch);
        bool triggerNotPressed = !OVRInput.Get(OVRInput.Button.PrimaryIndexTrigger, OVRInput.Controller.RTouch);

        // Holding grip while not pressing the trigger (pointing gesture)
        if (gripHeld && triggerNotPressed && colliderPoint != null)
        {
            InteractWithRenderTexture();
        }
        InitializeCameraAndPanel();
    }

    private void InitializeCameraAndPanel()
    {
        PhotonView[] allPhotonViews = FindObjectsOfType<PhotonView>();
        foreach (PhotonView view in allPhotonViews)
        {
            if (view.Owner != null)
            {
                if (view.Owner.ActorNumber != PhotonNetwork.LocalPlayer.ActorNumber)
                {
                    // Remote player: use their head camera as the RenderTexture source.
                    GameObject camera = view.gameObject.transform.Find("Head/ViewCamera")?.gameObject;
                    if (camera != null)
                    {
                        displayRenderCamera = camera.GetComponent<Camera>();
                        Debug.Log(displayRenderCamera);
                    }
                }
                else if (view.Owner.ActorNumber == PhotonNetwork.LocalPlayer.ActorNumber)
                {
                    // Local player: find the RawImage panel that shows the RenderTexture.
                    GameObject panel = view.gameObject.transform.Find("Panel/Panel")?.gameObject;
                    if (panel != null)
                    {
                        displayGameObject = panel.GetComponent<RawImage>();
                    }
                }
            }
        }
    }

    private void InteractWithRenderTexture()
    {
        if (colliderPoint == null) return;
        Vector3 worldSpaceHitPoint = colliderPoint.Value;

        // Convert the world-space hit point into the RawImage's local space,
        // then into UV coordinates of the displayed RenderTexture.
        Vector2 localHitPoint = displayGameObject.rectTransform.InverseTransformPoint(worldSpaceHitPoint);
        var rect = displayGameObject.rectTransform.rect;
        Vector2 textureCoord = localHitPoint - rect.min;
        textureCoord.x *= displayGameObject.uvRect.width / rect.width;
        textureCoord.y *= displayGameObject.uvRect.height / rect.height;
        textureCoord += displayGameObject.uvRect.min;

        // Cast a ray from the rendering camera through the touched viewport point.
        Ray ray = displayRenderCamera.ViewportPointToRay(new Vector3(textureCoord.x, textureCoord.y, 0));

        // Debug: show a red cube at the touch location
        Vector3 point = ray.GetPoint(2.0f);
        GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        cube.transform.position = point;
        cube.transform.localScale = new Vector3(0.1f, 0.1f, 0.1f);
        cube.GetComponent<Renderer>().material.color = Color.red;
        Destroy(cube, 0.1f);

        if (Physics.Raycast(ray, out var hit, 10.0f))
        {
            if (hit.transform.TryGetComponent<CubeManager>(out var cubeManager))
            {
                cubeManager.StartParticleSystem();
            }
        }
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("rightHand"))
        {
            // Remember where the right hand touched the panel's plane.
            var plane = new Plane(transform.forward, transform.position);
            colliderPoint = plane.ClosestPointOnPlane(other.bounds.center);
        }
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("rightHand"))
        {
            colliderPoint = null;
        }
    }
}
I've revisited the RenderTexture settings to ensure the virtual space is rendered correctly.
How should I modify the code to accurately cast rays based on the touch position?
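Not a definitive fix, but one way to sanity-check the conversion is to factor the world-point-to-viewport mapping into a helper built on Rect.PointToNormalized, which avoids the manual rect.min / uvRect arithmetic. The sketch below is a hypothetical helper, not part of the original script; it assumes the hand's contact point really lies on the RawImage's own plane, so if the trigger collider is offset in front of the panel, the resulting UVs will drift and that offset needs to be projected out first.

// Hypothetical helper: converts a world-space point on the RawImage into a
// 0..1 viewport coordinate for the camera that feeds the RenderTexture.
private Vector2 WorldPointToViewport(RawImage image, Vector3 worldPoint)
{
    RectTransform rt = image.rectTransform;

    // Into the image's local space (the Z component is dropped).
    Vector2 local = rt.InverseTransformPoint(worldPoint);

    // Map the local point inside the rect to normalized 0..1 coordinates;
    // this already accounts for the rect's pivot and size.
    Vector2 normalized = Rect.PointToNormalized(rt.rect, local);

    // Respect uvRect in case only a sub-region of the RenderTexture is shown
    // (with the default uvRect of (0, 0, 1, 1) this is a no-op).
    Rect uv = image.uvRect;
    return new Vector2(uv.x + normalized.x * uv.width,
                       uv.y + normalized.y * uv.height);
}

// Usage inside InteractWithRenderTexture():
//   Vector2 viewportPoint = WorldPointToViewport(displayGameObject, colliderPoint.Value);
//   Ray ray = displayRenderCamera.ViewportPointToRay(viewportPoint);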
For example, if I press the W and D keys together, the character moves northeast. However, if I release either key (so that I am holding only W or only D), the movement still continues northeast, when I want it to go east if only D is held and north if only W is held. If I release both keys and then press either W or D, it works perfectly.
The code for my main CharacterBody2D:
extends CharacterBody2D
class_name Player

var x_input: float = 0.0
var y_input: float = 0.0
var pause_lock: bool = false

@onready var sprite = $PlayerSprite
@onready var f_arm = $PlayerSprite/Torso/R_Arm
@onready var b_arm = $PlayerSprite/Torso/L_Arm
@onready var hand = $PlayerSprite/Torso/R_Arm/Hand
@onready var aim_pivot = $PlayerSprite/Torso/AimPivot
@onready var fsm = $FSM
@onready var leg_anim = $LegsAnimation

@export var MAX_SPEED : int = 50
@export var ACCELERATION : int = 1000
@export var FRICTION : float = 1.0
@export var base_hp : int = 0
@onready var hp: int = base_hp

var motion : Vector2 = Vector2.ZERO
var direction : Vector2 = Vector2.ZERO

func _player_input():
    x_input = Input.get_action_strength("move_right") - Input.get_action_strength("move_left")
    y_input = Input.get_action_strength("move_down") - Input.get_action_strength("move_up")
    direction = Vector2(x_input, y_input).normalized()

func _idle():
    motion.x = lerp(motion.x, 0.0, FRICTION)
    motion.y = lerp(motion.y, 0.0, FRICTION)

func _move(delta, direction):
    motion += direction * ACCELERATION * delta
    motion.x = clamp(motion.x, -MAX_SPEED, MAX_SPEED)
    motion.y = clamp(motion.y, -MAX_SPEED, MAX_SPEED)
    print(motion)
    velocity = motion
    move_and_slide()

func aim(pos: Vector2):
    _flip_player_sprite(pos.x < self.global_position.x)
    if (pos.x < self.global_position.x):
        f_arm.rotation = lerp_angle(f_arm.rotation, -(aim_pivot.global_position - pos).angle(), (0.10))
    else:
        f_arm.rotation = lerp_angle(f_arm.rotation, (pos - aim_pivot.global_position).angle(), (0.10))
    b_arm.look_at(hand.global_position)

func _flip_player_sprite(flip: bool):
    match flip:
        true:
            sprite.scale.x = -1
        false:
            sprite.scale.x = 1

func _animate_legs():
    if (direction == Vector2.ZERO):
        leg_anim.play("Idle")
    else:
        var is_forward: bool = (
            (sprite.scale.x == 1)
            or (sprite.scale == Vector2(-1,1) and x_input < 0)
        )
        match is_forward:
            true:
                leg_anim.play("Walk_Forward")
            false:
                leg_anim.play("Walk_Backward")
The code for the finite state machine attached as a child:
extends Node

enum STATES {IDLE, MOVE}
var state: int = 0

@onready var parent = get_parent()

func _physics_process(delta):
    run_state(delta)

func run_state(delta):
    parent._player_input()
    parent.aim(parent.get_global_mouse_position())
    match state:
        STATES.IDLE:
            parent._idle()
            if (parent.direction != Vector2.ZERO):
                _set_state(STATES.MOVE)
        STATES.MOVE:
            parent._animate_legs()
            parent._move(delta, parent.direction)
            if (parent.direction == Vector2.ZERO):
                _set_state(STATES.IDLE)

func _set_state(new_state: int):
    if (state == new_state):
        return
    state = new_state
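One likely cause, judging from the scripts above: _move() only accumulates and clamps motion, and friction is applied only in the IDLE state, so when one axis's input drops to zero that axis keeps its last clamped speed and the character keeps drifting diagonally until both keys are released. A minimal GDScript sketch (an assumption about the fix, not a drop-in replacement) of a _move() that steers each axis toward the current input every frame, using Godot 4's built-in move_toward():

func _move(delta, direction):
    # Steer toward the speed the current input asks for, per axis.
    # When direction.x is 0 (only W held), motion.x now decays toward 0
    # instead of keeping its previous value until the IDLE state runs.
    var target := direction * MAX_SPEED
    motion.x = move_toward(motion.x, target.x, ACCELERATION * delta)
    motion.y = move_toward(motion.y, target.y, ACCELERATION * delta)
    velocity = motion
    move_and_slide()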
Where will you be on May 13th at 10 AM PT? If you've got some spare time, you may want to check out what OpenAI is up to. The company has announced that it's prepping some updates on its projects via a livestream on its website. While we're unsure what the company will announce, we now know what won't be announced. The CEO of OpenAI, Sam Altman, has made a "de-announcement" of several potential ChatGPT features, so now we know what not to expect on Monday.
I am creating a 2D game in JavaScript using pure DOM; WebGL and OpenGL do not work on my computer or on my cell phone.
How can I create light and shadow effects using line of sight? For example, I want to create a JavaScript class that adds a circular element called "light" to the page, and I want this light to throw rays in all directions from its center, so that if a ray touches an object of the sprite class, that object casts a shadow.
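A minimal sketch of the ray-casting part, in plain JavaScript with no WebGL: step a fixed number of rays outward from the light's centre and stop each one at the first sprite rectangle it enters. The function names and the clip-path rendering suggestion are assumptions for illustration, not a fixed API.

// Cast "rayCount" rays from the centre of the light element and return the
// point where each ray stops (either a sprite edge or the light's max radius).
// Everything beyond a stop point is in shadow; everything before it is lit.
function pointInRect(x, y, r) {
  return x >= r.left && x <= r.right && y >= r.top && y <= r.bottom;
}

function castLight(lightEl, spriteEls, rayCount = 180, maxDist = 500, step = 3) {
  const box = lightEl.getBoundingClientRect();
  const cx = box.left + box.width / 2;
  const cy = box.top + box.height / 2;
  const rects = spriteEls.map(el => el.getBoundingClientRect());
  const points = [];
  for (let i = 0; i < rayCount; i++) {
    const angle = (i / rayCount) * Math.PI * 2;
    const dx = Math.cos(angle), dy = Math.sin(angle);
    let dist = step;
    // March along the ray until it enters a sprite or reaches the radius.
    while (dist < maxDist &&
           !rects.some(r => pointInRect(cx + dx * dist, cy + dy * dist, r))) {
      dist += step;
    }
    points.push({ x: cx + dx * dist, y: cy + dy * dist });
  }
  return points;
}

// One way to draw the result without WebGL: set the lit polygon as a CSS
// clip-path on a full-screen overlay element, e.g.
//   overlay.style.clipPath =
//     'polygon(' + points.map(p => p.x + 'px ' + p.y + 'px').join(',') + ')';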
When using the Horizontal or Vertical input keys, the movement is normal. But when trying to move diagonally, multiple keys are pressed and the speed just stacks up.
Yes, there are many answers to questions like this, but I'm having a hard time figuring out a solution. I am actually using an asset, and its code is kind of hard for me to understand. The script is on my drive, and the link is below:
https://drive.google.com/open?id=1Lt4DZBw7Jv2LNyYR-03aNUpV2a29F7dm
Please provide a solution. I am not very good at coding, and for several reasons I need to make this game as fast as I can, so I will learn coding properly later. (In the script, please search for Move, Movement, or related words to find the movement code. Thank you a lot.)
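Without seeing the asset's script, the usual cause is that the horizontal and vertical inputs are added together without limiting the combined length, so a diagonal is about 1.41 times as fast. A minimal, generic C# sketch of the idea (not the asset's actual code, so the class, field, and axis names are assumptions):

using UnityEngine;

// Generic example: clamp the combined input vector so that diagonal movement
// is no faster than movement along a single axis.
public class NormalizedMovement : MonoBehaviour
{
    public float speed = 5f;

    void Update()
    {
        Vector3 input = new Vector3(Input.GetAxisRaw("Horizontal"), 0f,
                                    Input.GetAxisRaw("Vertical"));

        // ClampMagnitude leaves analogue inputs below 1 intact but caps the
        // length at 1, so W+D no longer stacks up to ~1.41x speed.
        Vector3 move = Vector3.ClampMagnitude(input, 1f) * speed * Time.deltaTime;
        transform.Translate(move, Space.World);
    }
}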