OpenAI has developed a 99.9% accuracy tool to detect ChatGPT content, but you are safe for now
- OpenAI has developed a method to detect when someone uses ChatGPT to write essays or assignments.
- The method utilizes a watermarking system that is 99.9% effective at identifying AI-generated text.
- However, the tool has not yet been rolled out due to internal concerns and mixed reactions within the company.
When OpenAI launched ChatGPT towards the end of 2022, educators expressed concerns that students would use the platform to cheat on assignments and tests. To prevent this, numerous companies have rolled out AI detection tools, but they haven’t been the best at producing reliable results.
OpenAI has now revealed that it has developed a method to detect when someone uses ChatGPT to write (via The Washington Post). The technology is said to be 99.9% effective and essentially relies on a system capable of predicting what word or phrase (called a “token”) would come next in a sentence. The AI-detection tool slightly alters how tokens are chosen, which leaves a watermark. This watermark is undetectable to the human eye but can be spotted by the tool in question.
How much energy does ChatGPT consume? More than you think, but it’s not all bad news
Everything comes at a cost, and AI is no different. While ChatGPT and Gemini may be free to use, they require a staggering amount of computational power to operate. And if that wasn’t enough, Big Tech is currently engaged in an arms race to build bigger and better models like GPT-5. Critics argue that this growing demand for powerful — and energy-intensive — hardware will have a devastating impact on climate change. So just how much energy does AI like ChatGPT use and what does this electricity use mean from an environmental perspective? Let’s break it down.
ChatGPT energy consumption: How much electricity does AI need?
OpenAI could watermark the text ChatGPT generates, but hasn't
OpenAI has developed a system for "watermarking" the output that ChatGPT generates, reports The Wall Street Journal, but has chosen not to deploy it. Google has deployed such a system with Gemini.
OpenAI has a method to reliably detect when someone uses ChatGPT to write an essay or research paper.
The post OpenAI could watermark the text ChatGPT generates, but hasn't appeared first on Boing Boing.
OpenAI has the tech to watermark ChatGPT text—it just won’t release it
According to The Wall Street Journal, there's internal conflict at OpenAI over whether or not to release a watermarking tool that would allow people to test text to see whether it was generated by ChatGPT or not.
To deploy the tool, OpenAI would make tweaks to ChatGPT that would lead it to leave a trail in the text it generates that can be detected by a special tool. The watermark would be undetectable by human readers without the tool, and the company's internal testing has shown that it does not negatively affect the quality of outputs. The detector would be accurate 99.9 percent of the time. It's important to note that the watermark would be a pattern in the text itself, meaning it would be preserved if the user copies and pastes the text or even if they make modest edits to it.
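OpenAI has not published how its watermark works, but published research on statistical text watermarks gives a sense of the idea: bias the model's token choices toward a pseudo-random "green" subset of the vocabulary, then have the detector check whether the green fraction is suspiciously high. The sketch below is purely illustrative (the function names and the half-vocabulary split are assumptions, not OpenAI's scheme):

```python
import hashlib
import random

def green_set(prev_token: str, vocab: list[str]) -> set[str]:
    # Seed a PRNG from the previous token so the generator and the
    # detector independently agree on the same "green" half of the
    # vocabulary at every position.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return set(shuffled[: len(shuffled) // 2])

def detect(tokens: list[str], vocab: list[str]) -> float:
    # Fraction of tokens that fall in the green set: ordinary human
    # text hovers near 0.5, while watermarked generations (which were
    # nudged toward green tokens) score well above it.
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:])
        if cur in green_set(prev, vocab)
    )
    return hits / max(1, len(tokens) - 1)
```

Because the signal is spread across many token choices rather than stored in any one character, it survives copy-paste and modest edits, which matches the behavior described above.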
Some OpenAI employees have campaigned for the tool's release, but others believe that would be the wrong move, citing a few specific problems.
OpenAI training models on Reddit data
It’s been announced that Reddit content will be used to train OpenAI’s ChatGPT models on current topics (and probably to more closely resemble human interactions).
Redditors agreed to it in the terms of service.
When Your Content is created with or submitted to the Services, you grant us a worldwide, royalty-free, perpetual, irrevocable, non-exclusive, transferable, and sublicensable license to use, copy, modify, adapt, prepare derivative works of, distribute, store, perform, and display Your Content and any name, username, voice, or likeness provided in connection with Your Content in all media formats and channels now known or later developed anywhere in the world. This license includes the right for us to make Your Content available for syndication, broadcast, distribution, or publication by other companies, organizations, or individuals who partner with Reddit. You also agree that we may remove metadata associated with Your Content, and you irrevocably waive any claims and assertions of moral rights or attribution with respect to Your Content.
In other words, if you’re not paying for the product, you are the product.
I suspect combining the voting with the commentary is going to help reveal which comments are useful and which aren’t, but I can’t help thinking that ChatGPT is going to start making some pretty snarky responses on current events if it’s trained on the groups I’ve looked at.
I suspect that were I a regular contributor to Reddit, I’d be annoyed that a chatbot is being trained to comment like me; I thought I was only being used for advertising purposes, not for training Skynet to replace me.
It appears the main focus is on more recent content rather than resurrecting deceased redditors as AI ghouls to comment on the state of the post-IPO reddit, but everything Reddit now feeds the machine. Your work for your friends is being sold as a commodity. Fun times.
[IPC]OpenAI training models on Reddit data by Paul E King first appeared on Pocketables.
Siri gets a free ChatGPT boost: Apple partners with OpenAI, but will dictate terms
- Apple has officially announced a partnership with OpenAI, the startup behind ChatGPT.
- Later this year, ChatGPT will occasionally chime in to answer creative and complex questions when you invoke Siri.
- Siri will ask for your consent before sharing individual prompts with ChatGPT.
After months of anticipation and leaks, Apple has finally announced that it’s teaming up with AI startup OpenAI. The partnership is set to bring ChatGPT-esque smarts to Siri on iPhone, iPad, and Mac. Notably, this ChatGPT integration was only one of several new AI features launched under the banner of Apple Intelligence at the company’s WWDC event today.
When iOS 18 launches later in 2024, you’ll be able to converse with Siri via natural language prompts similar to Google’s Gemini chatbot on Android. This marks a major leap forward for Siri, transforming it from a rigidly structured assistant into a conversational AI chatbot. However, Siri will not rely on OpenAI’s models for most of its responses. Instead, Apple says that it will only pass on select questions to ChatGPT.
Apple unveils “Apple Intelligence” AI features for iOS, iPadOS, and macOS
On Monday, Apple debuted "Apple Intelligence," a new suite of free AI-powered features for iOS 18, iPadOS 18, and macOS Sequoia that includes creating email summaries, generating images and emoji, and allowing Siri to take actions on your behalf. These features are achieved through a combination of on-device and cloud processing, with a strong emphasis on privacy. Apple says that Apple Intelligence features will be widely available later this year and will be available as a beta test for developers this summer.
The announcements came during a livestreamed WWDC keynote and a simultaneous event attended by the press on Apple's campus in Cupertino, California. In an introduction, Apple CEO Tim Cook said the company has been using machine learning for years, but the introduction of large language models (LLMs) presents new opportunities to elevate the capabilities of Apple products. He emphasized the need for both personalization and privacy in Apple's approach.
At last year's WWDC, Apple avoided using the term "AI" completely, instead preferring terms like "machine learning" as Apple's way of avoiding buzzy hype while integrating applications of AI into apps in useful ways. This year, Apple figured out a new way to largely avoid the abbreviation "AI" by coining "Apple Intelligence," a catchall branding term that refers to a broad group of machine learning, LLM, and image generation technologies. By our count, the term "AI" was used sparingly in the keynote—most notably near the end of the presentation when Apple executive Craig Federighi said, "It's AI for the rest of us."
iOS 18’s potential ChatGPT features reportedly worry Microsoft
- Apple’s own large language models (LLMs) reportedly aren’t capable enough to replicate ChatGPT, which has pushed it toward third-party partnerships.
- The company may have internally tested a ChatGPT-powered Siri, and iOS 18 users could get their hands on it later this year.
- Microsoft could be worried about the Apple-OpenAI deal, as it would have to accommodate the increasing server demand and compete against Apple’s features.
In under two weeks, Apple will finally reveal iOS 18 and its rumored AI additions at WWDC24. Given that the iPhone maker’s own AI efforts may still be lacking, it has reportedly resorted to third-party partnerships to power some of these smart features. As a result, OpenAI’s ChatGPT could be fueling Siri and other AI functionalities on iOS 18, and Microsoft is worried about it.
According to a report by The Information, Apple has internally tested a version of Siri that relies on ChatGPT’s smarts. The Cupertino firm is reportedly not ready to offer its own chatbot yet, pushing it to seek third-party alternatives for the time being. Meanwhile, it will likely use its own LLMs to power the less demanding iOS 18 features, such as on-device summarization.
1-bit LLMs Could Solve AI’s Energy Demands
Large language models, the AI systems that power chatbots like ChatGPT, are getting better and better—but they’re also getting bigger and bigger, demanding more energy and computational power. To make LLMs cheap, fast, and environmentally friendly, they’ll need to shrink, ideally small enough to run directly on devices like cellphones. Researchers are finding ways to do just that by drastically rounding off the many high-precision numbers that store their memories to equal just 1 or -1.
LLMs, like all neural networks, are trained by altering the strengths of connections between their artificial neurons. These strengths are stored as mathematical parameters. Researchers have long compressed networks by reducing the precision of these parameters—a process called quantization—so that instead of taking up 16 bits each, they might take up 8 or 4. Now researchers are pushing the envelope to a single bit.
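A minimal sketch of what 1-bit quantization does to a weight tensor: keep only the sign of each parameter, plus a single full-precision scale per tensor (the mean absolute value, which minimizes the L2 error of the approximation W ≈ α·sign(W)). This is a textbook binarization scheme, not any particular paper's exact recipe:

```python
import numpy as np

def binarize(w: np.ndarray) -> tuple[np.ndarray, float]:
    # 1-bit quantization: keep only each weight's sign, plus one
    # full-precision scale alpha (the mean absolute value), so the
    # tensor is approximated as alpha * sign(W).
    alpha = np.abs(w).mean()
    return np.sign(w), float(alpha)

w = np.array([0.4, -1.2, 0.05, -0.3])
signs, alpha = binarize(w)
# signs -> [ 1., -1.,  1., -1.], alpha -> 0.4875
```

Each original 16-bit weight now costs a single bit of storage (plus the shared scale), which is where the 16x memory savings comes from.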
How to Make a 1-bit LLM
There are two general approaches. One, called post-training quantization (PTQ), is to quantize the parameters of a full-precision network after it has been trained. The other, quantization-aware training (QAT), is to train a network from scratch to have low-precision parameters. So far, PTQ has been more popular with researchers.
In February, a team including Haotong Qin at ETH Zurich, Xianglong Liu at Beihang University, and Wei Huang at the University of Hong Kong introduced a PTQ method called BiLLM. It approximates most parameters in a network using 1 bit, but represents a few salient weights—those most influential to performance—using 2 bits. In one test, the team binarized a version of Meta’s LLaMa LLM that has 13 billion parameters.
To score performance, the researchers used a metric called perplexity, which is basically a measure of how surprised the trained model was by each ensuing piece of text. For one dataset, the original model had a perplexity of around 5, and the BiLLM version scored around 15, much better than the closest binarization competitor, which scored around 37 (for perplexity, lower numbers are better). What’s more, the BiLLM model required only about a tenth of the memory capacity of the original.
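Concretely, perplexity is the exponential of the average negative log probability the model assigned to each observed token. A model that is never "surprised" (probability 1 everywhere) scores 1.0; a model that effectively guesses among N equally likely tokens scores N:

```python
import math

def perplexity(token_probs: list[float]) -> float:
    # Perplexity = exp of the mean negative log-likelihood over the
    # observed tokens; lower means the model was less surprised.
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model assigning probability 0.5 to every token has perplexity 2,
# like a fair coin flip per token.
```

So a jump from 5 to 15, as with BiLLM, means the binarized model is noticeably more uncertain per token, but far less so than the competitor at 37.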
PTQ has several advantages over QAT, says Wanxiang Che, a computer scientist at Harbin Institute of Technology, in China. It doesn’t require collecting training data, it doesn’t require training a model from scratch, and the training process is more stable. QAT, on the other hand, has the potential to make models more accurate, since quantization is built into the model from the beginning.
1-bit LLMs Find Success Against Their Larger Cousins
Last year, a team led by Furu Wei and Shuming Ma, at Microsoft Research Asia, in Beijing, created BitNet, the first 1-bit QAT method for LLMs. After fiddling with the rate at which the network adjusts its parameters, in order to stabilize training, they created LLMs that performed better than those created using PTQ methods. They were still not as good as full-precision networks, but roughly 10 times as energy efficient.
In February, Wei’s team announced BitNet 1.58b, in which parameters can equal -1, 0, or 1, which means they take up roughly 1.58 bits of memory per parameter. A BitNet model with 3 billion parameters performed just as well on various language tasks as a full-precision LLaMA model with the same number of parameters and amount of training, but it was 2.71 times as fast, used 72 percent less GPU memory, and used 94 percent less GPU energy. Wei called this an “aha moment.” Further, the researchers found that as they trained larger models, efficiency advantages improved.
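The "1.58" figure comes from information theory: a weight with three possible states carries log2(3) ≈ 1.58 bits. The snippet below shows that arithmetic and a simple absmean-style ternarization; the rounding scheme here is illustrative, not BitNet's exact training recipe:

```python
import math
import numpy as np

# Three states {-1, 0, 1} per weight carry log2(3) ~= 1.58 bits of
# information, hence the "1.58b" in the model's name.
bits_per_weight = math.log2(3)

def ternarize(w: np.ndarray) -> np.ndarray:
    # Illustrative absmean scheme: normalize by the mean magnitude,
    # then round each weight into the ternary set {-1, 0, 1}.
    scale = np.abs(w).mean() + 1e-8
    return np.clip(np.round(w / scale), -1, 1)
```

The extra zero state matters for efficiency: a zero weight means its input can be skipped entirely, something pure 1-bit {-1, 1} weights cannot express.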
This year, a team led by Che, of Harbin Institute of Technology, released a preprint on another LLM binarization method, called OneBit. OneBit combines elements of both PTQ and QAT. It uses a full-precision pretrained LLM to generate data for training a quantized version. The team’s 13-billion-parameter model achieved a perplexity score of around 9 on one dataset, versus 5 for a LLaMA model with 13 billion parameters. Meanwhile, OneBit occupied only 10 percent as much memory. On customized chips, it could presumably run much faster.
Wei, of Microsoft, says quantized models have multiple advantages. They can fit on smaller chips, they require less data transfer between memory and processors, and they allow for faster processing. Current hardware can’t take full advantage of these models, though. LLMs often run on GPUs like those made by Nvidia, which represent weights using higher precision and spend most of their energy multiplying them. New hardware could natively represent each parameter as a -1 or 1 (or 0), and then simply add and subtract values and avoid multiplication. “One-bit LLMs open new doors for designing custom hardware and systems specifically optimized for 1-bit LLMs,” Wei says.
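The hardware point is easy to see in code: with ternary weights, a matrix-vector product reduces to additions and subtractions, with no multiplications at all. A toy version (plain Python for clarity, where real hardware would do this natively):

```python
def matvec_ternary(weights: list[list[int]], x: list[float]) -> list[float]:
    # With weights restricted to {-1, 0, 1}, each weight either adds
    # the input value, subtracts it, or skips it entirely -- no
    # multiplier circuits needed.
    out = []
    for row in weights:
        acc = 0.0
        for w, v in zip(row, x):
            if w == 1:
                acc += v
            elif w == -1:
                acc -= v
        out.append(acc)
    return out
```

Since multipliers dominate the area and energy budget of conventional GPU arithmetic units, hardware built around adders alone could be substantially cheaper and cooler, which is the opportunity Wei describes.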
“They should grow up together,” Huang, of the University of Hong Kong, says of 1-bit models and processors. “But it’s a long way to develop new hardware.”
Google’s AI Overview is flawed by design, and a new company blog post hints at why
On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature by authoring a follow-up blog post titled, "AI Overviews: About last week." In the post, attributed to Google VP Liz Reid, head of Google Search, the firm formally acknowledged issues with the feature and outlined steps taken to improve a system that appears flawed by design, even if the company doesn't realize it is admitting as much.
To recap, the AI Overview feature—which the company showed off at Google I/O a few weeks ago—aims to provide search users with summarized answers to questions by using an AI model integrated with Google's web ranking systems. Right now, it's an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.
While Google claims this approach is "highly effective" and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.
Report: Apple and OpenAI have signed a deal to partner on AI
Apple and OpenAI have successfully made a deal to include OpenAI's generative AI technology in Apple's software, according to The Information, which cites a source who has spoken to OpenAI CEO Sam Altman about the deal.
It was previously reported by Bloomberg that the deal was in the works. The news appeared in a longer article about Altman and his growing influence within the company.
"Now, [Altman] has fulfilled a longtime goal by striking a deal with Apple to use OpenAI’s conversational artificial intelligence in its products, which could be worth billions of dollars to the startup if it goes well," according to The Information's source.
Tech giants form AI group to counter Nvidia with new interconnect standard
On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett-Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia's proprietary NVLink interconnect technology, which links together multiple servers that power today's AI applications like ChatGPT.
The beating heart of AI these days lies in GPUs, which can perform massive numbers of matrix multiplications—necessary for running neural network architectures—in parallel. But one GPU often isn't enough for complex AI systems. NVLink can connect multiple AI accelerator chips within a server or across multiple servers. These interconnects enable faster data transfer and communication between the accelerators, allowing them to work together more efficiently on complex tasks like training large AI models.
This linkage is a key part of any modern AI data center system, and whoever controls the link standard can effectively dictate which hardware the tech companies will use. Along those lines, the UALink group seeks to establish an open standard that allows multiple companies to contribute and develop AI hardware advancements instead of being locked into Nvidia's proprietary ecosystem. This approach is similar to other open standards, such as Compute Express Link (CXL)—created by Intel in 2019—which provides high-speed, high-capacity connections between CPUs and devices or memory in data centers.
Done deal: ChatGPT will now learn from Reddit conversations
- Reddit and OpenAI have announced a new partnership.
- OpenAI will gain access to Reddit’s vast and diverse conversational data to train its language models.
- Reddit will get OpenAI as an advertising partner, along with new AI-powered features for its platform.
In what seems like a significant move for the future of artificial intelligence and the online community in general, Reddit and OpenAI have announced a new partnership aimed at enhancing user experiences on both platforms.
Generative AI models, such as OpenAI’s ChatGPT, rely heavily on real-world data and conversations to learn and refine their language generation capabilities. Reddit, with its millions of active users engaging in discussions on virtually every topic imaginable, is a treasure trove of authentic, up-to-date human interaction. This makes it an ideal resource for OpenAI to train its AI models, potentially leading to more nuanced, contextually aware, and relevant interactions with users.
What happened to OpenAI’s long-term AI risk team?
In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI’s chief scientist and one of the company’s co-founders, was named as the co-lead of this new team. OpenAI said the team would receive 20 percent of its computing power.
Now OpenAI’s “superalignment team” is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday’s news that Sutskever was leaving the company, and the resignation of the team’s other co-lead. The group’s work will be absorbed into OpenAI’s other research efforts.
OpenAI will use Reddit posts to train ChatGPT under new deal
Stuff posted on Reddit is getting incorporated into ChatGPT, Reddit and OpenAI announced on Thursday. The new partnership grants OpenAI access to Reddit’s Data API, giving the generative AI firm real-time access to Reddit posts.
Reddit content will be incorporated into ChatGPT "and new products," Reddit's blog post said. The social media firm claims the partnership will "enable OpenAI’s AI tools to better understand and showcase Reddit content, especially on recent topics." OpenAI will also start advertising on Reddit.
The deal is similar to one that Reddit struck with Google in February that allows the tech giant to make "new ways to display Reddit content" and provide "more efficient ways to train models," Reddit said at the time. Neither Reddit nor OpenAI disclosed the financial terms of their partnership, but Reddit's partnership with Google was reportedly worth $60 million.
OpenAI expected to announce major ChatGPT updates today: How to watch livestream
- OpenAI will announce updates to ChatGPT and GPT-4 at its “Spring Updates” event today.
- The livestream for the announcements is set to start at 10 AM PT (1 PM ET).
- The company is reportedly working on a ChatGPT-powered search engine and a new multimodal assistant.
As expected, OpenAI is all set to make some important ChatGPT and GPT-4 announcements later today. The company has scheduled a “Spring Updates” livestream on its own website for 10 AM PT (1 PM ET).
Apple and OpenAI closing in on deal for ChatGPT in iOS
- According to a trusted industry analyst, Apple and OpenAI could be finalizing a deal to bring ChatGPT features to iOS.
- It is unclear if Apple’s AI features based on its own LLM would debut on iOS alongside OpenAI features.
- Meanwhile, a separate negotiation with Google to bring Gemini features to iOS is still ongoing.
Over the past six months, Google has been pushing Gemini hard. It seems Gemini is now in everything Google does, including the Android operating system, the most popular mobile OS in the world. Meanwhile, Apple hasn’t done much at all with generative AI and large language models (LLMs). All signs point to that changing very soon — just not through Apple itself.
Over the past few months, we’ve learned that Apple has been in discussions with both Google and OpenAI (the maker of ChatGPT) about using their respective LLMs to power future features coming to iOS. Now, according to industry analyst Mark Gurman, Apple’s deal with OpenAI might be close to finalized.
OpenAI may have a ChatGPT surprise planned just before Google I/O 2024
- OpenAI is now expected to launch its ChatGPT-powered search engine on May 13.
- If the information is accurate, the new service might end up eclipsing Google’s big I/O 2024 announcements on May 14.
OpenAI has been brewing up a new Google Search competitor, as per reports. The ChatGPT-powered search engine was previously expected to launch on May 9, but it looks like the company now wants to one-up Google’s all-important announcements next week.
ChatGPT’s upcoming Context Connector could be a boon for Google Drive and OneDrive users
- OpenAI is working on a Context Connector feature for ChatGPT, with initial support for Google Drive and Microsoft OneDrive.
- This would make it easy for ChatGPT Plus users to feed files directly to ChatGPT from these online storage services without needing to download and re-upload them.
ChatGPT is an amazing tool once you learn how to use it properly. If you are a ChatGPT Plus subscriber, you can supercharge ChatGPT by uploading your files and asking the AI assistant questions based on the data in your file. For folks who have migrated most of their lives to online storage solutions, ChatGPT appears to be working on a Context Connector feature, which would connect with Google Drive and Microsoft OneDrive and make it easier for you to feed online files to the AI assistant.
X user legit_rumors has shared an early look at the upcoming Context Connector feature. The feature connects Google Drive, OneDrive Personal, and OneDrive Business to ChatGPT, making it super convenient to feed the AI assistant any file stored in these online storage services.
Ctrl-Alt-Speech: Between A Rock And A Hard Policy
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Stack Overflow bans users en masse for rebelling against OpenAI partnership (Tom’s Hardware)
- Tech firms must tame toxic algorithms to protect children online (Ofcom)
- Reddit Lays Out Content Policy While Seeking More Licensing Deals (Bloomberg)
- Extremist Militias Are Coordinating in More Than 100 Facebook Groups (Wired)
- Politicians Scapegoat Social Media While Ignoring Real Solutions (Techdirt)
- ‘Facebook Tries to Combat Russian Disinformation in Ukraine’ – FB Public Policy Manager (Kyiv Post)
- TikTok Sues U.S. Government Over Law Forcing Sale or Ban (New York Times)
- Swiss public broadcasters withdraw from X/Twitter (Swissinfo)
- Congressional Committee Threatens To Investigate Any Company Helping TikTok Defend Its Rights (Techdirt)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Apple and OpenAI closing in on deal for ChatGPT in iOS
- According to a trusted industry analyst, Apple and OpenAI could be finalizing a deal to bring ChatGPT features to iOS.
- It is unclear if Apple’s AI features based on its own LLM would debut on iOS alongside OpenAI features.
- Meanwhile, a separate negotiation with Google to bring Gemini features to iOS is still ongoing.
Over the past six months, Google has been pushing Gemini hard. It seems Gemini is now in everything Google does, including Android, the most popular mobile OS in the world. Meanwhile, Apple hasn't done much at all with generative AI and large language models (LLMs). All signs point to that changing very soon — just not through Apple itself.
Over the past few months, we’ve learned that Apple has been in discussions with both Google and OpenAI (which owns ChatGPT) about using their respective LLMs to power future features coming to iOS. Now, according to industry analyst Mark Gurman, Apple’s deal with OpenAI might be close to finalized.
OpenAI may have a ChatGPT surprise planned just before Google I/O 2024
- OpenAI is now expected to launch its ChatGPT-powered search engine on May 13.
- If the information is accurate, the new service might end up eclipsing Google’s big I/O 2024 announcements on May 14.
OpenAI has been brewing up a new Google Search competitor, as per reports. The ChatGPT-powered search engine was previously expected to launch on May 9, but it looks like the company now wants to one-up Google’s all-important announcements next week.
ChatGPT’s upcoming Context Connector could be a boon for Google Drive and OneDrive users
- OpenAI is working on a Context Connector feature for ChatGPT, with initial support for Google Drive and Microsoft OneDrive.
- This would make it easy for ChatGPT Plus users to feed files directly to ChatGPT from these online storage services without needing to download and reupload the file.
ChatGPT is an amazing tool once you learn how to use it properly. If you are a ChatGPT Plus subscriber, you can supercharge ChatGPT by uploading your files and asking the AI assistant questions based on the data in your file. For folks who have migrated most of their lives to online storage solutions, ChatGPT appears to be working on a Context Connector feature, which would connect with Google Drive and Microsoft OneDrive and make it easier for you to feed online files to the AI assistant.
X user legit_rumors has shared an early look at the upcoming Context Connector feature. This feature connects Google Drive, OneDrive Personal, and OneDrive Business to ChatGPT, making it super convenient to feed any file stored in these online storage services.
OpenAI's Sam Altman promises to show off something that "feels like magic" - but it isn't a search engine
Where will you be on May 13th at 10 am PT? If you've got some spare time, you may want to check out what OpenAI is up to. The company has announced that it's prepping some updates on its projects via a live stream on its website. While we're unsure what the company will announce, we now know what it won't be. The CEO of OpenAI, Sam Altman, has made a "de-announcement" of several potential ChatGPT features, so now we know what not to expect on Monday.
Apple's next AI push could come with the help of OpenAI
In some respects, Apple has fallen behind when it comes to AI. But the brand looks to be making a big push this year, with new M4 chips and the promise of big things that will hopefully be revealed in great detail at its upcoming WWDC event set to take place in June. While we were a bit in the dark about what to expect, we now have some intel that shows that Apple may be partnering with OpenAI to bring some much-needed muscle to its upcoming AI efforts.
ChatGPT’s alternative to Google Search might arrive on May 9
- Rumors suggest OpenAI may unveil a new search product on May 9.
- The domain name search.chatgpt.com was recently registered.
- This new search product might focus on providing AI-powered quick answers.
With the AI war between Google Gemini and OpenAI's ChatGPT escalating, rumors pointing to a May 9 unveiling of a new ChatGPT-powered web search offering have surfaced, setting the stage for a direct challenge to Google's search dominance.
A Reddit user spotted the creation of SSL certificates for the domain search.chatgpt.com. We also found a tweet by an AI podcast host reading, “Search (dot) ChatGPT (dot) com May 9th,” hinting at a potential release date. The subdomain mentioned currently displays a cryptic “Not found” message instead of throwing a 404 or domain error, further adding to the speculation.
OpenAI makes it easier for ChatGPT to remember and forget
- OpenAI is rolling out ChatGPT’s Memory function to ChatGPT Plus users, letting ChatGPT remember personal facts.
- ChatGPT users are also getting access to Temporary Chat for one-off conversations that won’t appear in chat history.
ChatGPT has been a boon for a lot of people, making their lives easier in ways that previous non-AI digital assistants just didn't. Once you figure out how to use ChatGPT effectively, it becomes an indispensable tool. OpenAI, the company behind ChatGPT, recently added a memory function to ChatGPT, and it is now rolling out to all ChatGPT Plus users. Free users are getting the Temporary Chat feature, which doesn't keep conversations in your chat history.
OpenAI says ChatGPT's Memory function is very easy to use. First, switch it on in settings, and then you can tell ChatGPT anything you want it to remember. ChatGPT's future responses will then take these facts into account, saving you from repeating yourself.
Why is ChatGPT so slow? Here’s how to speed up the chatbot’s responses
ChatGPT has become an indispensable creative tool for many of us. Cutting-edge as it is, however, ChatGPT still suffers from occasional slowdowns that can leave you waiting for seconds or even upwards of a minute between responses. Switching to the paid ChatGPT-4 model won't necessarily speed things up either. All of this raises two important questions: why is ChatGPT so slow, and what can we do to improve it?
Why is ChatGPT so slow? A technical overview
ChatGPT is an example of generative AI, or artificial intelligence that can generate new content. It’s a process that requires significant amounts of computational power. Each time you send the chatbot a message, it needs to decode it and generate a new response. Internally, ChatGPT’s underlying language model processes text as tokens instead of words. You can think of a ChatGPT token as the most fundamental unit of the chatbot’s message.
Language models like the one powering ChatGPT have been trained on hundreds of gigabytes of text, meaning they've learned the patterns of human language and dialogue. Given large volumes of text, a model also learns context and how words and sentences relate to each other. Using this training, we can then ask the model to generate entirely new text that it has never seen before.
ChatGPT's underlying tech requires large amounts of (limited) computing power.
So each time you ask ChatGPT something, the underlying model leans on its training to predict the next word or token in its response. The model can sequence these probability calculations to form entire sentences and paragraphs of text.
Each token prediction requires some amount of computational power, just like how our brains can sometimes pause to make decisions. The only difference is that AI models running on powerful servers can typically predict hundreds of tokens every second.
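The one-token-at-a-time prediction loop described above can be sketched with a toy model. This is purely illustrative: real LLMs use neural networks over tens of thousands of tokens, while the tiny probability table below is an invented stand-in, not anything OpenAI actually ships.

```python
import random

# Toy "language model": maps a token to its possible next tokens with
# probabilities. Purely illustrative, not a real LLM's vocabulary.
BIGRAMS = {
    "the": [("cat", 0.5), ("dog", 0.5)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 1.0)],
    "sat": [("<end>", 1.0)],
    "ran": [("<end>", 1.0)],
}

def predict_next(token):
    """Sample the next token from the model's probability distribution."""
    tokens, weights = zip(*BIGRAMS[token])
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(start, max_tokens=10):
    """Chain single-token predictions into a full sequence, as the article describes."""
    out = [start]
    for _ in range(max_tokens):
        nxt = predict_next(out[-1])
        if nxt == "<end>":
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

Each call to `predict_next` mirrors one "token prediction" step; a production model performs this loop hundreds of times per second, which is exactly where the computational cost comes from.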
As for what makes ChatGPT so slow, it’s simply a matter of excessive demand for the limited amount of computational power available. The language model you pick has an impact too — more advanced models are generally slower. While OpenAI hasn’t released exact numbers, it’s safe to assume that GPT-4 Turbo will not respond as quickly as the standard GPT-3.5 model.
Why is ChatGPT-4 so slow?
ChatGPT-4 is slow because it uses the slower GPT-4 Turbo model, which prioritizes response accuracy and quality over speed. The regular ChatGPT or GPT-3.5 model has received incremental updates for well over a year at this point, making it much faster but less accurate.
How to improve ChatGPT’s response speed
With the explanation of how ChatGPT’s responses work out of the way, you might be wondering if there’s a way to speed up its responses. Luckily, there are a few things you can try to improve the situation.
1. Try a different browser, connection, and device
Before placing the blame squarely on ChatGPT, we should try to rule out any potential causes of the slowdown on our end. A misconfigured browser or internet connection could just as easily be the reason behind slow ChatGPT responses, so it's worth starting there.
Before proceeding, we’d recommend clearing your browser’s saved cookies and cache files. To do this, head into your browser’s settings page and look for a reset or clear browsing data button. In Chrome, for example, it’s under Settings > Reset > Restore settings to their original defaults > Reset settings. After resetting your browser, you’ll have to log into your ChatGPT account again.
2. Is ChatGPT slow today? Check OpenAI’s status page
OpenAI maintains an official status page that can tell you if ChatGPT is not working or experiencing slowdowns at the moment. It’s the quickest way to know whether the service is affected by a major outage or has difficulty keeping up with increased user demand.
If the status page indicates an issue with ChatGPT, there’s unfortunately nothing you can do to fix it. Most issues are resolved within a few hours, so you may not have to wait very long.
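You can also check the status programmatically. OpenAI's status page appears to follow the common Statuspage format, which typically exposes a JSON summary at a `/api/v2/status.json` endpoint; the exact URL below is an assumption worth verifying against status.openai.com before relying on it.

```python
import json
from urllib.request import urlopen

# Assumed Statuspage-style endpoint; verify against status.openai.com.
STATUS_URL = "https://status.openai.com/api/v2/status.json"

def parse_status(payload):
    """Extract the overall indicator from a Statuspage-style payload.

    Typical indicators: "none" (all good), "minor", "major", "critical".
    """
    status = payload.get("status", {})
    return status.get("indicator", "unknown"), status.get("description", "")

def fetch_status(url=STATUS_URL, timeout=5):
    """Fetch and parse the live status page (requires network access)."""
    with urlopen(url, timeout=timeout) as resp:
        return parse_status(json.load(resp))

if __name__ == "__main__":
    indicator, description = fetch_status()
    print(f"ChatGPT status: {indicator} ({description})")
```

Anything other than a "none" indicator suggests the slowdown is on OpenAI's side, and waiting it out is your only option.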
3. Check for an active VPN connection
While a VPN on its own doesn’t necessarily translate to slower ChatGPT responses, corporate or public ones may affect the chatbot in subtle ways. Corporate VPNs, in particular, may block or interfere with the connection to ChatGPT’s servers.
Likewise, if you use a popular VPN service and connect to a busy server, ChatGPT’s servers may detect a flood of requests from a single IP address. This could trigger anti-spam measures or rate limiting, which throttles the speed at which you can send and receive messages.
The best course of action would be to try using the chatbot without a VPN connection if you currently have one enabled. We already know that ChatGPT saves your data at the account level, so there’s no privacy benefit to using a VPN here.
4. Upgrade to ChatGPT Plus
If none of the previous solutions worked for you, it may be because you tend to use ChatGPT when it’s at its busiest. And if you use the chatbot for free, you’ll be lower on the priority list compared to a paying customer. If response speed truly matters to you, a ChatGPT Plus subscription might be your last resort, although you should consider reading the next section first.
A ChatGPT Plus subscription will set you back $20 monthly but it does offer faster responses. You also get priority access to the chatbot during periods of heavy demand, which should help with any slowdowns you may face.
5. Use an alternative chatbot with faster response times
While ChatGPT was once the only AI language tool on the market, that’s no longer the case. Take Microsoft Copilot, for example. Even though it relies on the same GPT family of language models, it may have a speed advantage depending on ChatGPT’s server load. But in my experience, Google’s Gemini typically delivers faster responses than either GPT-based chatbot.
Likewise, a handful of other generative AI-based tools like Perplexity offer faster responses than ChatGPT. This is likely because conventional chatbots need to remember previous messages for context, which can increase the complexity of token predictions. Smaller language models will also deliver faster responses but at the expense of response quality.
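If you want to compare chatbots' response times yourself, a stopwatch around each request is enough. This sketch is backend-agnostic: the `demo` bot below is a placeholder, and in real use you would pass in whatever client function calls your chatbot of choice.

```python
import time

def time_response(chat_fn, prompt):
    """Return a chatbot's reply plus the wall-clock seconds it took."""
    start = time.perf_counter()
    reply = chat_fn(prompt)
    return reply, time.perf_counter() - start

def compare(bots, prompt, runs=3):
    """Average latency per named bot over several runs, fastest first."""
    results = {}
    for name, fn in bots.items():
        total = 0.0
        for _ in range(runs):
            _, elapsed = time_response(fn, prompt)
            total += elapsed
        results[name] = total / runs
    return dict(sorted(results.items(), key=lambda kv: kv[1]))

# Placeholder "bot" that simulates a slow response; swap in real API calls.
def demo(prompt):
    time.sleep(0.01)
    return f"echo: {prompt}"

print(compare({"demo": demo}, "hello"))
```

Averaging over several runs matters because chatbot latency varies with server load, which is the very effect the article describes.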
Feeling bored? Here are 8 games you can play with ChatGPT
Large language models (LLMs) like ChatGPT and Gemini are transforming how we interact with technology. These incredibly sophisticated AI systems can understand and respond to your text inputs in a way that feels remarkably conversational. We often think of LLMs as productivity tools, but did you know they can also be fantastic sources of entertainment?
Whether you’re searching for a quick distraction or a longer, immersive experience, ChatGPT can provide hours of gaming fun. In this article, we list some of the best games you can play with ChatGPT. While a ChatGPT Plus subscription is needed for many of these games, we’ve included some engaging options you can try out with the free version as well. Remember, ChatGPT is still just a language model, so expect the occasional quirk or illogical move — these games are less about strict rules and more about embracing imagination and having fun.
The best games to play with ChatGPT
Retro Adventures
Requires ChatGPT Plus
My initial expectation of Retro Adventures was a nostalgic trip down memory lane, playing classic titles like Mario or Contra. Instead, I discovered something even better. Rather than providing a library of retro games, this GPT offers choose-your-own-adventure games set within fictional worlds of your choosing. I could name any fictional world — Harry Potter, Marvel, even Ted Lasso — and it would spin a choice-based narrative within that universe.
Retro Adventures seems best suited for those who love interactive storytelling and are drawn to the idea of exploring their favorite fictional universes through choice-driven narratives. The image generation was an exciting bonus, though inconsistent. It delivered a fitting image for my Marvel adventure but not for the others I tried. Still, I appreciated the limitless potential.
A major draw is the ability to explore any fictional world I could imagine. The GPT continually posed new scenarios and choices, making the experience more about ongoing participation than reaching a conclusion. While the ever-evolving storyline offers a sense of boundless possibility, the lack of defined endings or goals could be a drawback for some.
PokedexGPT
Requires ChatGPT Plus
PokedexGPT is a treasure trove of Pokémon knowledge, meticulously crafted for fans of the beloved franchise. You can use it to ask about any Pokémon, its evolutions, stats, or place in the lore, and PokedexGPT will deliver comprehensive answers.
Beyond pure information, PokedexGPT offers interactive elements. It can generate quizzes to test your Pokémon knowledge or indulge you in word-guessing games. The GPT can even simulate Pokémon battles, finally settling some of my childhood debates like “Could Squirtle take down Jigglypuff?”
Additionally, it taps into Dall-E‘s image-generation capabilities, visualizing battle scenarios or evolutionary chains. These images can then be downloaded as personalized souvenirs.
LLM Riddles
Requires ChatGPT Plus
LLM Riddles is a GPT-powered experience designed to stimulate the mind through a collection of wordplay-based riddles and logic puzzles. It caters to a broad audience, with challenges ranging from simple to mind-bendingly complex. The inclusion of logic-based problems sets LLM Riddles apart from traditional rhyming riddles. Should you find yourself stuck, the GPT offers a hint system to subtly nudge your thinking in the right direction.
For some reason, LLM Riddles cannot automatically recognize when a solution is correct. Players must inform the GPT of their success, which can get cumbersome. Otherwise, this is a really fun game to get your creative problem-solving juices flowing.
Book Quest Adventure
Requires ChatGPT Plus
Book Quest Adventure offers a truly unique experience, blending the worlds of books and gaming. If you’re a bookworm, an RPG enthusiast, or simply crave a fresh way to interact with your favorite stories, this GPT could be the perfect entertainment for you.
I was pleasantly surprised by the range of books I could choose from. Big titles like The Hunger Games were expected, but it even handled my request for Five Point Someone, a lesser-known Indian fiction pick. Book Quest Adventure then transforms your choice into a dynamic, text-based interactive experience. You’ll inhabit the story, making choices, shaping the plot, and exploring paths the original characters might never have taken.
I did notice that occasionally, I needed to nudge the narrative forward myself, and the GPT sometimes felt fixated on repetitive tasks. The lack of traditional goals or accomplishments might also be a drawback for some. Still, the promise of living out your favorite book with the freedom to change the story — that alone is a compelling draw for the right people.
Retail Rumble
Free to play
Adam Tal's GitHub offers a treasure trove of DIY ChatGPT games, ranging from time-travel adventures based on the "Butterfly Effect" to escape rooms and Shark Tank simulators. My favorite among them is Retail Rumble. The game works by feeding a detailed prompt into ChatGPT, which sets up the rules and conditions of the game.
The premise is simple — you’re a retail employee facing an “Unreasonable Customer” determined to bend the rules. The gameplay unfolds in a turn-based format reminiscent of Pokémon battles, complete with stamina bars and over-the-top narration. The Unreasonable Customer unleashes unreasonable demands while you counter with a range of tactics from de-escalation to calculated embarrassment. The goal is to outlast your opponent, either forcing a humiliating meltdown or succumbing to their demands.
What makes Retail Rumble enjoyable is its unpredictability and surprising insight. The conversations feel authentically absurd, placing you in the shoes of a retail worker and sparking reflection on how seemingly harmless requests might stir up a storm. As a prompt-based game, it’s also accessible to anyone, and no ChatGPT Plus subscription is required.
DnD GPT
Requires ChatGPT Plus
Dungeons & Dragons (D&D) is a worldwide phenomenon, offering a world of boundless imagination and collaborative storytelling. However, the joy of a great D&D game often hinges on finding a skilled Dungeon Master (the game’s host) or gathering a full group of players. DnD GPT aims to bridge that gap, providing a powerful companion to enhance your tabletop adventures.
DnD GPT offers flexibility in how you choose to play. You can play a single-character game or full-fledged campaigns with multiple characters. Plus, it includes a built-in dice-rolling function, replicating that essential element of D&D gameplay. While I don’t have firsthand D&D experience, DnD GPT appears to be a valuable tool for both newcomers and seasoned veterans.
Apart from acting as the Dungeon Master and guiding you through a whole D&D game, the GPT can also offer creative suggestions and help you navigate D&D rules. It can generate plot twists and character development ideas for your games to keep things fresh. However, the addition of visual elements like maps and character art could have made this game even more immersive.
Evil.AI
Requires ChatGPT Plus
A list of AI games can’t possibly be complete without a game about an AI taking over the world. That’s exactly what Evil.AI is. In this GPT-powered experience, you’ll face the daunting task of halting a rogue artificial intelligence bent on global domination.
Evil.AI unfolds as a strategic turn-based battle. Your role is to launch attacks against the AI warlord using creativity and tactical cunning. The power-hungry AI will counter with its own maneuvers in an attempt to outsmart you. A neutral judge (conveniently, another aspect of the AI itself) will then assess your attacks, providing critical feedback to refine your strategies. This constant cycle of attack, defense, and analysis demands adaptability and innovative problem-solving.
One of Evil.AI’s strengths is that, unlike some other role-playing GPTs, it offers a clear path to victory — a definitive way to “win.” Plus, with three difficulty levels, the challenge continues even after your first successful campaign. The AI’s distinct personality, infused with wit and calculated comebacks, adds a unique flavor to the experience.
Prompt-based games
Free to play
ChatGPT offers a surprising variety of free, text-based games. If you enjoy a mental challenge, try classics like 20 Questions, Trivia Quiz, or word games like Hangman. For those seeking a more collaborative experience, you can build a story with ChatGPT, create poetry, or write your own twists on an existing fictional universe.
Adventure lovers can embark on role-playing adventures where ChatGPT plays the role of an investor you have to convince or a potential partner you have to impress. If nothing else, you can always challenge ChatGPT to a game of Tic-Tac-Toe. Getting started with these games is easy: simply ask ChatGPT to play your chosen game, and it will provide instructions and guide you through the experience.
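The "simply ask ChatGPT" step can also be scripted. Below is a minimal sketch: the message-building part is plain Python, while the commented-out call assumes the official `openai` library and an `OPENAI_API_KEY` environment variable, so treat it as a starting point rather than a tested integration. The rule texts are invented examples.

```python
# Hypothetical opening prompts for two of the games mentioned above.
GAME_RULES = {
    "20 questions": (
        "Let's play 20 Questions. Think of an object and answer my "
        "yes/no questions. Keep track of how many questions I've used."
    ),
    "hangman": (
        "Let's play Hangman. Pick a secret word, show me the blanks, "
        "and track my wrong guesses."
    ),
}

def build_game_messages(game):
    """Build a chat-completion-style message list that starts a game."""
    rules = GAME_RULES[game.lower()]
    return [
        {"role": "system", "content": "You are a patient game host."},
        {"role": "user", "content": rules},
    ]

messages = build_game_messages("Hangman")
print(messages[1]["content"])

# To actually play (untested sketch, requires `pip install openai`):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(model="gpt-4o", messages=messages)
# print(resp.choices[0].message.content)
```

The same message list works in the ChatGPT web UI too: paste the user prompt as-is and the game begins.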
Nothing bets big on AI with ChatGPT integration in Nothing OS and its earbuds
- Nothing has announced deeper ChatGPT integration within Nothing OS 2.5.5 and its earbuds.
- The Nothing Ear (2024) and Nothing Ear A can set a pinch shortcut to start a voice conversation with ChatGPT.
- Nothing OS 2.5.5 comes with three new ChatGPT widgets and new features such as Clipboard to ChatGPT and Screenshot to ChatGPT.
ChatGPT has spurred mainstream adoption of AI. Many people now prefer Google Gemini or ChatGPT for help with daily tasks, and every business is scrambling to integrate AI into its products and services. At the launch of the new Nothing Ear and Nothing Ear A earbuds, the company also announced that ChatGPT integration is coming to the earbuds and Nothing OS.
On the new Nothing Ear (2024) and Nothing Ear A earbuds, you can set a pinch shortcut to start a voice conversation with ChatGPT. Once set, you can pinch the earbud stem to begin talking to ChatGPT and tap once to stop the conversation. ChatGPT will then work its magic and get back to you with a response. You will need to be on Nothing OS 2.5.5 and set the shortcut through the Nothing X app for the ChatGPT integration to work.
The company says that the ChatGPT integration will also be rolled out to all of its Nothing and CMF audio products in June 2024.
If you don’t have the earbuds, you can still enjoy the deeper integration with Nothing OS. With Nothing OS 2.5.5, you can now add ChatGPT widgets for Text, Voice, and Vision when you have ChatGPT installed on your phone. This is beyond the regular 4×2 widget that the ChatGPT app offers to all Android users.
Further, Nothing OS 2.5.5 adds a Clipboard to ChatGPT shortcut when selecting text, letting you paste the text directly into a new conversation on ChatGPT.
When you take a screenshot, you will also see a new Screenshot to ChatGPT shortcut that allows you to paste the screenshot directly into a new conversation on ChatGPT.
Nothing OS 2.5.5 is rolling out to the Nothing Phone 2 today. It will also roll out to the Nothing Phone 1 and the Nothing Phone 2a later this month.
Do you like this ChatGPT integration within Nothing OS? Let us know in the comments below!
ChatGPT can now be used without an account, but one condition still applies
Using ChatGPT previously required registering an account, which could feel like a tedious process; more than once we've seen that you might have to sign up again even when using the same browser.
Fortunately, OpenAI is bringing some welcome changes to how we use the chatbot, with the company stating that users no longer need to register an account to use the service. However, not everything OpenAI announced is account-free, as you'll soon find out.
The announcement was made through multiple channels, starting with an official blog post on OpenAI's website, followed shortly by a post on X revealing that ChatGPT users no longer need to sign up to use the service. In its blog post, the company said the following, noting that ChatGPT has boomed, with more than 100 million people using it weekly across a number of countries.
"It's core to our mission to make tools like ChatGPT broadly available so that people can experience the benefits of AI. More than 100 million people across 185 countries use ChatGPT weekly to learn something new, find creative inspiration, and get answers to their questions. Starting today, you can use ChatGPT instantly, without needing to sign up. We're rolling this out gradually, with the aim of making AI accessible to anyone curious about its capabilities."
We’re rolling out the ability to start using ChatGPT instantly, without needing to sign-up, so it’s even easier to experience the potential of AI. https://t.co/juhjKfQaoD pic.twitter.com/TIVoX8KFDB
— OpenAI (@OpenAI) April 1, 2024
Unfortunately, this easy access is limited to the chatbot itself, as OpenAI's other products cannot be used without an account. These include DALL-E 3, which also requires a subscription. In addition, other OpenAI services, such as the newly announced AI voice-cloning service Voice Engine and the video-generation platform Sora, remain available only to a limited number of users. In fact, even those who regularly use an OpenAI account gain no advantage in accessing these other services until a proper rollout begins.
The post "ChatGPT can now be used without an account, but there is still one condition" appeared first on MOBILE PRESS.
AI Prompt Engineering Is Dead
Since ChatGPT dropped in the fall of 2022, everyone and their donkey has tried their hand at prompt engineering—finding a clever way to phrase your query to a large language model (LLM) or AI art or video generator to get the best results or sidestep protections. The Internet is replete with prompt-engineering guides, cheat sheets, and advice threads to help you get the most out of an LLM.
In the commercial sector, companies are now wrangling LLMs to build product copilots, automate tedious work, create personal assistants, and more, says Austin Henley, a former Microsoft employee who conducted a series of interviews with people developing LLM-powered copilots. “Every business is trying to use it for virtually every use case that they can imagine,” Henley says.
“The only real trend may be no trend. What’s best for any given model, dataset, and prompting strategy is likely to be specific to the particular combination at hand.” —Rick Battle & Teja Gollapudi, VMware
To do so, they’ve enlisted the help of prompt engineers professionally.
However, new research suggests that prompt engineering is best done by the model itself, and not by a human engineer. This has cast doubt on prompt engineering’s future—and increased suspicions that a fair portion of prompt-engineering jobs may be a passing fad, at least as the field is currently imagined.
Autotuned prompts are successful and strange
Rick Battle and Teja Gollapudi at California-based cloud computing company VMware were perplexed by how finicky and unpredictable LLM performance was in response to weird prompting techniques. For example, people have found that asking models to explain their reasoning step by step—a technique called chain-of-thought—improved their performance on a range of math and logic questions. Even weirder, Battle found that giving a model positive prompts, such as “this will be fun” or “you are as smart as chatGPT,” sometimes improved performance.
Battle and Gollapudi decided to systematically test how different prompt-engineering strategies impact an LLM’s ability to solve grade-school math questions. They tested three different open-source language models with 60 different prompt combinations each. What they found was a surprising lack of consistency. Even chain-of-thought prompting sometimes helped and other times hurt performance. “The only real trend may be no trend,” they write. “What’s best for any given model, dataset, and prompting strategy is likely to be specific to the particular combination at hand.”
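Concretely, these prompting techniques are just string templates wrapped around the question. The sketch below is our own illustration, not the authors' test harness; sending the resulting strings to an actual LLM API is left out.

```python
def plain_prompt(question: str) -> str:
    """Baseline: ask the question directly."""
    return question

def chain_of_thought_prompt(question: str) -> str:
    """Chain-of-thought: ask the model to reason step by step first."""
    return f"{question}\nLet's think step by step, then give the final answer."

def positive_prompt(question: str) -> str:
    """The kind of 'positive' framing Battle found sometimes helps."""
    return f"This will be fun! You are as smart as ChatGPT.\n{question}"

q = "If a train travels 60 km in 1.5 hours, what is its average speed?"
print(chain_of_thought_prompt(q))
```

The study's point is that none of these templates wins consistently: which one helps depends on the specific model, dataset, and task.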
According to one research team, no human should manually optimize prompts ever again.
There is an alternative to the trial-and-error-style prompt engineering that yielded such inconsistent results: ask the language model to devise its own optimal prompt. Recently, new tools have been developed to automate this process. Given a few examples and a quantitative success metric, these tools will iteratively find the optimal phrase to feed into the LLM. Battle and his collaborators found that in almost every case, this automatically generated prompt did better than the best prompt found through trial and error. And the process was much faster: a couple of hours rather than several days of searching.
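The loop these tools run can be sketched in miniature: generate candidate prompts, score each with a quantitative metric, and keep the best. Everything below is a toy illustration; `evaluate` stands in for a real metric such as accuracy on labeled math questions, and real optimizers have an LLM propose candidates rather than recombining fixed fragments.

```python
import itertools

# Hypothetical prompt fragments an optimizer might recombine. In the real
# systems described above, an LLM proposes the candidate prompts instead.
FRAGMENTS = [
    "Think step by step.",
    "Show your work.",
    "You are a careful mathematician.",
]

def evaluate(prompt: str) -> float:
    """Stand-in for a quantitative success metric. In practice this would
    be, e.g., the LLM's accuracy on labeled grade-school math questions."""
    # Toy scoring: reward prompts that ask for explicit reasoning.
    return prompt.count("step") + 0.5 * prompt.count("work")

def best_prompt(task: str) -> str:
    """Score every combination of fragments appended to the task and
    return the highest-scoring candidate."""
    candidates = [
        " ".join((task,) + combo)
        for r in range(len(FRAGMENTS) + 1)
        for combo in itertools.combinations(FRAGMENTS, r)
    ]
    return max(candidates, key=evaluate)

print(best_prompt("Solve: 17 * 24 = ?"))
```

With a real metric in place of `evaluate`, the same skeleton explains why the search surfaces prompts no human would write: it only cares about the score, not about whether the winning phrase reads naturally.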
The optimal prompts the algorithm spit out were so bizarre, no human is likely to have ever come up with them. “I literally could not believe some of the stuff that it generated,” Battle says. In one instance, the prompt was just an extended Star Trek reference: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation.” Apparently, thinking it was Captain Kirk helped this particular LLM do better on grade-school math questions.
Battle says that optimizing the prompts algorithmically fundamentally makes sense given what language models really are—models. “A lot of people anthropomorphize these things because they ‘speak English.’ No, they don’t,” Battle says. “It doesn’t speak English. It does a lot of math.”
In fact, in light of his team’s results, Battle says no human should manually optimize prompts ever again.
“You’re just sitting there trying to figure out what special magic combination of words will give you the best possible performance for your task,” Battle says, “But that’s where hopefully this research will come in and say ‘don’t bother.’ Just develop a scoring metric so that the system itself can tell whether one prompt is better than another, and then just let the model optimize itself.”
Autotuned prompts make pictures prettier, too
Image-generation algorithms can benefit from automatically generated prompts as well. Recently, a team at Intel Labs, led by Vasudev Lal, set out on a similar quest to optimize prompts for the image-generation model Stable Diffusion. “It seems more like a bug of LLMs and diffusion models, not a feature, that you have to do this expert prompt engineering,” Lal says. “So, we wanted to see if we can automate this kind of prompt engineering.”
“Now we have this full machinery, the full loop that’s completed with this reinforcement learning.… This is why we are able to outperform human prompt engineering.” —Vasudev Lal, Intel Labs
Lal’s team created a tool called NeuroPrompts that takes a simple input prompt, such as “boy on a horse,” and automatically enhances it to produce a better picture. To do this, they started with a range of prompts generated by human prompt-engineering experts. They then trained a language model to transform simple prompts into these expert-level prompts. On top of that, they used reinforcement learning to optimize these prompts to create more aesthetically pleasing images, as rated by yet another machine-learning model, PickScore, a recently developed image-evaluation tool.
NeuroPrompts is a generative AI auto prompt tuner that transforms simple prompts into more detailed and visually stunning Stable Diffusion results, as in this case: an image generated by a generic prompt [left] versus its equivalent NeuroPrompts-generated image. [Image: Intel Labs/Stable Diffusion]
Here too, the automatically generated prompts did better than the expert-human prompts they used as a starting point, at least according to the PickScore metric. Lal found this unsurprising. “Humans will only do it with trial and error,” Lal says. “But now we have this full machinery, the full loop that’s completed with this reinforcement learning.… This is why we are able to outperform human prompt engineering.”
Since aesthetic quality is infamously subjective, Lal and his team wanted to give the user some control over how the prompt was optimized. In their tool, the user can specify the original prompt (say, “boy on a horse”) as well as an artist to emulate, a style, a format, and other modifiers.
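The controllable-modifier part of this design is straightforward to sketch. The function below is a simplified illustration, not Intel's implementation: the real NeuroPrompts also runs a trained language model and reinforcement learning against PickScore, and the trailing "booster" keywords here are our own assumption about typical aesthetic modifiers.

```python
def enhance_prompt(base: str, artist: str = "", style: str = "", fmt: str = "") -> str:
    """Build an enriched image prompt from a simple one plus user-chosen
    modifiers, mimicking the user controls described above."""
    parts = [base]
    if style:
        parts.append(f"{style} style")
    if artist:
        parts.append(f"in the style of {artist}")
    if fmt:
        parts.append(fmt)
    # Common aesthetic boosters, chosen here purely for illustration.
    parts.append("highly detailed, sharp focus")
    return ", ".join(parts)

print(enhance_prompt("boy on a horse", artist="Claude Monet",
                     style="impressionist", fmt="oil painting"))
# boy on a horse, impressionist style, in the style of Claude Monet, oil painting, highly detailed, sharp focus
```

In the full system, a string like this would be the starting point that the learned optimizer then rewrites to maximize the PickScore aesthetic rating.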
Lal believes that as generative AI models evolve, be it image generators or large language models, the weird quirks of prompt dependence should go away. “I think it’s important that these kinds of optimizations are investigated and then ultimately, they’re really incorporated into the base model itself so that you don’t really need a complicated prompt-engineering step.”
Prompt engineering will live on, by some name
Even if autotuning prompts becomes the industry norm, prompt-engineering jobs in some form are not going away, says Tim Cramer, senior vice president of software engineering at Red Hat. Adapting generative AI for industry needs is a complicated, multistage endeavor that will continue requiring humans in the loop for the foreseeable future.
“Maybe we’re calling them prompt engineers today. But I think the nature of that interaction will just keep on changing as AI models also keep changing.” —Vasudev Lal, Intel Labs
“I think there are going to be prompt engineers for quite some time, and data scientists,” Cramer says. “It’s not just asking questions of the LLM and making sure that the answer looks good. But there’s a raft of things that prompt engineers really need to be able to do.”
“It’s very easy to make a prototype,” Henley says. “It’s very hard to production-ize it.” Prompt engineering seems like a big piece of the puzzle when you’re building a prototype, Henley says, but many other considerations come into play when you’re making a commercial-grade product.
Challenges of making a commercial product include ensuring reliability—for example, failing gracefully when the model goes offline; adapting the model’s output to the appropriate format, since many use cases require outputs other than text; testing to make sure the AI assistant won’t do something harmful in even a small number of cases; and ensuring safety, privacy, and compliance. Testing and compliance are particularly difficult, Henley says, as traditional software-development testing strategies are maladapted for nondeterministic LLMs.
To fulfill these myriad tasks, many large companies are heralding a new job title: Large Language Model Operations, or LLMOps, which includes prompt engineering in its life cycle but also entails all the other tasks needed to deploy the product. Henley says LLMOps’ predecessors, machine learning operations (MLOps) engineers, are best positioned to take on these jobs.
Whether the job titles will be “prompt engineer,” “LLMOps engineer,” or something new entirely, the nature of the job will continue evolving quickly. “Maybe we’re calling them prompt engineers today,” Lal says, “But I think the nature of that interaction will just keep on changing as AI models also keep changing.”
“I don’t know if we’re going to combine it with another sort of job category or job role,” Cramer says, “But I don’t think that these things are going to be going away anytime soon. And the landscape is just too crazy right now. Everything’s changing so much. We’re not going to figure it all out in a few months.”
Henley says that, to some extent in this early phase of the field, the only overriding rule seems to be the absence of rules. “It’s kind of the Wild, Wild West for this right now,” he says.
Matrix multiplication breakthrough could lead to faster, more efficient AI models
Computer scientists have discovered a new way to multiply large matrices faster than ever before by eliminating a previously unknown inefficiency, reports Quanta Magazine. This could eventually accelerate AI models like ChatGPT, which rely heavily on matrix multiplication to function. The findings, presented in two recent papers, have led to what is reported to be the biggest improvement in matrix multiplication efficiency in over a decade.
Multiplying two rectangular number arrays, known as matrix multiplication, plays a crucial role in today's AI models, including speech and image recognition, chatbots from every major vendor, AI image generators, and video synthesis models like Sora. Beyond AI, matrix math is so important to modern computing (think image processing and data compression) that even slight gains in efficiency could lead to computational and power savings.
Graphics processing units (GPUs) excel in handling matrix multiplication tasks because of their ability to process many calculations at once. They break down large matrix problems into smaller segments and solve them concurrently using an algorithm.
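The decomposition into smaller segments can be sketched in plain Python: a blocked (tiled) multiply splits both matrices into small tiles whose partial products are independent, which is exactly what lets a GPU compute them concurrently. This is a minimal illustration of the standard technique, not the new algorithms from the papers.

```python
def matmul_blocked(A, B, block=2):
    """Multiply matrices A (n x m) and B (m x p) by iterating over
    block-sized tiles. Each tile's partial product is independent of the
    others, so a GPU can assign tiles to different compute units."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i0 in range(0, n, block):
        for k0 in range(0, m, block):
            for j0 in range(0, p, block):
                # Accumulate one tile's partial products into C.
                for i in range(i0, min(i0 + block, n)):
                    for k in range(k0, min(k0 + block, m)):
                        a = A[i][k]
                        for j in range(j0, min(j0 + block, p)):
                            C[i][j] += a * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_blocked(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```

The research covered here attacks a different lever: reducing the asymptotic number of multiplications needed, which compounds with this kind of hardware-level parallelism.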
AI-generated articles prompt Wikipedia to downgrade CNET’s reliability rating
Wikipedia has downgraded tech website CNET's reliability rating following extensive discussions among its editors regarding the impact of AI-generated content on the site's trustworthiness, as noted in a detailed report from Futurism. The decision reflects concerns over the reliability of articles found on the tech news outlet after it began publishing AI-generated stories in 2022.
Around November 2022, CNET began publishing articles written by an AI model under the byline "CNET Money Staff." In January 2023, Futurism brought widespread attention to the issue and discovered that the articles were full of plagiarism and mistakes. (Around that time, we covered plans to do similar automated publishing at BuzzFeed.) After the revelation, CNET management paused the experiment, but the reputational damage had already been done.
Wikipedia maintains a page called "Reliable sources/Perennial sources" that includes a chart featuring news publications and their reliability ratings as viewed from Wikipedia's perspective. Shortly after the CNET news broke in January 2023, Wikipedia editors began a discussion thread on the Reliable Sources project page about the publication.
AI Chatbots: How to Get the Answers You Need
In the era of instant gratification and readily available information, AI chatbots have become ubiquitous. They answer our questions, complete tasks, and even offer companionship. ...
The post AI Chatbots: How to Get the Answers You Need appeared first on Gizchina.com.
One Minute of Sora by OpenAI: Over an Hour of Generation Time
OpenAI, the renowned research organization behind GPT-3 and DALL-E 2, recently unveiled its latest innovation: Sora, a text-to-video model. It is capable of generating high-quality videos up to a ...
The post One Minute of Sora by OpenAI: Over an Hour of Generation Time appeared first on Gizchina.com.
Google goes “open AI” with Gemma, a free, open-weights chatbot family
On Wednesday, Google announced a new family of AI language models called Gemma, which are free, open-weights models built on technology similar to the more powerful but closed Gemini models. Unlike Gemini, Gemma models can run locally on a desktop or laptop computer. It's Google's first significant open large language model (LLM) release since OpenAI's ChatGPT started a frenzy for AI chatbots in 2022.
Gemma models come in two sizes: Gemma 2B (2 billion parameters) and Gemma 7B (7 billion parameters), each available in pre-trained and instruction-tuned variants. In AI, parameters are values in a neural network that determine AI model behavior, and weights are a subset of these parameters stored in a file.
Developed by Google DeepMind and other Google AI teams, Gemma pulls from techniques learned during the development of Gemini, which is the family name for Google's most capable (public-facing) commercial LLMs, including the ones that power its Gemini AI assistant. Google says the name comes from the Latin gemma, which means "precious stone."
ChatGPT goes temporarily “insane” with unexpected outputs, spooking users
On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI's AI assistant, flooding the r/ChatGPT Reddit sub with reports of the AI assistant "having a stroke," "going insane," "rambling," and "losing it." OpenAI has acknowledged the problem and is working on a fix, but the experience serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanlike output.
ChatGPT is not alive and does not have a mind to lose, but tugging on human metaphors (called "anthropomorphization") seems to be the easiest way for most people to describe the unexpected outputs they have been seeing from the AI model. They're forced to use those terms because OpenAI doesn't share exactly how ChatGPT works under the hood; the underlying large language models function like a black box.
"It gave me the exact same feeling—like watching someone slowly lose their mind either from psychosis or dementia," wrote a Reddit user named z3ldafitzgerald in response to a post about ChatGPT bugging out. "It’s the first time anything AI related sincerely gave me the creeps."