FreshRSS

5 ways talking to Gemini Live is much, much better than using Google Assistant

By: Rita El Khoury
20 August 2024, 21:18

Since Gemini Live became available to me on my Pixel 8 Pro late last week, I’ve found myself using it very often. Not because it’s the latest and hottest trend, no, but because almost everything I hated about talking to Google Assistant is no longer an issue with Gemini Live. The difference is staggering.

I have a lot to say about the topic, but for today, I want to focus on a few aspects that make talking to Gemini Live such a better experience compared to using Google Assistant or the regular Gemini.

1. Gemini Live understands me, the way I speak


Credit: Rita El Khoury / Android Authority

English is only my third language and even though I’ve been speaking it for decades, it’s still not the most natural language for me to use. Plus, I have the kind of brain that zips all over the place. So, every time I wanted to trigger Google Assistant, I had to think of the exact sentence or question before saying, “Hey Google.” For that reason, and that reason alone, talking to Assistant never felt natural to me. It’s always pre-meditated, and it always requires me to pause what I’m doing and give it my full attention.

Google Assistant wants me to speak like a robot to fit its mold. Gemini Live lets me speak however I want.

Gemini Live understands natural human speech. For me, it works around my own speech’s idiosyncrasies, so I can start speaking without thinking or preparing my full question beforehand. I can “uhm” and “ah” mid-sentence, repeat myself, turn around the main question, and figure things out as I speak, and Live will still understand all of that.

I can even ask multiple questions and be as vague or as precise as possible. There’s really no restriction around how to speak or what to say, no specific commands, no specific ways to phrase questions — just no constraints whatsoever. That completely changes the usability of AI chatbots for me.

2. This is what real, continuous conversations should be like


Credit: Rita El Khoury / Android Authority

Google Assistant added a setting for Continuous Conversations many years ago, but that never felt natural or all that continuous. I’d say “Hey Google,” ask it for something, wait for the full answer, wait an extra second for it to start listening again, and then say my second command. If I stay silent for a couple of seconds, the conversation is done and I have to re-trigger Assistant.

Plus, Assistant treats every command separately. There’s no real ‘chat’ feeling, just a series of independent questions or commands and answers.

Interruptions, corrections, clarifications, idea continuity, topic changes — Gemini Live handles all of those.

Gemini Live works differently. Every session is a real open conversation, where I can talk back and forth for a while, and it still remembers everything that came before. So if I say I like Happy Endings and ask for similar TV show recommendations, I can listen in, then ask more questions, and it’ll keep in mind my preference for Happy Endings-like shows.

I can also interrupt it at any point in time and correct it if it misunderstood me or if the answer doesn’t satisfy me. I don’t have to manually scream at it to stop or wait for it as it drones on for two minutes with a wrong answer. I can also change the conversation topic in an instant or give it more precise questions if needed.

Plus, Gemini Live doesn’t shut off our chat after a few seconds of silence. So I can take a few seconds to properly assimilate the answer and think of other clarifications or questions to ask, you know, like a normal human, instead of a robot who has the follow-ups ready in a second.

Better yet, I can minimize Live and go use other apps while still keeping the chat going. I’ve found this excellent while browsing or chatting with friends. I can either invoke Live mid-browsing to ask questions and get clarifications about what I’m reading, or start a regular Live chat then pull up a browser to double check what Gemini is telling me.

3. TL;DR? Ask it for a summary


Credit: Rita El Khoury / Android Authority

As I mentioned earlier, every command is a separate instance for Google Assistant. Gemini Live considers an entire chat as an entity, which lets me do something I could never do with Assistant: ask for a summary.

So if I had a chat about places in Paris where I could run around and test the new Panorama mode on the Pixel 9 series, I can ask it for a summary at the end, and it’ll list all of them. This is incredibly helpful when trying to understand complex topics or get a list of suggestions, for example.

4. Want to talk more about a specific topic? Resume an older chat


Credit: Rita El Khoury / Android Authority

At one point, I opened Gemini Live and said something like, “Hey, can we continue our chat about Paris panorama photos?” And it said yes. I was a bit gobsmacked. So I went on, and it seemed to really know where we left off. I tried that again a few times, and it worked every time. Google Assistant just doesn’t have anything like this.

Another way to trigger this more reliably is to open Gemini, expand the full Gemini app, tap on Recents and open a previous chat. Tapping on the Gemini Live icon in the bottom right here allows you to continue an existing chat as if you never stopped it or exited it.

5. Check older chats and share them to Drive or Gmail


Credit: Rita El Khoury / Android Authority

Viewing my Google Assistant history has always been a convoluted process that requires going to my Google account, finding my personal history, and checking the last few commands I’ve done.

With Gemini, it’s so easy to open up previous Live chats and read everything that was said in them. Even better, every chat can be renamed, pinned to the top, or deleted in its entirety. Plus, every response can be copied, shared, or quickly exported to Google Docs or Gmail. This makes it easy for me to manage my Gemini Live data, delete what needs to be deleted, and share or save what I care about.

Google Assistant still has a (significant) leg up


Credit: Rita El Khoury / Android Authority

Despite everything Gemini Live does well, there are so many instances where I felt its limitations while using it. For one, the Live session is separate from the main Gemini experience, and Live only handles general knowledge questions, not personal data. So I can ask Gemini (not Live) about my calendar, send messages with it, start timers, check my Drive documents, control my smart home, and more, just as I could with Assistant, but I can’t do any of that with Gemini Live. The latter is more of a lively Google Search experience, and none of the regular Gemini extensions are accessible in Live. Google said it was working on bringing them over, though, and that is the most exciting prospect for me.

Gemini Live still doesn't have access to personal data, calendars, smart home, music services, etc...

Because of how it’s built and what it currently does, Gemini Live requires a constant internet connection and there’s nothing you can do without it. Assistant is able to handle some basic local commands like device controls, timers, and alarms, but Gemini Live can’t.

And for now, my experience with multiple language support in Gemini Live has been iffy at best — not that Assistant’s support of multiple languages is stellar, but it works. On my phone, which is set to English (US), Gemini Live understands me only when I speak in English. I can tell it to answer in French, and it will, but it won’t understand me or recognize my words if I start speaking French. I hope Google brings a more natural multilingual experience to it, because that could be life-changing for someone like me who thinks and talks in three languages at the same time.


Credit: Rita El Khoury / Android Authority

Logistically, my biggest issue with Gemini Live is that I can’t control it via voice yet. My “Hey Google” command opens up the main Gemini voice command interface, which is neat, but I need to manually tap the Live button to trigger a chat. And when I’m done talking, the chat doesn’t end unless I manually tap to end it. No amount of “thank you,” “that’s it,” “we’re done,” “goodbye,” or other words did the trick to end the chat. Only the red End button does.

Google Assistant was a stickler for sourcing every piece of info; Gemini Live doesn't care about sources.

Realistically, though, my biggest Gemini Live problem is that there’s no sourcing for any of the info it shares. Assistant used to be a stickler for sourcing everything; how many times have you heard it say something like, “According to [website],” or, “on the [website], they say…?” Gemini Live just states facts instead, with no immediate way to verify them. All I can do is end the chat, go to the transcript, and check for the Google button that appears below certain messages, which shows me related searches I can do to verify that info. Not very intuitive, Google, and not respectful to the millions of sites you’ve crawled to get your answer like, uh, I don’t know… Android Authority perhaps?


Asking for Gemini’s help on Android is getting a lot prettier

By: Rushil Agrawal
18 August 2024, 00:25

Google Gemini logo on smartphone stock photo

Credit: Edgar Cervantes / Android Authority

  • Google is rolling out a new floating overlay panel for Gemini on Android devices, featuring a subtle glow animation.
  • The panel allows Gemini responses to appear within the current app and enables a contextual understanding of on-screen content.
  • The update also includes a new “Ask about this video” chip that lets users ask questions about YouTube videos directly.


Google recently unveiled a series of exciting updates for its AI assistant, Gemini, during the Pixel 9 series launch event. While the introduction of Gemini Live mode stole the spotlight, Gemini is also getting a shiny new floating overlay panel for Android. (h/t: 9to5Google)

This new interface, currently being rolled out to Android users, features a visually pleasing glow animation that surrounds the panel whenever Gemini is activated. This subtle glow not only looks neat but is also a sign that your Gemini’s got a new trick up its sleeve: a contextual overlay that understands what you’re up to without taking over your whole screen.

This update was initially teased at the I/O 2024 conference in May and allows Gemini to deliver responses directly within the app you’re using rather than hijacking your entire screen. This design change aims to help users maintain focus on their current tasks while still benefiting from Gemini’s assistance. For those who prefer the more traditional, immersive experience, a quick tap on the top-right corner of the overlay will still expand it to full screen.

The update also includes a new “Ask about this video” chip, replacing the previous “Ask about this screen” prompt, which appears when Gemini is triggered on YouTube videos. This feature allows users to request summaries or pose follow-up questions about the video’s content.

As for the glowing floating overlay, it’s still in the process of rolling out, so if you haven’t seen it yet, hang tight. Google says it’ll be hitting more Android devices in the coming weeks, both for regular Gemini users and those with Gemini Advanced subscriptions.

All in all, these updates are setting the stage for a more seamless and engaging Gemini experience. If you’re an Android user, keep an eye out for that glow.

Qualcomm AI/Copilot PCs don’t live up to the hype

18 June 2024, 13:00

Qualcomm's new AI/Copilot PCs have overwhelming hype, but the company's actions point to a far murkier picture.
Read more


The post Qualcomm AI/Copilot PCs don’t live up to the hype appeared first on SemiAccurate.


Language Models vs. The SAT Reading Test

By: jeffq
20 February 2023, 16:32

tl;dr FLAN-T5 (11B) scored identically to GPT-3.5 (text-davinci-003) across the ten publicly available SAT Reading Tests. A finetuned 3B model scored within 7 percentage points of GPT-3.5 on held-out tests with 98% fewer parameters while maintaining generalization.

Models: base, large, xl, xxl Dataset: HuggingFace Code: GitHub

After working on literAI I’ve been interested in further exploring language models from a narrative/literary perspective. One question I had was “how well do these models actually ‘understand’ longer prose?”

Now, it just so happens that there’s a test we make teenagers take every year to determine this very fact! That is, the SAT (specifically, the Reading part).

The SAT Reading Test, despite its name, is multimodal. There is always one section that includes a combination of charts, tables, and graphs. However, the questions are clearly delineated — typically only three questions on the test reference the data. For the purposes of evaluation I excluded these questions. First, the results.

Data

FLAN-T5 11B scored identically to GPT-3.5, despite being less than 1/10th the size! It can also be run on a consumer GPU (<= 24 GB) when loaded in 8-bit inference mode! This offers further data supporting the hypothesis that Google did the open source local compute LM community a great service when it released FLAN-T5.
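For readers who want to reproduce the setup, here is a minimal sketch of loading FLAN-T5 XXL in 8-bit and asking it one SAT-style question. It assumes the public google/flan-t5-xxl checkpoint and the Hugging Face transformers, accelerate, and bitsandbytes libraries; the prompt format is illustrative, not necessarily the exact one used in the evaluation.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "google/flan-t5-xxl"  # 11B parameters
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_id,
    device_map="auto",   # let accelerate place the layers
    load_in_8bit=True,   # bitsandbytes 8-bit weights, fits in <= 24 GB of VRAM
)

passage_text = "..."  # full SAT passage, with line breaks preserved as on the test
prompt = (
    f"Passage:\n{passage_text}\n\n"
    'Question: As used in line 93, "becoming" most nearly means\n'
    "A) emerging.\nB) fitting.\nC) developing.\nD) happening.\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```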


One interesting aspect of the SAT Reading Test is that 30% of the questions reference specific lines within the passage under consideration.

Which choice best supports the conclusion that
Mr. Peters wants to attract attention?

A) Lines 80-81 (“Apparently… change”)
B) Lines 81-85 (“He straightened… hand”)
C) Lines 90-91 (“The young . . . Mr. Peters”)
D) Lines 91-93 (“He was… forty-five”)

SAT Practice Test #5 Question #9

As used in line 93, “becoming” most nearly means

A) emerging.
B) fitting.
C) developing.
D) happening.

SAT Practice Test #5 Question #10

This means that to properly answer the question, the LM needs to be able to count lines in the presented passage and reason about them explicitly in the context of the passage itself. The dataset I created faithfully represents the line breaks as they appear on the test. What it doesn’t contain is the extra line count helper column that appears next to the passage. For example, here is a snippet of what a passage on the actual test looks like:

SAT Practice Test #5 Passage #1

Note the italicized Line and counter, which appears every five lines. Even the regular passages are multimodal! While it’s certainly just text, communicating it requires more than presenting it merely as a sequence of characters. To see how the models performed on this type of question, I took a look at how the best open source model (FLAN-T5) scored on the two question classes.

FLAN-T5 scored between 5-13% worse on the “line number” questions than it did on the other questions on the test. Could the model just need a little help counting?
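Here is a minimal sketch of how that split can be computed; the results rows and their field names are illustrative, and the regex heuristic for spotting “line number” questions is an assumption rather than the exact classifier used for the published numbers.

```python
import re

# Illustrative rows; real rows would come from the evaluation run.
results = [
    {"question": 'As used in line 93, "becoming" most nearly means', "correct": True},
    {"question": "The main purpose of the passage is to", "correct": False},
]

def is_line_number_question(question: str) -> bool:
    # Matches e.g. "line 93" or "Lines 80-81" in the question text.
    return re.search(r"\blines?\s+\d+", question, flags=re.IGNORECASE) is not None

def accuracy(rows) -> float:
    return sum(r["correct"] for r in rows) / len(rows) if rows else float("nan")

line_qs = [r for r in results if is_line_number_question(r["question"])]
other_qs = [r for r in results if not is_line_number_question(r["question"])]

print(f"line-number questions: {accuracy(line_qs):.2%} over {len(line_qs)} questions")
print(f"other questions:       {accuracy(other_qs):.2%} over {len(other_qs)} questions")
```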

To test this theory I finetuned each of the FLAN-T5 models on eight of the ten practice tests, leaving the remaining two tests for validation. An especially huge thanks is in order to Philipp Schmid for his excellent blog posts on finetuning FLAN-T5.
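The finetuning setup looks roughly like the sketch below, loosely following Philipp Schmid's Seq2SeqTrainer recipes for FLAN-T5. The dataset ID, column names, and hyperparameters here are illustrative assumptions, not the exact values used for the released checkpoints.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_id = "google/flan-t5-xl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical dataset ID: eight practice tests in "train", two held out in "validation".
dataset = load_dataset("sat-reading")

def preprocess(batch):
    # "prompt" holds the passage plus question, "answer" the correct choice.
    model_inputs = tokenizer(batch["prompt"], max_length=2048, truncation=True)
    labels = tokenizer(text_target=batch["answer"], max_length=8, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="flan-t5-xl-sat-reading",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=1e-4,
        num_train_epochs=3,
        bf16=True,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```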

The models themselves are available here: base, large, xl, xxl. Three of the four finetuned models outscored the original models, with the XL model showing the largest gain. Of particular interest is the XL model, which is within seven percentage points of GPT-3.5 while having 98% (!!!) fewer parameters (3B vs. 175B).

One problem with aggressive finetuning on small datasets is overfitting or loss of generalization. Do the finetuned models still perform as well as the original models on unseen tasks? To test this I ran the finetuned models on a subset of the SuperGLUE metrics.

Metric       XXL PT  XXL FT  XL PT  XL FT  Large PT  Large FT  Base PT  Base FT
cb gpt        0.87    0.83   0.83   0.83    0.76      0.71      0.82     0.82
copa c1/c2    0.95    0.91   0.95   0.90    0.83      0.82      0.57     0.55
rte gpt       0.89    0.90   0.85   0.87    0.87      0.84      0.79     0.80
wic gpt       0.68    0.68   0.71   0.72    0.62      0.61      0.48     0.48
wsc gpt       0.76    0.77   0.73   0.75    0.66      0.61      0.45     0.46
Data

The above table represents only a few of the hundreds of metrics run — see the data for full results. They are, however, representative; the finetuned (FT) models maintain the same generalization capabilities as the pre-trained (PT) versions! It may be that the finetuned models are (by this limited measure) “better” than the originals since they score higher on the SAT Reading Test while maintaining zero-shot unseen task performance.

In conclusion, FLAN-T5 continues to show itself as a powerful model, both in its raw reasoning capabilities relative to closed source models and in its ability to quickly learn new skills through finetuning — not to mention its accessibility on consumer-grade hardware. ty google


literAI: AI-generated open source visual podcasts

By: jeffq
2 February 2023, 14:35

Demo: https://literai.hooloovoo.ai/ Source: Generator, UI

At my previous job I did some shader programming, and generally tinkered around with GPU workloads, and even had the chance to attend Nvidia’s GPU Technology Conference a few times. I remember in 2018 or so being surprised that more and more of the conversation in this area was being dominated by these things called “deep neural networks”. During my CS studies I was focused on cryptography, but I was curious what all this was about and took an early version of Udacity’s Deep Learning Nanodegree (don’t laugh!)

The class was actually fairly insightful — making you learn about backpropagation, etc. from scratch and took you through the motions of the classic MNIST classification tasks and so forth. It ended with doing face generation using these fancy things called convolutional neural networks.

Some randomly generated faces created by a
deep convolutional generative adversarial network I made as part of my #udacity course. Not super practical, but still eminently cool

P.S. Twitter asks "Who's in these photos?" when I upload them. The dreams of electric sheep, Twitter. pic.twitter.com/Tf6iAWHEl8

— emozilla (@theemozilla) July 8, 2018
such fidelity, much wow

Neat, but still felt a bit gadget-y to me. Like every nerd I assumed that someday humanity would develop “artificial” intelligence, but at the time it didn’t seem like such a thing was imminent.

Of course, then came Stable Diffusion and ChatGPT.


When I want to learn something technical, I need to be able to tinker with it. Let me get into VS Code, get something working locally, something I can step into as deep as I want to. And then it’s just, you know, messing around with it.

this is not an exaggeration

Over the past six months I’ve been deep-diving the latest AI advancements, tinkering as I go (I recommend the excellent Neural Networks from Scratch book to get jump-started). A few projects I wrote along the way were txt2imghd and transformers-openai-api.

One pain point I kept hitting is that it seemed like the coolest stuff was all behind an API, instead of being openly accessible. Don’t get me wrong — I probably spent more money on GPU time to run open models than if I’d just paid the damn API costs, and I don’t begrudge companies trying to, you know, actually make money — but whenever I wanted to tinker the best stuff required carefully rate limited API calls. I wanna do dumb shit in a tight for loop without the fear of a gazillion dollar bill!


One night while perusing the latest arXiv posts I came across SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization, which used research into knowledge graphs to generate prompts for text-davinci-003 (the model behind ChatGPT) to create a large dataset of synthetic dialogues along with the accompanying semantic information (e.g. the intent of one of the speakers). This dataset was then used to fine-tune the open source T5 language model from Google to create COSMO, a model that can generate realistic sounding human dialogues.

I spend a fair amount of time listening to audiobooks and podcasts, and this got me thinking about potential applications. Could a podcast about a novel be generated by a model like COSMO? (As part of my research I contributed some SODA data into Open Assistant, a project to create an open source ChatGPT). Furthermore, could it be done using consumer-grade hardware, i.e. not on an A100?

Lo and behold, yacine had similar inklings and while I was working on my project released scribepod, powered by the 900-pound-gorilla that is text-davinci-003. This was partial vindication — yes, it could be done — but also somewhat deflating since it meant it would need to be tethered to an API.

Or must it be? COSMO can make the dialogue — but it needs some information on what to say. The critical task here is summarization; taking the raw novel text and distilling it into meaningful pieces that can be used as context when prompting the dialogue generating LM. Peter Szemraj has been doing fantastic open source work in this space, and I decided to use his long-t5-tglobal-xl-16384-book-summary model (again a fine-tuning of T5 — are we noticing a pattern here? Thanks Google!!!)
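In practice the summarization step can be as small as the sketch below, using the transformers summarization pipeline. I am assuming the model is published under the pszemraj/ namespace on the Hugging Face Hub, and the generation settings are illustrative.

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-xl-16384-book-summary",  # assumed Hub ID
    device_map="auto",
)

def summarize_chapter(chapter_text: str) -> str:
    # The model accepts up to ~16k tokens, so a chapter usually fits in a single pass.
    result = summarizer(
        chapter_text, max_length=512, min_length=64, no_repeat_ngram_size=3
    )
    return result[0]["summary_text"]

print(summarize_chapter("No one would have believed in the last years of the "
                        "nineteenth century that this world was being watched..."))
```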

Okay, so I had an open source way to summarize text and generate dialogue. How about a bit of flair? Given the incredible results that diffusion models have had in image generation, I wanted to leverage these to give the podcast some imagery. My idea was a player for the podcast that would scroll between images generated from descriptions of the scene that the podcast participants were talking about. To do this, I needed to automatically generate prompts to Stable Diffusion models (Greg Rutkowski here we come).

The ChatGPT-solves-everything answer is to simply few-shot it with some examples of what you’d like using something like LangChain and let those 175 billion parameters work their magic. To maintain our open source purity I chose FLAN-T5 (paper; model), the instruction-tuned version of T5. FLAN-T5 produced very good, although admittedly inferior, results. Alas, such is the price we must pay (or not pay in this case).
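As a rough illustration of that step, a few-shot prompt to FLAN-T5 might look like the following; the instruction wording and example scene are mine, not taken from the literAI code.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "google/flan-t5-xl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, device_map="auto")

FEW_SHOT = (
    "Write a short visual description suitable for an image generator.\n\n"
    "Scene: The narrator watches a cylinder unscrew itself on Horsell Common at night.\n"
    "Image prompt: a huge metal cylinder half-buried in a sand pit at night, "
    "onlookers holding lanterns, eerie green glow\n\n"
    "Scene: {scene}\n"
    "Image prompt:"
)

scene = "Martian fighting machines wade across the Thames toward London."
inputs = tokenizer(FEW_SHOT.format(scene=scene), return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```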

Once the image descriptions were created it was simply the matter of generating a prompt and letting a Stable Diffusion model like Dreamlike Diffusion do the rest!
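The image step itself might look something like this with the diffusers library; dreamlike-art/dreamlike-diffusion-1.0 is my assumption for the Hub ID of the Dreamlike Diffusion checkpoint mentioned above, and the prompt is just an example output from the previous step.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "dreamlike-art/dreamlike-diffusion-1.0",  # assumed Hub ID for Dreamlike Diffusion
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "martian fighting machines wading across the Thames toward London, "
    "dramatic lighting, highly detailed",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("war_of_the_worlds_scene.png")
```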

Images generated for H. G. Wells’ “The War of the Worlds”

The final piece was to make actual audio. I cribbed yacine’s use of TorToiSe, and at last the amalgamation was complete — literAI was born! You can try out the visual player here.


I’ll save my poetic waxing about AI for another time. Rather, I’d like to simply appreciate the work of the countless researchers who contributed to getting us to the current SOTA. It’s frankly bewildering. I’m looking forward to where we’re going — and being a builder of it along the way.


Turn On, Tune In, Boot Up! For MozFest 2023:

By: ed523
5 March 2023, 02:53

AI-Musement Park and MONDO Vanilli’s Blockchain Busting Musical Experience “R.U. Cyber.. R.U. Against NFTs?”

Immediate release from: 03/03/2023

“AI-Musement Park comprises a cornucopia of performances / talks / happenings / documentary & discussion about AI, Intelligences, technocapitalism’s more than pressing-ongoing urgencies.”
-Eleanor Dare, Cambridge University & AI-Musement Park

R.U. Cyber.. R.U. Against NFTs? An original AI-Musement Park, PlayLa.bZ & MONDO 2000 History Project human and machine learning co-creation, taking the perspective of an AI that is training itself on the R.U. Sirius & MONDO Vanilli ‘I’m Against NFT’s’ song lyrics, exploring a surreal, mind melting and multi-dimensional 360 world of paradoxes and conflicting rules.

“Mondo Vanilli was originally intended to be a virtual reality band exploding all assumptions about property and propriety in the 1990s. Today fabrication becomes de rigueur as the connection to the real is intentionally confused by the banal political tricksters of power and profitability… while storms pound our all-too-human bodies and communities. I am thrilled to finally see MONDO Vanilli in its appropriate context. Immersive. Come play in the simulacra one more time”
-R.U. Sirius, MONDO 2000

R.U. Cyber.. R.U. Against NFTs? Is a satirical, irreverent block-chain busting commentary on the propaganda relations fueled ‘Web 3’ hype around non-fungible tokens and the broader issues that underpin our algorithmically massaged hyper-connected infinite scrolls and trolls age. Challenging our assumptions about the nature of technology, creativity, and value, reminding us that the digital world is shaped by powerful forces that determine what is valued and what is not, and a click is not always for free.

Join Us! On Spring Solstice 2023 For “R.U. Cyber? :// Mondo 2000 History Project Salon”
at MozFest Virtual Plaza & Mozilla Hubs: AI-Musement Park
20th March / 8.30pm EU / GMT


About R.U.Sirius & Mondo 2000 #Mondo2000 #RUSirius

R.U. Sirius is an American writer, editor, and media pioneer, known as one of the key figures of the psychedelic and cyberpunk movements. He is best known as Mondo 2000’s editor-in-chief and for being at the forefront of the 1990s underground cyberculture movement.

About Mozilla Festival #TrustworthyAI #AIMusementPark

Since 2010, MozFest has fueled the movement to ensure the internet benefits humanity, rather than harms it. This year, your part in the story is critical to our community’s mission: a better, healthier internet and more Trustworthy AI.

About PlayLa.bZ CIC #PlayLabZ #SpatialCadetZ

Co-founded by PsychFi, FreekMinds & Squire Studios we’re a next generation multipotentiality multi-award-winning, multi-dimensional motion arts experience design laboratory, developing DIY changemaking createch immersive experiences & software applications for social good storycraft. Supporters & Friends: Mozilla Festival, Jisc: Digifest, Beyond Games, Tate Modern, Furtherfield, Boomtown Festival, Sci-Fi-London, Ravensbourne University London, UAL, East London Dance, NESTA, Modern Panic, ArtFutura, Kimatica, National Gallery X, Kings College London, Looking Glass Factory, SubPac, Ecologi, The JUMP, BOM Labs, Mondo 2000

PR Contact: James E. Marks, Tel: 07921 523438 @: jem@playla.bz Twitter: @GoGenieMo

The post Turn On, Tune In, Boot Up! For MozFest 2023: appeared first on Mondo 2000.


Gemini AI coming to Nest cameras

6 August 2024, 18:21

There have so far been very few AI innovations that have me excited, but Google has claimed they’re bringing Gemini AI to your Nest cameras to give you the ability to query them for events, and be alerted more intelligently.

The examples given were being alerted that the dog is digging in the garden and asking whether the kids left their bikes in the driveway, but the implications of what this could do are by far the most interesting AI use case I’ve encountered.

One can imagine a near future where you can say Hey Google, what was my kid wearing when I took her to school? When did a dog poop in my front yard? Where did I leave my keys? Did I come in with a coat last night? And many other things and get actually useful answers.

As an exhausted parent a few years ago I would have killed sometimes to know what my kiddos were wearing when we went to school so I would know whether or not I was getting everything when I picked them up.

Just knowing when people came and went from your house has an amazing array of useful (and some scary) uses as I live with two tiny amnesiacs who can’t remember which one did a chore, or what day their friend came over, or who left the door open, etc. I ended up putting cameras on the doors to try and solve some of this and by golly Gemini might be able to make them even more useful.

There are a host of other enhancements, including a redesigned Google Assistant for the Nest devices, but I’ll hold my excitement for that until it rolls out.

Now if they could just add a walkie-talkie or intercom feature that is NOT the broadcast feature, so that you don’t have to scream to be heard across a house when there’s a device right there, we’d be talking major advancement.

Hell, I’d let it listen in to our conversations at this point just so I could show my kids a transcript of them saying they would take the damned trash out.

[Google Blog]

Gemini AI coming to Nest cameras by Paul E King first appeared on Pocketables.


Dear Taylor Swift: There Are Better Ways To Respond To Trump’s AI Images Of You Than A Lawsuit

By: Dark Helmet
21 August 2024, 05:04

We’ve written a ton about Taylor Swift’s various adventures in intellectual property law and the wider internet. Given her sheer popularity and presence in pop culture, that isn’t itself particularly surprising. What has been somewhat interesting about her as a Techdirt subject, though, has been how she has straddled the line between being a victim of overly aggressive intellectual property enforcement as well as being a perpetrator of the same. All of this is to say that Swift is not a stranger to negative outcomes in the digital realm, nor is she a stranger to being the legal aggressor.

Which is why the point of this post is to be something of an open letter to Her Swiftness to not listen to roughly half the internet that is clamoring for her to sue Donald Trump for sharing some AI-generated images on social media falsely implying that Swift had endorsed him. First, the facts.

Taylor Swift has yet to endorse any presidential candidate this election cycle. But former President Donald Trump says he accepts the superstar’s non-existent endorsement.

Trump posted “I accept!” on his Truth Social account, along with a carousel of (Swift) images – at least some of which appear to be AI-generated.

One of the AI-manipulated photos depicts Swift as Uncle Sam with the text, “Taylor wants you to vote for Donald Trump.” The other photos depict fans of Swift wearing “Swifties for Trump” T-shirts.

As the quote notes, not all of the images were AI generated “fakes.” At least one of them was from a very real woman, who is very much a Swift fan, wearing a “Swifties for Trump” shirt. There is likewise a social media campaign for supporters from the other side of the aisle, too, “Swifties for Kamala”. None of that is really much of an issue, of course. But the images shared by Trump on Truth Social implied far more than a community of her fans that also like him. So much so, in fact, that he appeared to accept an endorsement that never was.

In case you didn’t notice, immediately below that top left picture is a label that clearly marks the article and associated images as “satire.” The image of Swift doing the Uncle Sam routine to recruit people to back Trump is also obviously not something that came directly from Swift or her people. In fact, while she has not endorsed a candidate in this election cycle (more on that in a moment), Swift endorsed Biden in 2020 with some particularly biting commentary around why she would not vote for Trump.

Now, Trump sharing misleading information on social media is about as newsworthy as the fact that the sun will set tonight. But it is worth noting that social media exploded in response, with a ton of people online urging Swift to “get her legal team involved” or “sue Trump!” And that is something she absolutely should not do. Some outlets have even suggested that Swift should sue under Tennessee’s new ELVIS Act, which both prohibits the use of people’s voice or image without their authorization, and which has never been tested in court.

Trump’s post might be all it takes to give Swift’s team grounds to sue Trump under Tennessee’s Ensuring Likeness Voice and Image Security Act, or ELVIS Act. The law protects against “just about any unauthorized simulation of a person’s voice or appearance,” said Joseph Fishman, a law professor at Vanderbilt University.

“It doesn’t matter whether an image is generated by AI or not, and it also doesn’t matter whether people are actually confused by it or not,” Fishman said. “In fact, the image doesn’t even need to be fake — it could be a real photo, just so long as the person distributing it knows the subject of the photo hasn’t authorized the use.”

Please don’t do this. First, it probably won’t work. Suing via an untested law that is very likely to run afoul of First Amendment protections is a great way to waste money. Trump also didn’t create the images, presumably, and is merely sharing or re-truthing them. That’s going to make holding him liable for them a challenge.

But the larger point here is that all Swift really has to do here is respond, if she chooses, with her own political endorsement or thoughts. It’s not as though she didn’t do so in the last election cycle. If she’s annoyed at what Trump did and wants to punish him, she can solve that with more speech: her own. Hell, there aren’t a ton of people out there who can command an audience that rivals Donald Trump’s… but she almost certainly can!

Just point out that what he shared was fake. Mention, if she wishes, that she voted against him last time. If she likes, she might want to endorse a different candidate. Or she can merely leave it with a biting denial, such as:

“The images Donald Trump shared implied that I have endorsed him. I have not. In fact, I didn’t authorize him to use my image in any way and request that he does not in the future. On the other hand, Donald Trump has a history of not minding much when it comes to getting a woman’s consent, so I won’t get my hopes up too much.”

Princess Maker 2 Regeneration launched on PS4 & PS5 today! (Press Release)

August 8, 2024 – To commemorate the launch of Princess Maker 2 Regeneration on PS4 and PS5 today, a launch trailer has been released.

Join us on a journey as we witness a daughter's growth, shaped by the choices you make as her father. While you'll guide her path with study schedules and part-time jobs, it's the unexpected encounters that will truly define her.

About “Princess Maker 2”

“Princess Maker 2” is a childrearing simulation game in which the player experiences being the father to a daughter granted to them by the stars.

They raise the girl for eight years, from ages ten to eighteen. Your daughter grows up to be an adult through various experiences in the game. The girl’s dream is to become a princess, but a wide variety of opportunities await depending on how you raise her. What kind of dream will you make come true for this girl?

Redrawing the graphics

This title is based on “Princess Maker 2: Refine,” released in 2004. Especially important graphics were redrawn by Takami Akai in a style similar to the PC-98 version. The graphics are drawn in high resolution befitting modern game systems, with a commitment to quality in the details.

Addition of an opening animation

An opening animation by Yonago Gainax has been added to the game. The animation, drawn by a team led by Takami Akai, envisions the future the player will have as the “father” with their “daughter”.

Knowing how your daughter is doing is the first step in raising her

This title is a social simulation, so it is very important to always be aware of your daughter's status and use this to raise her.

In “Regeneration”, parameters for assessing your daughter's status are always available, so you can check on her at a glance. Check what you should do for your daughter’s future, and raise her carefully.

Message from Takami Akai

I am deeply grateful to see that “Princess Maker 2”, which came out 30 years ago, is still so beloved by so many fans that we can release a new version. This time I was finally able to redo the graphics, which I had always wanted to do. Please enjoy the newly redrawn vacations and endings.

About Takami Akai

Born 1961. While still enrolled at Osaka University of Arts, he made his debut with the DAICON III Opening Animation. As one of the founding members of GAINAX Co., Ltd., he has worked on anime and tokusatsu shows, as well as games and events. His most noteworthy works include the "Princess Maker" series, Gurren Lagann, and Dai Tokusatsu Negiman.

  • Release Date: July 11, 2024 (Nintendo Switch, Steam); August 8, 2024 (PS5, PS4)
  • Platforms: Nintendo Switch, PS5, PS4, Steam
  • Languages: English, Japanese, Korean, Chinese (Traditional/Simplified)

About Bliss Brain

Bliss Brain Corporation is a Japanese game publisher. Bliss Brain revives high quality games for the modern day and distributes them to the world in downloadable format. Currently, they offer "Princess Maker 2 Refine", "Princess Maker ~Faery Tales Come True~", and "Princess Maker 5" for download on Steam. "Wonder Boy Collection" (not yet released in Japan) will be released for the first time on PS4 and Nintendo Switch. Bliss Brain plans to release more new games in the future.

For details, visit https://blissbrain.co.jp.

About Yonago GAINAX

Takami Akai started this entertainment company in 2014, in his hometown of Yonago City, Tottori Prefecture. It does business based on the theme of contributing to the local community. With a small team of just 10 members, it works on unique jobs, including anime, games, and events.

For details, visit https://yonago-gainax.co.jp/.


Deadpool Samurai Returns In the Funniest Way Imaginable

By: Anna Williams
14 August 2024, 19:13

With the successful release of Deadpool & Wolverine, fans of the infamous anti-hero may want to find more adventures featuring Wade Wilson, and Sanshiro Kasama, the mangaka behind Deadpool Samurai, is here to deliver.

Though relatively underrated, Deadpool Samurai is a spin on the Merc with a Mouth by taking him on various adventures, some of which feature icons of Shonen Jump manga, like All Might from My Hero Academia. While it seemed as though the manga was over, Deadpool made an unexpected appearance in a brand-new Jump Plus manga.

The Merc With a Mouth’s Return To Manga

Deadpool announcing his return to the Viz Media app in Deadpool Samurai

Viz Media initially announced the project as completely unrelated to the popular Marvel character: Sanshiro Kasama would be launching a new series on their online service titled Secret Steward, a relatively standard rom-com following a girl and her butler. When the series finally launched, though, it majorly deviated from what potential fans might have expected.

Secret Steward promotional art of the two main characters in full color

At the end of Secret Steward‘s debut chapter, the main character is struck by a giant truck in classic isekai intro style, instantly killing him. The driver who committed this heinous and gory act is no stranger, though, with Deadpool proudly sitting behind the wheel, announcing the return of Deadpool Samurai to the manga reading service.

While unexpected, the twist was insanely clever – and was even partially set up, with Secret Steward‘s description being “Love hits you when you least expect it!” This type of return is rare, too, with many manga series returning without playing any sort of tricks on their readers. Regardless, it was the most Deadpool approach possible, and gave the anti-hero the opportunity to garner some new fans, too – if they weren’t too put off by the gore.


Exploring the challenges of AI-generated art in game development

By: Sophie McEvoy

Speaking at Devcom today, Red Meat Games creative content strategist Judy Ehrentraut discussed the importance of ethically training generative AI models, and how certain tools can be utilised during game development.

"AI is the biggest buzzword and it's either hyped as the new way to solve every productivity problem, or it's received with a groan," Ehrentraut acknowledged.

"I think both of these takes are really valuable, because disruptive technologies are not necessarily good or bad. It depends on how they are used, and it really depends on the approach that we take and whether it's ethical or not."

Read more


What are the legal risks of using generative AI in games development?

By: Andrew Velzen

Generative artificial intelligence, or GenAI, presents many opportunities in the gaming industry.

Many of this year's biggest developer events, including GDC, have been awash with companies touting the use of GenAI to create dozens of maps/levels, improve development workflows, perform QA tasks, or even respond directly to in-game actions from a player. Additionally, big industry players are actively developing hardware to support the use of GenAI, such as NVIDIA's unveiling of the SUPER series of GPUs at the start of the year.

Given all this, it is easy to imagine a not-so-distant future where GenAI plays a substantial role in the development and/or gameplay of most games. Among all the excitement, though, there are also legal risks posed when using GenAI in gaming. Publishers will need to address these stumbling blocks before fully integrating GenAI.

Read more


Ansys SimAI Software Predicts Fully Transient Vehicle Crash Outcomes

By: Ansys
20 August 2024, 20:09


The Ansys SimAI™ cloud-enabled generative artificial intelligence (AI) platform combines the predictive accuracy of Ansys simulation with the speed of generative AI. Because of the software’s versatile underlying neural networks, it can extend to many types of simulation, including structural applications.
This white paper shows how the SimAI cloud-based software applies to highly nonlinear, transient structural simulations, such as automobile crashes, and includes:

  • Vehicle kinematics and deformation
  • Forces acting upon the vehicle
  • How it interacts with its environment
  • How understanding the changing and rapid sequence of events helps predict outcomes

These simulations can reduce the potential for occupant injuries and the severity of vehicle damage and help understand the crash’s overall dynamics. Ultimately, this leads to safer automotive design.

Download this free whitepaper now!


Video Friday: Silly Robot Dog Jump

By: Evan Ackerman
16 August 2024, 18:45


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

The title of this video is “Silly Robot Dog Jump” and that’s probably more than you need to know.

[ Deep Robotics ]

It’ll be great when robots are reliably autonomous, but until they get there, collaborative capabilities are a must.

[ Robust AI ]

I am so INCREDIBLY EXCITED for this.

[ IIT Instituto Italiano di Tecnologia ]

In this 3-minute-long one-take video, the LimX Dynamics CL-1 takes on the challenge of continuously loading heavy objects among shelves in a simulated warehouse, showcasing the advantages of the general-purpose form factor of humanoid robots.

[ LimX Dynamics ]

Birds, bats and many insects can tuck their wings against their bodies when at rest and deploy them to power flight. Whereas birds and bats use well-developed pectoral and wing muscles, how insects control their wing deployment and retraction remains unclear because this varies among insect species. Here we demonstrate that rhinoceros beetles can effortlessly deploy their hindwings without necessitating muscular activity. We validated the hypothesis using a flapping microrobot that passively deployed its wings for stable, controlled flight and retracted them neatly upon landing, demonstrating a simple, yet effective, approach to the design of insect-like flying micromachines.

[ Nature ]

Agility Robotics’ CTO, Pras Velagapudi, talks about data collection, and specifically about the different kinds we collect from our real-world robot deployments and generally what that data is used for.

[ Agility Robotics ]

Robots that try really hard but are bad at things are utterly charming.

[ University of Tokyo JSK Lab ]

The DARPA Triage Challenge unsurprisingly has a bunch of robots in it.

[ DARPA ]

The Cobalt security robot has been around for a while, but I have to say, the design really holds up—it’s a good looking robot.

[ Cobalt AI ]

All robots that enter elevators should be programmed to gently sway back and forth to the elevator music. Even if there’s no elevator music.

[ Somatic ]

ABB Robotics and the Texas Children’s Hospital have developed a groundbreaking lab automation solution using ABB’s YuMi® cobot to transfer fruit flies (Drosophila melanogaster) used in the study for developing new drugs for neurological conditions such as Alzheimer’s, Huntington’s and Parkinson’s.

[ ABB ]

Extend Robotics are building embodied AI enabling highly flexible automation for real-world physical tasks. The system features an intuitive immersive interface enabling tele-operation, supervision, and AI model training.

[ Extend Robotics ]

The recorded livestream of RSS 2024 is now online, in case you missed anything.

[ RSS 2024 ]


Procreate defies AI trend, pledges “no generative AI” in its illustration app

By: Benj Edwards
20 August 2024, 18:52

Still of Procreate CEO James Cuda from a video posted to X. (credit: Procreate)

On Sunday, Procreate announced that it will not incorporate generative AI into its popular iPad illustration app. The decision comes in response to an ongoing backlash from some parts of the art community, which has raised concerns about the ethical implications and potential consequences of AI use in creative industries.

"Generative AI is ripping the humanity out of things," Procreate wrote on its website. "Built on a foundation of theft, the technology is steering us toward a barren future."

In a video posted on X, Procreate CEO James Cuda laid out his company's stance, saying, "We’re not going to be introducing any generative AI into our products. I don’t like what’s happening to the industry, and I don’t like what it’s doing to artists."



AMD signs $4.9 billion deal to challenge Nvidia’s AI infrastructure lead

By: Financial Times
19 August 2024, 22:32
Visitors walk past the AMD booth at the 2024 Mobile World Congress

(credit: CFOTO/Future Publishing via Getty Images)

AMD has agreed to buy artificial intelligence infrastructure group ZT Systems in a $4.9 billion cash and stock transaction, extending a run of AI investments by the chip company as it seeks to challenge market-leader Nvidia.

The California-based group said the acquisition would help accelerate the adoption of its Instinct line of AI data center chips, which compete with Nvidia’s popular graphics processing units (GPUs).

ZT Systems, a private company founded three decades ago, builds custom computing infrastructure for the biggest AI “hyperscalers.” While the company does not disclose its customers, the hyperscalers include the likes of Microsoft, Meta, and Amazon.



Why we’re taking a problem-first approach to the development of AI systems

By: Ben Garside
6 August 2024, 13:02

If you are into tech, keeping up with the latest updates can be tough, particularly when it comes to artificial intelligence (AI) and generative AI (GenAI). Sometimes I admit to feeling this way myself, however, there was one update recently that really caught my attention. OpenAI launched their latest iteration of ChatGPT, this time adding a female-sounding voice. Their launch video demonstrated the model supporting the presenters with a maths problem and giving advice around presentation techniques, sounding friendly and jovial along the way. 

A finger clicking on an AI app on a phone.

Adding a voice to these AI models was perhaps inevitable as big tech companies try to compete for market share in this space, but it got me thinking, why would they add a voice? Why does the model have to flirt with the presenter? 

Working in the field of AI, I’ve always seen AI as a really powerful problem-solving tool. But with GenAI, I often wonder what problems the creators are trying to solve and how we can help young people understand the tech. 

What problem are we trying to solve with GenAI?

The fact is that I’m really not sure. That’s not to suggest that I think GenAI hasn’t got its benefits — it does. I’ve seen so many great examples in education alone: teachers using large language models (LLMs) to generate ideas for lessons, to help differentiate work for students with additional needs, and to create example answers to exam questions for their students to assess against the mark scheme. Educators are creative people, and whilst it is cool to see so many good uses of these tools, I wonder whether the developers had specific problems in mind while creating them, or whether they simply hoped that society would find a good use somewhere down the line.

An educator points to an image on a student's computer screen.

Whilst there are good uses of GenAI, you don’t need to dig very deeply before you start unearthing some major problems. 

Anthropomorphism

Anthropomorphism relates to assigning human characteristics to things that aren’t human. This is something that we all do, all of the time, without it having consequences. The problem with doing this with GenAI is that, unlike an inanimate object you’ve named (I call my vacuum cleaner Henry, for example), chatbots are designed to be human-like in their responses, so it’s easy for people to forget they’re not speaking to a human. 

A photographic rendering of a smiling face emoji seen through a refractive glass grid, overlaid with a diagram of a neural network.
Image by Alan Warburton / © BBC / Better Images of AI / Social Media / CC-BY 4.0

As feared, since my last blog post on the topic, evidence has started to emerge that some young people are showing a desire to befriend these chatbots, going to them for advice and emotional support. It’s easy to see why. Here is an extract from an exchange between the presenters at the ChatGPT-4o launch and the model:

ChatGPT (presented with a live image of the presenter): “It looks like you’re feeling pretty happy and cheerful with a big smile and even maybe a touch of excitement. Whatever is going on? It seems like you’re in a great mood. Care to share the source of those good vibes?”
Presenter: “The reason I’m in a good mood is we are doing a presentation showcasing how useful and amazing you are.”
ChatGPT: “Oh stop it, you’re making me blush.” 

The Family Online Safety Institute (FOSI) conducted a study looking at the emerging hopes and fears that parents and teenagers have around GenAI.

One teenager quoted in the study said:

“Some people just want to talk to somebody. Just because it’s not a real person, doesn’t mean it can’t make a person feel — because words are powerful. At the end of the day, it can always help in an emotional and mental way.”  

The prospect of teenagers seeking solace and emotional support from a generative AI tool is a concerning development. While these AI tools can mimic human-like conversations, their outputs are based on patterns and data, not genuine empathy or understanding. The ultimate concern is that this exposes vulnerable young people to be manipulated in ways we can’t predict. Relying on AI for emotional support could lead to a sense of isolation and detachment, hindering the development of healthy coping mechanisms and interpersonal relationships. 

A photographic rendering of a simulated middle-aged white woman against a black background, seen through a refractive glass grid and overlaid with a distorted diagram of a neural network.
Image by Alan Warburton / © BBC / Better Images of AI / Virtual Human / CC-BY 4.0

Arguably worse is the recent news of the world’s first AI beauty pageant. The very thought of this probably elicits some kind of emotional response depending on your view of beauty pageants. There are valid concerns around misogyny and reinforcing misguided views on body norms, but it’s also important to note that the winner of “Miss AI” is being described as a lifestyle influencer. The questions we should be asking are, who are the creators trying to have influence over? What influence are they trying to gain that they couldn’t get before they created a virtual woman? 

DeepFake tools

Another use of GenAI is the ability to create DeepFakes. If you’ve watched the most recent Indiana Jones movie, you’ll have seen the technology in play, making Harrison Ford appear as a younger version of himself. This is not in itself a bad use of GenAI technology, but the application of DeepFake technology can easily become problematic. For example, recently a teacher was arrested for creating a DeepFake audio clip of the school principal making racist remarks. The recording went viral before anyone realised that AI had been used to generate the audio clip. 

Easy-to-use DeepFake tools are freely available and, as with many tools, they can be used inappropriately to cause damage or even break the law. One such instance is the rise in using the technology for pornography. This is particularly dangerous for young women, who are the more likely victims, and can cause severe and long-lasting emotional distress and harm to the individuals depicted, as well as reinforce harmful stereotypes and the objectification of women. 

Why we should focus on using AI as a problem-solving tool

Technological developments causing unforeseen negative consequences is nothing new. A lot of our job as educators is about helping young people navigate a changing world and preparing them for their futures, and education has an essential role in helping people understand AI technologies so that they can avoid the dangers.

Our approach at the Raspberry Pi Foundation is not to focus purely on the threats and dangers, but to teach young people to be critical users of technologies and not passive consumers. Having an understanding of how these technologies work goes a long way towards achieving sufficient AI literacy skills to make informed choices, and this is where our Experience AI program comes in.

An Experience AI banner.

Experience AI is a set of lessons developed in collaboration with Google DeepMind and, before we wrote any lessons, our team thought long and hard about what we believe are the important principles that should underpin teaching and learning about artificial intelligence. One such principle is taking a problem-first approach and emphasising that computers are tools that help us solve problems. In the Experience AI fundamentals unit, we teach students to think about the problem they want to solve before thinking about whether or not AI is the appropriate tool to use to solve it. 

Taking a problem-first approach doesn’t by default avoid an AI system causing harm — there’s still the chance it will increase bias and societal inequities — but it does focus the development on the end user and the data needed to train the models. I worry that focusing on market share and opportunity rather than the problem to be solved is more likely to lead to harm.

Another set of principles that underpins our resources is teaching about fairness, accountability, transparency, privacy, and security (Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and higher education, Understanding Artificial Intelligence Ethics and Safety) in relation to the development of AI systems. These principles are aimed at making sure that creators of AI models develop models ethically and responsibly. The principles also apply to consumers, as we need to get to a place in society where we expect these principles to be adhered to and consumer power means that any models that don’t, simply won’t succeed. 

Furthermore, once students have created their models in the Experience AI fundamentals unit, we teach them about model cards, an approach that promotes transparency about their models. Much like how nutritional information on food labels allows the consumer to make an informed choice about whether or not to buy the food, model cards give information about an AI model such as the purpose of the model, its accuracy, and known limitations such as what bias might be in the data. Students write their own model cards based on the AI solutions they have created. 
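
To make the idea concrete, here is a minimal sketch of what a student-written model card might capture, expressed as a small Python structure. The field names and example values are illustrative assumptions on my part, not an official Experience AI template.

```python
# A minimal, illustrative model card for a student-built image classifier.
# Field names and values are hypothetical examples, not an official template.
model_card = {
    "model_name": "Recyclable vs. non-recyclable classifier",
    "purpose": "Help sort photos of household waste into two categories",
    "training_data": "200 photos taken by the class; mostly plastic and paper items",
    "accuracy": "87% on a held-out set of 40 photos",
    "known_limitations": [
        "Very few examples of glass or metal, so these are often misclassified",
        "All photos were taken indoors under similar lighting",
    ],
    "intended_users": "Students in our class project, not a production system",
}

def print_model_card(card: dict) -> None:
    """Print the card in a readable, label-style format."""
    for field, value in card.items():
        if isinstance(value, list):
            print(f"{field}:")
            for item in value:
                print(f"  - {item}")
        else:
            print(f"{field}: {value}")

if __name__ == "__main__":
    print_model_card(model_card)
```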

What else can we do?

At the Raspberry Pi Foundation, we have set up an AI literacy team with the aim of embedding principles around AI safety, security, and responsibility into our resources and aligning them with the Foundation’s mission to help young people to:

  • Be critical consumers of AI technology
  • Understand the limitations of AI
  • Expect fairness, accountability, transparency, privacy, and security and work toward reducing inequities caused by technology
  • See AI as a problem-solving tool that can augment human capabilities, but not replace or narrow their futures 

Our call to action to educators, carers, and parents is to have conversations with your young people about GenAI. Get to know their opinions on GenAI and how they view its role in their lives, and help them to become critical thinkers when interacting with technology. 

The post Why we’re taking a problem-first approach to the development of AI systems appeared first on Raspberry Pi Foundation.

Genshin Impact Finally Hitting Xbox In November But Still No Nintendo Switch Port In Sight

20. Srpen 2024 v 21:53

Genshin Impact will finally make it to Xbox later this year. HoYoverse revealed the free-to-play anime RPG will get ported in November, though a Switch version of the hit game remains MIA years after its release on smartphones and PS5.


  • ✇Kotaku
  • The Ultimate March 7th (The Hunt) Build For Honkai: Star Rail — Samuel Moreno

The Ultimate March 7th (The Hunt) Build For Honkai: Star Rail

19. Srpen 2024 v 15:30

The ridiculously named March 7th has been the poster child for Honkai: Star Rail since the free-to-play game launched, and now there’s a new version of her that plays as a DPS. The recent 2.4 update has provided her with a new Path to switch to, just like the main character, The Trailblazer. What’s great here is that…


  • ✇GAME PRESS
  • Samsung Galaxy AI features are making their way to mid-range phones, but not all tools will be added — Mobile Press

Samsung Galaxy AI features are making their way to mid-range phones, but not all tools will be added

7. Srpen 2024 v 07:09


Galaxy A35

Samsung’s Galaxy AI push has generated considerable buzz, mainly thanks to the capabilities it brought to the company’s flagship devices.

Users have been expecting the features to expand to mid-range devices so they can benefit from them as well. At Unpacked 2024, company president TM Roh announced that the initiative would reach roughly 200 million devices this year. Although the announcement is not very old, it is already taking shape, with sources revealing plans to extend the rollout to the Galaxy A series.

Thanks to its extensive feature set, Galaxy AI has brought many improvements that make the mobile experience more seamless. Users are looking forward to Galaxy AI being deployed across Samsung’s Galaxy ecosystem to improve communication and expand what their devices can do. The company has not left users waiting in vain, and it looks like the features are finally coming to some Galaxy A devices.

SamMobile has obtained information from internal sources outlining a plan to bring Galaxy AI features originally reserved for flagship devices to part of the mid-range Galaxy A line. They are coming to the Galaxy A35 and Galaxy A55, both released in 2024.

The sources also say the rollout will arrive with the One UI 6.1.1 update. No exact date has been given yet, but we expect it could happen this month or next, as that is when the software update is most likely to land.

While it is exciting to see Samsung keeping its promise by extending AI features to more Galaxy devices, there is a catch. Only selected Galaxy AI capabilities will reach the Galaxy A35 and A55 for now. Unlike the Galaxy S24 Ultra and Galaxy Z Fold 6, which come with the full suite of AI-powered features, the A35 and A55 will have a limited feature set for the time being.

There is little information about which AI tools will be included and which features will be left out of the A series, but it appears Samsung is excluding features that require more capacity and more on-device processing. Some prominent Galaxy AI features include text editing and suggestions, note summaries, Instant Slow-Mo for videos, and translations.

Bringing AI tools to its affordable phones would help Samsung compete better, especially since other manufacturers also offer advanced features on their budget phones.

The article Samsung Galaxy AI features are making their way to mid-range phones, but not all tools will be added first appeared on MOBILE PRESS.

The article Samsung Galaxy AI features are making their way to mid-range phones, but not all tools will be added first appeared on GAME PRESS.

  • ✇Semiconductor Engineering
  • Leveraging AI To Efficiently Test AI Chips — Advantest

Leveraging AI To Efficiently Test AI Chips

Od: Advantest
6. Srpen 2024 v 09:01

In the fast-paced world of technology, where innovation and efficiency are paramount, integrating artificial intelligence (AI) and machine learning (ML) into the semiconductor testing ecosystem has become critically important due to ongoing challenges with accuracy and reliability. AI and ML algorithms are used to identify patterns and anomalies that might not be discovered by human testers or traditional methods. By leveraging these technologies, companies can achieve higher accuracy in defect detection, ensuring that only the highest quality semiconductors reach the market.

In addition, the industry is clamoring for increased efficiency and speed, because AI-driven testing can significantly accelerate the testing process, analyzing vast amounts of data at speeds unattainable by human testers. This enables quicker turnaround times from design to production, helping companies meet market demands more effectively and stay ahead of competitors.

Firms are also heavily invested in reducing costs. While the initial investment in AI/ML technology can be expensive, the long-term savings are irrefutable. By automating routine and complex testing processes, companies can reduce labor costs and minimize human error. Equally important, AI-enhanced testing can better predict potential failures before they occur, saving costs related to recalls and repairs.
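
As a rough illustration of the kind of pattern- and anomaly-finding described above (a sketch over synthetic data, not a description of any vendor's actual pipeline), the snippet below flags outlier devices from parametric test measurements with an off-the-shelf isolation forest. The measurement names, distributions, and contamination rate are invented for the example.

```python
# Illustrative only: flag outlier devices from parametric test measurements
# using an isolation forest. Not a representation of any vendor's pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic measurements for 1,000 devices: leakage current (uA),
# ring-oscillator frequency (MHz), and supply current (mA).
normal = rng.normal(loc=[1.0, 950.0, 120.0], scale=[0.05, 10.0, 3.0], size=(990, 3))
defects = rng.normal(loc=[1.6, 900.0, 135.0], scale=[0.10, 15.0, 5.0], size=(10, 3))
measurements = np.vstack([normal, defects])

# Fit the detector; contamination is the assumed fraction of defective parts.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(measurements)  # -1 = anomaly, 1 = normal

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(measurements)} devices for review: {flagged}")
```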

The industry is now moving to chiplet-based modules, using a “Lego-like” approach to integrate CPU, GPU, cache, I/O, high-bandwidth memory (HBM), and other functions. In the rapidly evolving world of chiplets, the DUT is a complex multichip system with the integration of many devices in a single 2.5D or 3D package. Consequently, the tester can only access a subset of individual device pins. Even so, at each test insertion, the tester must be able to extract valuable data that is then used to optimize the current test insertion as well as other design, manufacturing, and test steps. With limited pin access, the tester must infer what is happening on unobservable nodes. To best achieve this goal, it is important to extract the most value possible out of the data that can be directly collected across all manufacturing and test steps, including data from on-chip sensors. The test flow in the chiplet world already includes PSV, wafer acceptance test (WAT), wafer sort (WS), final test (FT), burn-in, and SLT, and additional test insertions to account for the increased complexity of a package with multiple chiplets are not feasible from a cost perspective. Adding to the challenge, binning goes from performance-based to application-based. In this world, the tester must stay ahead of the system – the tester must be smarter than the complex system-under-test.
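
To give a feel for what "inferring what is happening on unobservable nodes" can mean in practice (again a hypothetical sketch with synthetic data, not the real test flow), one simple approach is to learn a regression from on-chip sensor readings and accessible pin measurements to the hidden quantity of interest:

```python
# Illustrative sketch: estimate an unobservable internal rail voltage from
# accessible measurements (on-chip temperature/process sensors, package pins).
# Synthetic data and a simple linear model; real flows would be far richer.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000

temp_c = rng.normal(60, 8, n)            # on-chip temperature sensor
process_idx = rng.normal(0.0, 1.0, n)    # process monitor (normalized)
pin_current_ma = rng.normal(150, 12, n)  # current measured at an accessible pin

# Synthetic "ground truth" for the unobservable node (relationship + noise).
internal_v = (0.75 - 0.0008 * (temp_c - 60) + 0.01 * process_idx
              - 0.0002 * (pin_current_ma - 150) + rng.normal(0, 0.002, n))

X = np.column_stack([temp_c, process_idx, pin_current_ma])
X_train, X_test, y_train, y_test = train_test_split(X, internal_v, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, y_train)
print(f"R^2 on held-out devices: {model.score(X_test, y_test):.3f}")
```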

The ACS RTDI platform accelerates data analytics and AI/ML decision-making.

So, for these reasons and many more, the adoption of edge compute for ML test applications is well underway. Advantest’s ACS Real-Time Data Infrastructure (ACS RTDI) platform accelerates data analytics and AI/ML decision-making within a single integrated platform. It collects, analyzes, stores, and monitors semiconductor test data as well as data sources across the IC manufacturing supply chain while employing low-latency edge computing and analytics in a secure zero-trust environment. ACS RTDI minimizes the need for human intervention, streamlining overall data utilization across multiple insertions to boost quality, yield, and operational efficiencies. It includes Advantest’s ACS Edge HPC server, which works in conjunction with its V93000 and other ATE systems to handle computationally intensive workloads adjacent to the tester’s host controller.

A reliable, secure real-time data structure that integrates data sources across the IC manufacturing supply chain.

In this configuration, the ACS Edge provides low, consistent, and predictable latency compared with a data center-hosted alternative. It supports a user execution environment independent of the tester host controller to ease development and deployment. It also provides a reliable and secure real-time data infrastructure that integrates all data sources across the entire IC manufacturing supply chain, applying analytics models that enable real-time decision-making during production test.

The post Leveraging AI To Efficiently Test AI Chips appeared first on Semiconductor Engineering.

  • ✇Gaming Yeeter
  • This AI Created a Dungeons & Dragons RPG — Ch3t Cyberd00d

This AI Created a Dungeons & Dragons RPG

5. Leden 2024 v 00:23
The storyline unfolds in the realm of Arkanor, a land where magic and technology coexist, but their balance teeters on the edge of collapse. The world's once-mighty enchanters, the Lumina Arcana, who maintained the harmony between magic and technology, have mysteriously vanished, leaving Arkanor in turmoil.
  • ✇NekoJonez's Gaming Blog
  • Preview: Ama’s Lullaby (PC – Steam) ~ Hacking The Point-And-Click Genre — NekoJonez

Preview: Ama’s Lullaby (PC – Steam) ~ Hacking The Point-And-Click Genre

Od: NekoJonez
20. Květen 2024 v 19:22

Itch.io – Steam

Back in 2017, a developer from France contacted me about his new point-and-click sci-fi game in the works called Ama’s Lullaby. But it’s more than a point-and-click game; it’s also a hacking game. This developer works on the game in his free time after his day job and with a small budget. Sometimes these passion projects die due to a lack of time, money, motivation and/or interest, but it looks like Ama’s Lullaby isn’t going to be one of those projects. Earlier this year, a demo of the game was released. I asked the developer if he was interested in streaming this demo with us, and he was. Here is a link to part 1 & part 2. Sadly, due to Klamath’s computer overheating, the stream had to be cut into two parts and the ending was quite abrupt. That stream was almost a month ago now, and I still wanted to write an article about this game. So, what do I think of the demo? Am I still as impressed as when I saw it during the livestream, or is my opinion going to change now that I’m not backseating but playing it myself? Let’s find out in this article.

Hacking The Point-And-Click Genre

The story of this demo is quite simple. Ama enters the police station and gets new tasks to aid the space colony she is in. Overall, the story is told more naturally compared to other games. Usually we get an opening where the main story of the game is teased, but not in this game. During interactions with the other characters, we get little glimpses into the world and story. This is a tricky thing to pull off, since you either have to force the player to interact with everybody or risk that some players miss potentially important information. On the other hand, info dumping on the player isn’t always the best solution either.

Now, in this space colony, there is an AI that makes a lot of decisions. It turns out that Ama and her dad have created that AI and the software to interact with it. She is one of the ambassadors of the human race. But it doesn’t take too long before strange things start to happen, and you notice that not everything is what you think it is.

The dialogue in this game appears above the characters’ heads. When the text is in italics, you know it’s a thought. On top of that, simple sound effects give the dialogue some additional punch and quickly differentiate thoughts from spoken lines. Currently, there are plans to fully voice act this game, but if those plans fall through, I’d recommend the developer use different sound effects for different emotions.

The game cold opens with an old-school terminal as its main menu. This might be a bit jarring for new players who aren’t used to working with the command line. Personally, as somebody who knows how a command line works, I really love this touch, since this interface is also present in a lot of the game’s puzzles. It fits the atmosphere and style of the game like a glove. To be honest, I think that with some minor polishing, it would be perfect.

There are a few things I would change. First, I’d get rid of the case-sensitive commands. The main reason is that a lot of people keep the default keybinding for the Steam overlay, which is… Shift+Tab. Since I love using autocomplete, it got pretty frustrating when I held Shift and pressed Tab to autocomplete and the Steam overlay popped up instead.
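
For what it’s worth, case-insensitive matching with prefix autocomplete is cheap to implement. Here is a small, hypothetical sketch (not the game’s actual code; the command names are invented) of how a terminal could accept any casing and still autocomplete:

```python
# Hypothetical sketch of case-insensitive command lookup with prefix completion.
# Command names are invented examples; this is not Ama's Lullaby source code.
COMMANDS = ["help", "quality", "connect", "config", "scan", "logout"]

def complete(prefix: str) -> list[str]:
    """Return all commands matching the prefix, ignoring case."""
    p = prefix.strip().lower()
    return [c for c in COMMANDS if c.startswith(p)]

def resolve(user_input: str) -> str | None:
    """Resolve input to a single command if the match is unambiguous."""
    matches = complete(user_input)
    if len(matches) == 1:
        return matches[0]
    return None  # ambiguous or unknown; let the UI list the options

print(complete("QUA"))   # ['quality']
print(resolve("HeLp"))   # 'help'
print(resolve("c"))      # None (ambiguous: connect / config)
```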

A second thing I’d change is to allow the user to enlarge the terminal font, because it doesn’t scale very well for people using larger monitors.

Now, since this game is still in development and this is just the demo… I can totally excuse that there are features not present. Like pushing the up arrow to get the last command, or the help feature not always working correctly in all menus. For example, if you are in the options menu and use “QUALITY HELP”, you get information but if you first write “QUALITY” to see the options you can input and then “QUALITY HELP”… It bugs out and doesn’t give you help at all. Another small bug I noticed is that for some reason, the enter button on my numpad didn’t enter but always selected the whole text. But hey, during the stream the developer said that some of these things are on the list to get fixed for the full game.

Cyberpunk Sci-fi

I was impressed with the visuals of the game when we played it on stream. While I haven’t played the Blade Runner games yet, I have seen a lot of people talk about them and I know their visual style. This game mimics that style extremely well. You really feel like you are in a sci-fi world whose technology is older than our own.

Something else I really love in this demo is that everything is one big space. You don’t really have “screens” in this game, like in a Broken Sword game for example. No, the camera swings and follows Ama as if she were in a movie. This sells the illusion of the area even more. While I’d sometimes have loved to see the details the developer put into every scene more up close, the more zoomed-out look gives you a better overview of the scene. It almost feels like you are watching Ama through security cameras or a drone camera.

The biggest thing I want to point out in terms of visuals is Ama herself. The game goes for a dark and dimly lit environment, and with a main character wearing black clothes, it’s extremely easy to lose Ama in the scenery. It wouldn’t surprise me if the main character in Blade Runner was given a brown coat for exactly that reason, so you can spot them quickly without breaking the visual style. But overall, this is almost a nitpick, since it didn’t happen often that I lost Ama in the scene. It mostly happened when I was replaying parts of the demo while writing this article.

Now, I want to talk about the command line. The tutorial on how a command line works is actually well done. I love how it doesn’t hold the player’s hand or try to force them to input the right thing. It really lets you experiment and learn how it works, and all the while a small guide on how things work is displayed at the top of your screen.

This whole command line mechanic is a breath of fresh air. It’s impressive how true to reality the command line is. While it takes some creative liberties here and there to fit the game world, it might as well be a real command line interface running inside the game.

In this demo, you have a few tasks to complete. Most of these tasks involve fixing various things, and one is highly dependent on the command line. This was quite easy for me since, like I said, I know how to use a command line. Visually, it’s a bit tricky during the tutorials in the network view, since it’s not really clear how you can scroll up or down in the terminal there; using the mouse mostly scrolls around the network map. An easier way to scroll the terminal would be useful. Also, when you have to input a command that’s longer than the terminal is wide, I’d wrap it onto a second line, since that’s how a real terminal works, or scroll the whole line instead of keeping the username fixed.

Final thoughts and future wishes

Overall, the demo is quite short. If you don’t know what you are doing and explore everything, it will take you roughly two hours to complete. But if you know what to do, you can finish it in 10 minutes. Yet the impression I got from the stream hasn’t changed: this game has quite a lot of potential, but it needs some polish here and there.

There are some minor things, like some objects not being solid so Ama can run through them, but there are also more major issues. The elevator bug the developer Marc mentioned during the stream happened to me: Ama didn’t go up with the elevator and got stuck. I think it was related to another bug I encountered where the head of IT got stuck in an animation loop. Somehow the game acted as if Ama were near him while she was walking in other parts of the station. I don’t know what exactly triggered it, and I have replayed the demo three times to try to get it back into that bugged state, but I was unable to find the cause or replicate it.

Currently, there is one way to save the game. There are several terminals in this demo where you can save your game. You only have one save slot. There is also no manual saving of the game. So, remember that. You can also only load from the main menu.

Reviewing a demo is always tricky, especially when the game is still in development, since you never know for sure what the final game is going to look like. Yet this demo is extremely promising. The puzzles were a lot of fun, and after playing the demo I had the same feeling Klamath had at the end of the stream: I want to play more games like this.

I could start talking about how the sound effects are amazing but there isn’t much music yet. On the one hand, the lack of music really sells the atmosphere of the game; on the other hand, the music during the terminal sections is really enjoyable. I’m sure we will hear more music in the full game.

Just like I’m convinced that when the full game releases and the players find bugs, they will get fixed. While I was talking with Marc during the stream, I really felt the passion for creating this game and how he wants to make it the best experience it can be for his players. So, if you are interested in this game after reading this article in any way shape or form, I highly recommend that you give this game a chance, play the demo for yourself and give the developer feedback via his Discord or any other of his official channels.

I can’t wait to see and play the final game. Various things were revealed and talked about during the stream and I have to say, it was an amazing experience and conversation. I was already interested in this game when it was on Kickstarter, but now that I have played the demo, I think we are onto a winner here. This game puts an interesting twist on the point-and-click genre and will appeal to anyone who enjoys adventure games with a sci-fi influence or just enjoys more unique puzzle games.

I want to thank Marc for reaching out to me and talking about his unique project. You can be sure that when the full version releases, Klamath and I will play through it and most likely stream it, and I’ll write a more in-depth article on the final product. I haven’t gone too in-depth in this article because I want to hold off on my final opinions until the game is fully released.

If you have read my article, played the demo and/or watched our stream, I’m curious, what did you think about this game? Feel free to talk about it in the comments. Am I overhyping the game or overlooking flaws? Or is there something you’d love to see in the full game?

And with that said, I have said everything about the game I want to say for now. I want to thank you for reading this article and I hope you enjoyed reading it as much as I enjoyed writing it. I hope to be able to welcome you in another article but until then, have a great rest of your day and take care.

CWA union says it stands in "complete solidarity" with striking members of SAG-AFTRA

The Communications Workers of America (CWA) – which includes members from a range of high-profile game developers and publishers, including Activision, Bethesda, Blizzard Entertainment, Sega, and ZeniMax – says it stands in "complete solidarity with striking members of SAG-AFTRA."

The Screen Actors Guild - American Federation of Television and Radio Artists (SAG-AFTRA) called a strike last week after it failed to reach an agreement with the video game companies' bargaining group over rights and protection concerns raised by the industry's exploration of AI technologies.

"We fully support SAG-AFTRA’s demand for explicit, enforceable language that ensures all video game workers are safeguarded against potential exploitation and displacement caused by AI technologies," the CWA said in a statement.


  • ✇Attack of the Fanboy
  • The Elusive Samurai Is Rapidly Becoming 2024’s Best New Anime — Anna Williams

The Elusive Samurai Is Rapidly Becoming 2024’s Best New Anime

19. Červenec 2024 v 19:22

The Summer 2024 Anime Season has introduced a variety of exciting new shows to the community, including the zany comedy My Deer Friend Nokotan, and shows like Suicide Squad ISEKAI that reimagine classic characters in a new way.

A surprising hit from this season, however, has come in the way of CloverWorks’ adaptation of The Elusive Samurai, which has a remarkably high score on MAL despite only having one episode available at the time of writing.

What Makes The Elusive Samurai So Sensational?

While CloverWorks has established itself as a solid and reliable studio, producing standout hits like Horimiya, Black Butler -Public School Arc- and My Dress Up Darling, from time to time its new releases can get buried amid a season filled with other promising titles. There weren’t many anime fans talking about The Elusive Samurai before episode 1 dropped, but it’s now become hard to avoid seeing Tokiyuki’s face all over the internet.

It’s clear that CloverWorks is putting a lot of love into this adaptation, with many fans of the original manga commenting on how well the studio has re-contextualized small details that were left in dialogue boxes in the source material, as well as bringing the studio’s signature fluid animation to the show. The characters and setting are fun and refreshing, with the series taking place during Japan’s Nanboku-cho period (1333), and following the up-and-coming successor of the Kamakura Shogunate. Historical fiction isn’t uncommon in anime, but it is rare to see something that sticks so closely to the era it’s attempting to portray.

The Elusive Samurai: official screencap of the main character and their father in the fall (Crunchyroll)

Not to mention, watching the first episode, it’s fun to see child characters that actually behave like curious children, instead of having their personalities aged up. All around, The Elusive Samurai is a gem of a series, and we can’t wait to see where it goes from here.

Fans interested in getting caught up with The Elusive Samurai can stream the show on Crunchyroll.

  • ✇AnandTech
  • Tenstorrent Launches Wormhole AI Processors: 466 FP8 TFLOPS at 300W

Tenstorrent Launches Wormhole AI Processors: 466 FP8 TFLOPS at 300W

19. Červenec 2024 v 20:30

Tenstorrent has unveiled its next-generation Wormhole processor for AI workloads that promises to offer decent performance at a low price. The company currently offers two add-on PCIe cards carrying one or two Wormhole processors as well as TT-LoudBox, and TT-QuietBox workstations aimed at software developers. The whole of today's release is aimed at developers rather than those who will deploy the Wormhole boards for their commercial workloads.

“It is always rewarding to get more of our products into developer hands. Releasing development systems with our Wormhole™ card helps developers scale up and work on multi-chip AI software,” said Jim Keller, CEO of Tenstorrent. “In addition to this launch, we are excited that the tape-out and power-on for our second generation, Blackhole, is going very well.”

Each Wormhole processor packs 72 Tensix cores (featuring five RISC-V cores supporting various data formats) with 108 MB of SRAM to deliver 262 FP8 TFLOPS at 1 GHz at 160W thermal design power. A single-chip Wormhole n150 card carries 12 GB of GDDR6 memory featuring a 288 GB/s bandwidth.

Wormhole processors offer flexible scalability to meet the varying needs of workloads. In a standard workstation setup with four Wormhole n300 cards, the processors can merge to function as a single unit, appearing as a unified, extensive network of Tensix cores to the software. This configuration allows the accelerators to either work on the same workload, be divided among four developers or run up to eight distinct AI models simultaneously. A crucial feature of this scalability is that it operates natively without the need for virtualization. In data center environments, Wormhole processors will scale both inside one machine using PCIe or outside of a single machine using Ethernet. 

From a performance standpoint, Tenstorrent's single-chip Wormhole n150 card (72 Tensix cores at 1 GHz, 108 MB SRAM, 12 GB GDDR6 at 288 GB/s) is capable of 262 FP8 TFLOPS at 160W, whereas the dual-chip Wormhole n300 board (128 Tensix cores at 1 GHz, 192 MB SRAM, aggregated 24 GB GDDR6 at 576 GB/s) can offer up to 466 FP8 TFLOPS at 300W (according to Tom's Hardware).

To put that 466 FP8 TFLOPS at 300W number into context, let's compare it to what AI market leader Nvidia has to offer at this thermal design power. Nvidia's A100 does not support FP8, but it does support INT8 and its peak performance is 624 TOPS (1,248 TOPS with sparsity). By contrast, Nvidia's H100 supports FP8 and its peak performance is a massive 1,670 TFLOPS (3,341 TFLOPS with sparsity) at 300W, which is a big difference from Tenstorrent's Wormhole n300.

There is a big catch, though. Tenstorrent's Wormhole n150 is offered for $999, whereas the n300 is available for $1,399. By contrast, one Nvidia H100 card can retail for $30,000, depending on quantities. Of course, we do not know whether four or eight Wormhole processors can indeed deliver the performance of a single H100, though they would do so at 600W or 1200W TDP, respectively.
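
Using the figures quoted in this article (and list prices, which will of course vary in practice), a quick back-of-the-envelope comparison of FP8 throughput per dollar and per watt looks roughly like this:

```python
# Back-of-the-envelope FP8 throughput per dollar and per watt, using the
# figures quoted in this article; street prices and real workloads will differ.
cards = {
    "Wormhole n150": {"tflops_fp8": 262, "watts": 160, "price_usd": 999},
    "Wormhole n300": {"tflops_fp8": 466, "watts": 300, "price_usd": 1399},
    "Nvidia H100":   {"tflops_fp8": 1670, "watts": 300, "price_usd": 30000},
}

for name, c in cards.items():
    per_dollar = c["tflops_fp8"] / c["price_usd"]
    per_watt = c["tflops_fp8"] / c["watts"]
    print(f"{name:14s}  {per_dollar:.2f} TFLOPS/$   {per_watt:.2f} TFLOPS/W")
```

On these numbers the Wormhole cards come out well ahead on TFLOPS per dollar (roughly 0.26-0.33 versus about 0.06 for the H100), while the H100 stays far ahead on TFLOPS per watt, which matches the trade-off the article describes.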

In addition to cards, Tenstorrent offers developers pre-built workstations with four n300 cards inside the less expensive Xeon-based TT-LoudBox with active cooling and a premium EPYC-powered TT-QuietBox with liquid cooling.

Sources: Tenstorrent, Tom’s Hardware

  • ✇Eurogamer.net
  • PSA: This weekend is your last chance to buy from the Xbox 360 online marketplace — Vikki Blake

PSA: This weekend is your last chance to buy from the Xbox 360 online marketplace

26. Červenec 2024 v 16:25

This is your friendly reminder that Microsoft is set to close its Xbox 360 digital store on 29th July – that's next Monday – so you have just a few days left to make the most of those last discounts on some of the best Xbox 360 games of the generation.

Microsoft announced a raft of discounts on Xbox 360 digital games back in May. Whilst some games will live on via other platforms and services – including Microsoft's comprehensive backwards compatibility system – there are a handful of games that will disappear from sale forever. So, if you've ever fancied one, now's the time to pick it up.

X user Kalyoshika has shared a list of the games/DLC that "will not survive", as well as "a couple of games that are going from cheap, easy-to-get digital copies" to "impossible-to-get, expensive, piracy only, jump-through-hoops to play".


  • ✇Android Authority
  • OpenAI has developed a 99.9% accuracy tool to detect ChatGPT content, but you are safe for now — Vinayak Guha

OpenAI has developed a 99.9% accuracy tool to detect ChatGPT content, but you are safe for now

6. Srpen 2024 v 04:07
  • OpenAI has developed a method to detect when someone uses ChatGPT to write essays or assignments.
  • The method utilizes a watermarking system that is 99.9% effective at identifying AI-generated text.
  • However, the tool has not yet been rolled out due to internal concerns and mixed reactions within the company.

When OpenAI launched ChatGPT towards the end of 2022, educators expressed concerns that students would use the platform to cheat on assignments and tests. To prevent this, numerous companies have rolled out AI detection tools, but they haven’t been the best at producing reliable results.

OpenAI has now revealed that it has developed a method to detect when someone uses ChatGPT to write (via The Washington Post). The technology is said to be 99.9% effective and essentially uses a system capable of predicting what word or phrase (called “token”) would come next in a sentence. The AI-detection tool slightly alters the tokens, which then leaves a watermark. This watermark is undetectable to the human eye but can be spotted by the tool in question.
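
OpenAI has not published the details of its method, but publicly described "green list" watermarking schemes from the research literature give a flavour of how token-level watermarking can work. The toy sketch below is purely illustrative and is not OpenAI's algorithm: the previous token seeds a pseudo-random split of a tiny vocabulary, generation is nudged toward the "green" half, and a detector later checks whether the green fraction is suspiciously high.

```python
# Toy illustration of statistical text watermarking (inspired by published
# "green list" schemes). This is NOT OpenAI's actual method, which is unpublished.
import hashlib
import random

VOCAB = ["the", "cat", "dog", "sat", "ran", "on", "a", "mat", "rug", "fast"]

def green_list(prev_token: str) -> set[str]:
    """Deterministically pick half the vocabulary as 'green' based on the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: len(VOCAB) // 2])

def generate(n_tokens: int, bias: float = 0.9, seed: int = 0) -> list[str]:
    """Generate tokens, preferring the green list with probability `bias`."""
    rng = random.Random(seed)
    tokens = ["the"]
    for _ in range(n_tokens):
        greens = list(green_list(tokens[-1]))
        pool = greens if rng.random() < bias else VOCAB
        tokens.append(rng.choice(pool))
    return tokens

def green_fraction(tokens: list[str]) -> float:
    """Detector: fraction of tokens drawn from the green list of their predecessor."""
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

watermarked = generate(200)
print(f"green fraction (watermarked): {green_fraction(watermarked):.2f}")  # well above 0.5

random.seed(1)
unmarked = ["the"] + [random.choice(VOCAB) for _ in range(200)]
print(f"green fraction (unmarked):    {green_fraction(unmarked):.2f}")     # close to 0.5
```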

  • ✇Android Authority
  • Google benches tone-deaf ‘Dear Sydney’ AI spot from Olympics ad roster — Stephen Schenck

Google benches tone-deaf ‘Dear Sydney’ AI spot from Olympics ad roster

6. Srpen 2024 v 00:04
  • Google received immediate backlash over an ad showing a father using Gemini AI to write his daughter’s favorite athlete a fan letter.
  • The ad has since been pulled from Olympics rotation, but it remains up on Google’s YouTube account.

Tech companies are used to pushing boundaries, with everybody looking to become the next disruptor. But especially when it comes to their advertising, sometimes they push past the boundaries of good taste into some seriously cringe-worthy territory. We’ve seen situations like that play out time and time again over the years, and in some of those cases the response is so overwhelmingly negative that the company involved sees no better alternative than just taking their ad down. Now Google is the latest advertiser to find itself in the hot seat, all in response to an AI-focused spot it’s been running for the Olympics.

The video in question depicts a father whose daughter shows an interest in athletics and emerges as a fan of track-and-field Olympian Sydney McLaughlin-Levrone. The “Dear Sydney” ad continues as the father wants to help his daughter craft the best possible fan letter to McLaughlin. So, what’s next? A trip to the library? An after-school creative writing program? Nope: Gemini AI can just write it for you.

  • ✇Android Authority
  • What’s next for Google Play Store AI review summaries (APK teardown) — Stephen Schenck

What’s next for Google Play Store AI review summaries (APK teardown)

5. Srpen 2024 v 18:17
  • The Google Play Store appears to be getting ready to include AI-generated review summaries in app listings.
  • These would join the AI summaries in searches and the “App highlights” block we already have.
  • Rather than just collating the most popular opinions expressed in reviews, this condenses them down into a single, new voice.

Google is on a bit of an AI kick right now, to put it mildly, finding reason to augment every nook and cranny of its software and services with (admittedly, often impressive) AI-powered functionality. The Play Store has been as much a target as any for these experiments, like with the App highlights feature we saw Google start playing around with several months back. For over a year now, Google’s been talking about using AI to summarize Play Store reviews, and after getting to see how that works in the app’s search mode, we’re now discovering how the next phase of those summaries could arrive.

When delivering Google Play’s most recent quarterly address, VP Sam Bright touched on the company’s progress with AI in the Play store, including the desire to get more of this AI-derived content in detailed app listings themselves. Sure enough, digging through Play Store version 42.1.21 we find new text strings for labeling information as “Summarized by Google AI.” And with the right flags enabled, we can get just such an AI-generated summary to appear at the top of user-written reviews:

  • ✇Android Authority
  • Apple Intelligence is falling for phishing emails, and that could cost iPhone users — Mahmoud Itani

Apple Intelligence is falling for phishing emails, and that could cost iPhone users

5. Srpen 2024 v 12:46
Apple Intelligence banner on MacBook Air M2
Credit: Mahmoud Itani / Android Authority
  • Apple Intelligence is marking phishing emails as a priority in the Mail app on iOS 18.1 developer beta 1.
  • The AI-powered filter seemingly disregards the sender’s address and only determines an email’s importance by scanning its text.
  • Apple must address this severe flaw before iOS 18.1’s public release, as it could make average users fall for more scams.

Apple Intelligence is arguably iOS 18’s most significant highlight, baking native AI features into the OS. While the technology likely won’t debut publicly until October, the company has already given iOS 18.1 beta testers an early look. One of its perks is AI summaries in the Mail app. Through this handy Apple Intelligence feature, users can save time, determine emails’ importance at a glance, get locked out of their accounts, and possibly lose considerable sums of money.

Yes, you’ve read that right. Apple Intelligence, indeed, can’t differentiate between phishing and legitimate emails. According to multiple Reddit users, the AI-powered filter in the Mail app is marking scam emails as a priority. This suggests that the technology categorizes emails based only on their texts, disregarding the senders’ addresses and other relevant signals.
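
One plausible mitigation (my speculation, not a description of Apple's implementation) would be to gate any content-based priority signal on a basic sender check, for example whether the sending domain matches the brand the message claims to come from. The brands, domains, and phrases below are invented for illustration.

```python
# Hypothetical sketch: only allow a content-based "priority" label when the
# sender's domain is consistent with the brand the message claims to be from.
# Brand/domain mapping and the priority heuristic are invented for illustration.
KNOWN_BRAND_DOMAINS = {
    "example bank": {"examplebank.com"},
    "example courier": {"examplecourier.com"},
}

URGENT_PHRASES = ("verify your account", "payment failed", "act now", "suspended")

def looks_urgent(body: str) -> bool:
    text = body.lower()
    return any(phrase in text for phrase in URGENT_PHRASES)

def sender_matches_claimed_brand(sender: str, body: str) -> bool:
    domain = sender.rsplit("@", 1)[-1].lower()
    for brand, domains in KNOWN_BRAND_DOMAINS.items():
        if brand in body.lower():
            return domain in domains
    return True  # no known brand claimed; nothing to cross-check

def should_mark_priority(sender: str, body: str) -> bool:
    return looks_urgent(body) and sender_matches_claimed_brand(sender, body)

print(should_mark_priority("alerts@examplebank.com",
                           "Example Bank: payment failed, act now"))   # True
print(should_mark_priority("alerts@secure-logins.biz",
                           "Example Bank: verify your account now"))   # False
```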

  • ✇Android Authority
  • How much energy does ChatGPT consume? More than you think, but it’s not all bad news — Calvin Wankhede

How much energy does ChatGPT consume? More than you think, but it’s not all bad news

4. Srpen 2024 v 19:00
Crypto mining with GPU stock image 2
Credit: Edgar Cervantes / Android Authority

Everything comes at a cost, and AI is no different. While ChatGPT and Gemini may be free to use, they require a staggering amount of computational power to operate. And if that wasn’t enough, Big Tech is currently engaged in an arms race to build bigger and better models like GPT-5. Critics argue that this growing demand for powerful — and energy-intensive — hardware will have a devastating impact on climate change. So just how much energy does AI like ChatGPT use and what does this electricity use mean from an environmental perspective? Let’s break it down.

ChatGPT energy consumption: How much electricity does AI need?


  • ✇Android Authority
  • AI summaries in Apple Mail put Gmail to absolute shame — Mahmoud Itani

AI summaries in Apple Mail put Gmail to absolute shame

4. Srpen 2024 v 17:00

Apple Mail has long been the ugly duckling of email clients. While the app technically works, it’s pretty barebones in terms of functionality. Despite the latest iOS 18 beta not addressing all of its shortcomings, it does make using the client more desirable on iPhone 15 Pro models. With iOS 18.1, the iPhone maker has baked Apple Intelligence into the Mail app, enabling AI summaries, priority detection, and more. Some of these features have also made it to the Messages app, such as the newly-added smart replies. Sure, Apple Mail still has a long way to go, but boy, do these AI enhancements make my everyday life easier and give Google’s Gmail a run for its money.

It all starts before you even launch the app

What I love about Apple Intelligence on iOS 18.1 beta 1 is that it works from the moment you receive an email or text. The technology analyzes the notifications’ content to display summaries on the lock screen. So, instead of previewing the first two lines of an email, the notification now shows a handier, AI-generated summary of the entire message. We’ve gone from “Hi, I hope this email finds you well” to “The sender is inquiring about your availability tomorrow” in the notification center. It’s pretty neat if you ask me.

  • ✇Boing Boing
  • OpenAI could watermark the text ChatGPT generates, but hasn't — Rob Beschizza

OpenAI could watermark the text ChatGPT generates, but hasn't

5. Srpen 2024 v 15:19
Photo: ltummy / Shutterstock

OpenAI has developed a system for "watermarking" the output that ChatGPT generates, reports The Wall Street Journal, but has chosen not to deploy it. Google has deployed such a system with Gemini.

OpenAI has a method to reliably detect when someone uses ChatGPT to write an essay or research paper.


The post OpenAI could watermark the text ChatGPT generates, but hasn't appeared first on Boing Boing.

  • ✇jeffq, published
  • Language Models vs. The SAT Reading Test — jeffq

Language Models vs. The SAT Reading Test

Od: jeffq
20. Únor 2023 v 16:32

tl;dr FLAN-T5 (11B) scored identically to GPT-3.5 (text-davinci-003) across the ten publicly available SAT Reading Tests. A finetuned 3B model scored within 7 percentage points of GPT-3.5 on held-out tests with 98% fewer parameters while maintaining generalization

Models: base, large, xl, xxl Dataset: HuggingFace Code: GitHub

After working on literAI I’ve been interested in further exploring language models from a narrative/literary perspective. One question I had was “how well do these models actually ‘understand’ longer prose?”

Now, it just so happens that there’s a test we make teenagers take every year to determine this very fact! That is, the SAT (specifically, the Reading part).

The SAT Reading Test, despite its name, is multimodal. There is always one section that includes a combination of charts, tables, and graphs. However, the questions are clearly delineated — typically only three questions on the test reference the data. For the purposes of evaluation I excluded these questions. First, the results.

Data

FLAN-T5 11B scored identically to GPT-3.5, despite being less than 1/10th the size! It can also be run on a consumer GPU (<= 24 GB) when loaded in 8-bit inference mode! This offers further data supporting the hypothesis that Google did the open source local compute LM community a great service when it released FLAN-T5.
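
For reference, this is roughly how FLAN-T5-XXL can be loaded in 8-bit on a single ~24 GB GPU with the Hugging Face transformers, accelerate, and bitsandbytes libraries. It is a sketch: exact argument names depend on library versions, and the prompt shown is only an example shape, not the format used in the dataset.

```python
# Sketch: run FLAN-T5-XXL in 8-bit on a single ~24 GB GPU.
# Requires transformers, accelerate, and bitsandbytes; argument names vary by version.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "google/flan-t5-xxl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_id,
    device_map="auto",   # place layers automatically
    load_in_8bit=True,   # 8-bit weights via bitsandbytes
)

# Example prompt shape only; the actual dataset formats passages and choices differently.
prompt = (
    "Read the passage and answer the question.\n"
    "Passage: ...\n"
    "Question: As used in line 93, \"becoming\" most nearly means\n"
    "A) emerging. B) fitting. C) developing. D) happening.\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```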


One interesting aspect of the SAT Reading Test is that 30% of the questions reference specific lines within the passage under consideration.

Which choice best supports the conclusion that
Mr. Peters wants to attract attention?

A) Lines 80-81 (“Apparently… change”)
B) Lines 81-85 (“He straightened… hand”)
C) Lines 90-91 (“The young . . . Mr. Peters”)
D) Lines 91-93 (“He was… forty-five”)

SAT Practice Test #5 Question #9

As used in line 93, “becoming” most nearly means

A) emerging.
B) fitting.
C) developing.
D) happening.

SAT Practice Test #5 Question #10

This means that to properly answer the question the LM needs to be able to count lines in the presented passage and reason about them explicitly in the context of the passage itself. The dataset I created faithfully represents the line breaks as they appear on the test. What it doesn’t contain is the extra line-count helper column that appears next to the passage. For example, here is a snippet of what a passage on the actual test looks like:

SAT Practice Test #5 Passage #1

Note the italicized Line and counter, which appears every five lines. Even the regular passages are multimodal! While it’s certainly just text, communicating it requires more than presenting it merely as a sequence of characters. To see how the models performed on these types of questions I took a look at how the best open source model (FLAN-T5) scored on the two question classes.
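For illustration, a small helper along these lines could reproduce the printed test's line-count column when serializing a passage; this is hypothetical code, not part of the released dataset:

```python
# Hypothetical helper: prepend the SAT-style line counter, which labels the
# first line and every fifth line of the passage.
def with_line_counter(passage: str, every: int = 5) -> str:
    numbered = []
    for i, line in enumerate(passage.splitlines(), start=1):
        label = f"{i:>4}" if (i == 1 or i % every == 0) else "    "
        numbered.append(f"{label}  {line}")
    return "\n".join(numbered)

sample = "\n".join(f"line {n} of the passage text" for n in range(1, 12))
print(with_line_counter(sample))
```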

FLAN-T5 scored between 5-13% worse on the “line number” questions than it did on the other questions on the test. Could the model just need a little help counting?

To test this theory I finetuned each of the FLAN-T5 models on eight of the ten practice tests, leaving the remaining two tests for validation. An especially huge thanks is owed to Philipp Schmid for his excellent blog posts on finetuning FLAN-T5.
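The training loop itself is standard seq2seq finetuning; a rough sketch in the spirit of those posts, where the dataset id, column names, and hyperparameters are illustrative assumptions:

```python
# Rough sketch of seq2seq finetuning FLAN-T5 on the SAT Reading dataset.
# The dataset id and column names below are assumptions for illustration.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_id = "google/flan-t5-xl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

ds = load_dataset("emozilla/sat-reading")  # assumed HF dataset id

def preprocess(batch):
    enc = tokenizer(batch["prompt"], truncation=True, max_length=2048)
    enc["labels"] = tokenizer(text_target=batch["answer"],
                              truncation=True, max_length=8)["input_ids"]
    return enc

tokenized = ds.map(preprocess, batched=True,
                   remove_columns=ds["train"].column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="flan-t5-xl-sat-reading",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=1e-4,
        num_train_epochs=3,
        bf16=True,
    ),
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
```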

The models themselves are available here: base, large, xl, xxl. Three of the four finetuned models outscored the original models, with the XL model showing the largest gain. Of particular interest is the XL model, which is within seven percentage points of GPT-3.5 while having 98% (!!!) fewer parameters (3B vs. 175B).

One problem with aggressive finetuning on small datasets is overfitting or loss of generalization. Do the finetuned models still perform as well as the original models on unseen tasks? To test this I ran the finetuned models on a subset of the SuperGLUE metrics.

| Task | XXL PT | XXL FT | XL PT | XL FT | Large PT | Large FT | Base PT | Base FT |
|---|---|---|---|---|---|---|---|---|
| cb gpt | 0.87 | 0.83 | 0.83 | 0.83 | 0.76 | 0.71 | 0.82 | 0.82 |
| copa c1/c2 | 0.95 | 0.91 | 0.95 | 0.90 | 0.83 | 0.82 | 0.57 | 0.55 |
| rte gpt | 0.89 | 0.90 | 0.85 | 0.87 | 0.87 | 0.84 | 0.79 | 0.80 |
| wic gpt | 0.68 | 0.68 | 0.71 | 0.72 | 0.62 | 0.61 | 0.48 | 0.48 |
| wsc gpt | 0.76 | 0.77 | 0.73 | 0.75 | 0.66 | 0.61 | 0.45 | 0.46 |
Data

The above table represents only a few of the hundreds of metrics run — see the data for full results. They are, however, representative; the finetuned (FT) models maintain the same generalization capabilities as the pre-trained (PT) versions! It may be that the finetuned models are (by this limited measure) “better” than the originals since they score higher on the SAT Reading Test while maintaining zero-shot unseen task performance.

In conclusion, FLAN-T5 continues to show itself as a powerful model, both in its raw reasoning capabilities relative to closed source models and in its ability to quickly learn new skills through finetuning — not to mention its accessibility on consumer-grade hardware. ty google


literAI: AI-generated open source visual podcasts

By: jeffq
2 February 2023, 14:35

Demo: https://literai.hooloovoo.ai/ Source: Generator, UI

At my previous job I did some shader programming, and generally tinkered around with GPU workloads, and even had the chance to attend Nvidia’s GPU Technology Conference a few times. I remember in 2018 or so being surprised that more and more of the conversation in this area was being dominated by these things called “deep neural networks”. During my CS studies I was focused on cryptography, but I was curious what all this was about and took an early version of Udacity’s Deep Learning Nanodegree (don’t laugh!)

The class was actually fairly insightful — it made you learn about backpropagation and the like from scratch and took you through the motions of the classic MNIST classification tasks and so forth. It ended with doing face generation using these fancy things called convolutional neural networks.

Some randomly generated faces created by a
deep convolutional generative adversarial network I made as part of my #udacity course. Not super practical, but still eminently cool

P.S. Twitter asks "Who's in these photos?" when I upload them. The dreams of electric sheep, Twitter. pic.twitter.com/Tf6iAWHEl8

— emozilla (@theemozilla) July 8, 2018
such fidelity, much wow

Neat, but still felt a bit gadget-y to me. Like every nerd I assumed that someday humanity would develop “artificial” intelligence, but at the time it didn’t seem like such a thing was imminent.

Of course, then came Stable Diffusion and ChatGPT.


When I want to learn something technical, I need to be able to tinker with it. Let me get into VS Code, get something working locally, something I can step into as deep as I want to. And then it’s just, you know, messing around with it.

this is not an exaggeration

Over the past six months I’ve been deep-diving the latest AI advancements, tinkering as I go (I recommend the excellent Neural Networks from Scratch book to get jump-started). A few projects I wrote along the way were txt2imghd and transformers-openai-api.

One pain point I kept hitting is that it seemed like the coolest stuff was all behind an API, instead of being openly accessible. Don’t get me wrong — I probably spent more money on GPU time to run open models than if I’d just paid the damn API costs, and I don’t begrudge companies trying to, you know, actually make money — but whenever I wanted to tinker the best stuff required carefully rate limited API calls. I wanna do dumb shit in a tight for loop without the fear of a gazillion dollar bill!


One night while perusing the latest arXiv posts I came across SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization, which used research into knowledge graphs to generate prompts for text-davinci-003 (the model behind ChatGPT) to create a large dataset of synthetic dialogues along with the accompanying semantic information (e.g. the intent of one of the speakers). This dataset was then used to fine-tune the open source T5 language model from Google to create COSMO, a model that can generate realistic sounding human dialogues.
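As a rough illustration of that idea, generating a single dialogue turn with the released COSMO checkpoint might look like this; the allenai/cosmo-xl model id and the simplified prompt format are assumptions on my part, not the author's exact interface:

```python
# Illustrative sketch: one dialogue turn from a COSMO-style model.
# Model id and the <sep>-joined prompt format are assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "allenai/cosmo-xl"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

situation = "Two podcast hosts are discussing the first chapter of a novel."
instruction = "You are Host A. Ask your co-host what stood out to them."
history = "Host B: I finally finished the opening chapter last night."

prompt = f"{situation} <sep> {instruction} <sep> {history}"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```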

I spend a fair amount of time listening to audiobooks and podcasts, and this got me thinking about potential applications. Could a podcast about a novel be generated by a model like COSMO? (As part of my research I contributed some SODA data into Open Assistant, a project to create an open source ChatGPT). Furthermore, could it be done using consumer-grade hardware, i.e. not on an A100?

Lo and behold, yacine had similar inklings and while I was working on my project released scribepod, powered by the 900-pound-gorilla that is text-davinci-003. This was partial vindication — yes, it could be done — but also somewhat deflating since it meant it would need to be tethered to an API.

Or must it be? COSMO can make the dialogue — but it needs some information on what to say. The critical task here is summarization; taking the raw novel text and distilling it into meaningful pieces that can be used as context when prompting the dialogue generating LM. Peter Szemraj has been doing fantastic open source work in this space, and I decided to use his long-t5-tglobal-xl-16384-book-summary model (again a fine-tuning of T5 — are we noticing a pattern here? Thanks Google!!!)
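A minimal sketch of that summarization step with the transformers pipeline, assuming the model id as published on the Hub and omitting the chunking of the novel into 16k-token windows:

```python
# Minimal sketch of the summarization step; long inputs would normally be
# chunked to fit the model's 16,384-token window.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-xl-16384-book-summary",
)

chapter_text = open("war_of_the_worlds_ch01.txt").read()  # illustrative input
result = summarizer(chapter_text, max_length=512, min_length=64)
print(result[0]["summary_text"])
```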

Okay, so I had an open source way to summarize text and generate dialogue. How about a bit of flair? Given the incredible results that diffusion models have had in image generation, I wanted to leverage these to give the podcast some imagery. My idea was a player for the podcast that would scroll between images generated from descriptions of the scene that the podcast participants were talking about. To do this, I needed to automatically generate prompts to Stable Diffusion models (Greg Rutkowski here we come).

The ChatGPT-solves-everything answer is to simply few-shot it with some examples of what you’d like using something like LangChain and let those 175 billion parameters work their magic. To maintain our open source purity I chose FLAN-T5 (paper; model), the instruction-tuned version of T5. FLAN-T5 produced very good, although admittedly inferior, results. Alas, such is the price we must pay (or not pay in this case).
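Concretely, the few-shot step amounts to something like the following sketch; the examples and wording here are my own illustration, not literAI's actual prompts:

```python
# Sketch of few-shot prompting FLAN-T5 to turn a scene summary into an
# image-generator prompt. Examples are illustrative, not from literAI.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "google/flan-t5-xl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

prompt = (
    "Write a short visual description suitable for an image generator.\n\n"
    "Scene: The narrator watches a metal cylinder unscrew on Horsell Common.\n"
    "Description: a misty heath at dusk, a huge half-buried metal cylinder, "
    "a crowd of Victorian onlookers, dramatic lighting\n\n"
    "Scene: The hosts describe a ruined London overgrown with red weed.\n"
    "Description:"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```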

Once the image descriptions were created it was simply the matter of generating a prompt and letting a Stable Diffusion model like Dreamlike Diffusion do the rest!

Images generated for H. G. Wells’ “The War of the Worlds”
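The image step itself is a standard diffusers call; a minimal sketch, assuming the dreamlike-art/dreamlike-diffusion-1.0 checkpoint and a CUDA GPU:

```python
# Minimal sketch of rendering one scene image with a Stable Diffusion
# checkpoint; the model id and prompt are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "dreamlike-art/dreamlike-diffusion-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "a ruined London overgrown with red weed, misty, dramatic lighting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("scene.png")
```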

The final piece was to make actual audio. I cribbed yacine’s use of TorToiSe, and at last the amalgamation was complete — literAI was born! You can try out the visual player here.


I’ll save my poetic waxing about AI for another time. Rather, I’d like to simply appreciate the work of the countless researchers who contributed to getting us to the current SOTA. It’s frankly bewildering. I’m looking forward to where we’re going — and being a builder of it along the way.


AI works better if you ask it to be a Star Trek character


Boffins baffled

Boffins are baffled after they managed to get their AI to perform maths more accurately when it was asked to respond in the style of a Star Trek character.

Rick Battle and Teja Gollapudi, authors of the study from VMware in California, found that asking the chatbot to respond as if it were on Star Trek dramatically enhanced its ability to solve grade-school-level math problems.

"It's both surprising and irritating that trivial modifications to the prompt can exhibit such dramatic swings in performance."

Machine learning engineers Battle and Gollapudi were exploring the "positive thinking" trend in AI. They discovered that the quality of chatbot outputs depends not only on what you ask them to do but also on how you ask them to act while doing it.

To test this, they fed three Large Language Models (LLMs) with 60 human-written prompts designed to encourage the AIs. These ranged from "This will be fun!" to "You are as smart as ChatGPT."

The engineers found that automatic optimisation of these prompts always surpassed hand-written attempts, suggesting that machine learning models are better at writing prompts for themselves than humans are.

One of the best-performing prompts for the Llama2-70B model was: "System Message: 'Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation.'"

This prompt significantly improved the model's proficiency in mathematical reasoning.
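Mechanically, the trick is nothing more than prepending that persona as a system message before the math question; a toy illustration follows, where the message-dict shape is generic rather than the study's actual harness:

```python
# Toy illustration of prepending a "persona" system message to a
# grade-school math question before sending it to a chat model.
STAR_TREK_SYSTEM = (
    "Command, we need you to plot a course through this turbulence and locate "
    "the source of the anomaly. Use all available data and your expertise to "
    "guide us through this challenging situation."
)

def build_messages(question: str) -> list[dict]:
    return [
        {"role": "system", "content": STAR_TREK_SYSTEM},
        {"role": "user", "content": f"{question}\nLet's think step by step."},
    ]

print(build_messages("A train travels 60 miles in 1.5 hours. What is its average speed?"))
```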

The study highlights the complexity and unpredictability of AI systems. Catherine Flick from Staffordshire University noted that these models do not "understand" anything better or worse when preloaded with a specific prompt; they simply access different sets of weights and probabilities.

This research underscores the importance of understanding how to optimise chatbot models, even though the processes behind their performance remain largely mysterious.

"In my opinion, nobody should ever attempt to hand-write a prompt again. Let the model do it for you," he said.


Turn On, Tune In, Boot Up! For MozFest 2023:

By: ed523
5 March 2023, 02:53

AI-Musement Park and MONDO Vanilli’s Blockchain Busting Musical Experience “R.U. Cyber.. R.U. Against NFTs?”

Immediate release from: 03/03/2023

“AI-Musement Park comprises a cornucopia of performances / talks / happenings / documentary & discussion about AI, Intelligences, technocapitalism’s more than pressing-ongoing urgencies.”
-Eleanor Dare, Cambridge University & AI-Musement Park

R.U. Cyber.. R.U. Against NFTs? is an original AI-Musement Park, PlayLa.bZ & MONDO 2000 History Project human and machine learning co-creation, taking the perspective of an AI that is training itself on the R.U. Sirius & MONDO Vanilli ‘I’m Against NFT’s’ song lyrics, exploring a surreal, mind-melting and multi-dimensional 360 world of paradoxes and conflicting rules.

“Mondo Vanilli was originally intended to be a virtual reality band exploding all assumptions about property and propriety in the 1990s. Today fabrication becomes de rigueur as the connection to the real is intentionally confused by the banal political tricksters of power and profitability… while storms pound our all-too-human bodies and communities. I am thrilled to finally see MONDO Vanilli in its appropriate context. Immersive. Come play in the simulacra one more time”
-R.U. Sirius, MONDO 2000

R.U. Cyber.. R.U. Against NFTs? is a satirical, irreverent blockchain-busting commentary on the propaganda-relations-fueled ‘Web 3’ hype around non-fungible tokens and the broader issues that underpin our algorithmically massaged, hyper-connected age of infinite scrolls and trolls. It challenges our assumptions about the nature of technology, creativity, and value, reminding us that the digital world is shaped by powerful forces that determine what is valued and what is not, and that a click is not always free.

Join Us! On Spring Solstice 2023 For “R.U. Cyber? :// Mondo 2000 History Project Salon”
at MozFest Virtual Plaza & Mozilla Hubs: AI-Musement Park
20th March / 8.30pm EU / GMT

R U Cyber Funzone ai-musement park

About R.U.Sirius & Mondo 2000 #Mondo2000 #RUSirius

R.U. Sirius is an American writer, editor, and media pioneer, known as one of the key figures of the psychedelic and cyberpunk movements. He is best known as Mondo 2000's editor-in-chief and was at the forefront of the 1990s underground cyberculture movement.

About Mozilla Festival #TrustworthyAI #AIMusementPark

Since 2010, MozFest has fueled the movement to ensure the internet benefits humanity, rather than harms it. This year, your part in the story is critical to our community’s mission: a better, healthier internet and more Trustworthy AI.

About PlayLa.bZ CIC #PlayLabZ #SpatialCadetZ

Co-founded by PsychFi, FreekMinds & Squire Studios, we’re a next-generation, multipotentiality, multi-award-winning, multi-dimensional motion arts experience design laboratory, developing DIY changemaking createch immersive experiences & software applications for social good storycraft.

Supporters & Friends: Mozilla Festival, Jisc: Digifest, Beyond Games, Tate Modern, Furtherfield, Boomtown Festival, Sci-Fi-London, Ravensbourne University London, UAL, East London Dance, NESTA, Modern Panic, ArtFutura, Kimatica, National Gallery X, Kings College London, Looking Glass Factory, SubPac, Ecologi, The JUMP, BOM Labs, Mondo 2000

PR Contact: James E. Marks, Tel: 07921 523438 @: jem@playla.bz Twitter: @GoGenieMo

The post Turn On, Tune In, Boot Up! For MozFest 2023: appeared first on Mondo 2000.


AI/ML’s Role In Design And Test Expands

5 August 2024, 09:03

The role of AI and ML in test keeps growing, providing significant time and money savings that often exceed initial expectations. But it doesn’t work in all cases, sometimes even disrupting well-tested process flows with questionable return on investment.

One of the big attractions of AI is its ability to apply analytics to large data sets that are otherwise limited by human capabilities. In the critical design-to-test realm, AI can address problems such as tool incompatibilities between the design set-up, simulation, and ATE test program, which typically slows debugging and development efforts. Some of the most time-consuming and costly aspects of design-to-test arise from incompatibilities between tools.

“During device bring-up and debug, complex software/hardware interactions can expose the need for domain knowledge from multiple teams or stakeholders, who may not be familiar with each other’s tools,” said Richard Fanning, lead software engineer at Teradyne. “Any time spent doing conversions or debugging differences in these set-ups is time wasted. Our toolset targets this exact problem by allowing all set-ups to use the same set of source files so everyone can be sure they are running the same thing.”

ML/AI can help keep design teams on track, as well. “As we drive down this technology curve, the analytics and the compute infrastructure that we have to bring to bear becomes increasingly more complex and you want to be able to make the right decision with a minimal amount of overkill,” said Ken Butler, senior director of business development in the ACS data analytics platform group at Advantest. “In some cases, we are customizing the test solution on a die-by-die type of basis.”

But despite the hype, not all tools work well in every circumstance. “AI has some great capabilities, but it’s really just a tool,” said Ron Press, senior director of technology enablement at Siemens Digital Industries Software, in a recent presentation at a MEPTEC event. “We still need engineering innovation. So sometimes people write about how AI is going to take away everybody’s job. I don’t see that at all. We have more complex designs and scaling in our designs. We need to get the same work done even faster by using AI as a tool to get us there.”

Speeding design to characterization to first silicon
In the face of ever-shrinking process windows and the lowest allowable defectivity rates, chipmakers continually are improving the design-to-test processes to ensure maximum efficiency during device bring-up and into high volume manufacturing. “Analytics in test operations is not a new thing. This industry has a history of analyzing test data and making product decisions for more than 30 years,” said Advantest’s Butler. “What is different now is that we’re moving to increasingly smaller geometries, advanced packaging technologies and chiplet-based designs. And that’s driving us to change the nature of the type of analytics that we do, both in terms of the software and the hardware infrastructure. But from a production test viewpoint, we’re still kind of in the early days of our journey with AI and test.”

Nonetheless, early adopters are building out the infrastructure needed for in-line compute and AI/ML modeling to support real-time inferencing in test cells. And because no one company has all the expertise needed in-house, partnerships and libraries of applications are being developed with tool-to-tool compatibility in mind.

“Protocol libraries provide out-of-the-box solutions for communicating common protocols. This reduces the development and debug effort for device communication,” said Teradyne’s Fanning. “We have seen situations where a test engineer has been tasked with talking to a new protocol interface, and saved significant time using this feature.”

In fact, data compatibility is a consistent theme, from design all the way through to the latest developments in ATE hardware and software. “Using the same test sequences between characterization and production has become key as the device complexity has increased exponentially,” explained Teradyne’s Fanning. “Partnerships with EDA tool and IP vendors is also key. We have worked extensively with industry leaders to ensure that the libraries and test files they output are formats our system can utilize directly. These tools also have device knowledge that our toolset does not. This is why the remote connect feature is key, because our partners can provide context-specific tools that are powerful during production debug. Being able to use these tools real-time without having to reproduce a setup or use case in a different environment has been a game changer.”

Serial scan test
But if it seems as if all the configuration changes are happening on the test side, it’s important to take stock of substantial changes on the approach to multi-core design for test.

Tradeoffs during the iterative process of design for test (DFT) have become so substantial in the case of multi-core products that a new approach has become necessary.

“If we look at the way a design is typically put together today, you have multiple cores that are going to be produced at different times,” said Siemens’ Press. “You need to have an idea of how many I/O pins you need to get your scan channels, the deep serial memory from the tester that’s going to be feeding through your I/O pins to this core. So I have a bunch of variables I need to trade off. I have the number of pins going to the core, the pattern size, and the complexity of the core. Then I’ll try to figure out what’s the best combination of cores to test together in what is called hierarchical DFT. But as these designs get more complex, with upwards of 2,500 cores, that’s a lot of tradeoffs to figure out.”

Press noted that applying AI with the same architecture can provide a 20% to 30% higher efficiency, but an improved methodology based on packetized scan test (see figure 1) actually makes more sense.


Fig. 1: Advantages to the serial scan network (SSN) approach. Source: Siemens

“Instead of having tester channels feeding into the scan channels that go to each core, you have a packetized bus and packets of data that feed through all the cores. Then you instruct the cores when their packet information is going to be available. By doing this, you don’t have as many variables you need to trade off,” he said. At the core level, each core can be optimized for any number of scan channels and patterns, and the I/O pin count is no longer a variable in the calculation. “Then, when you put it into this final chip, it delivers from the packets the amount of data you need for that core, and that can work with any size serial bus, in what is called a serial scan network (SSN).”

Some of the results reported by Siemens EDA customers (see figure 2) highlight both supervised and unsupervised machine learning implementation for improvements in diagnosis resolution and failure analysis. DFT productivity was boosted by 5 to 10X using the serial scan network methodology.


Fig. 2: Realized benefits using machine learning and the serial scan network approach. Source: Siemens

What slows down AI implementation in HVM?
In the transition from design to testing of a device, the application of machine learning algorithms can enable a number of advantages, from better pairing of chiplet performance for use in an advanced package to test time reduction. For example, only a subset of high-performing devices may require burn-in.

“You can identify scratches on wafers, and then bin out the dies surrounding those scratches automatically within wafer sort,” said Michael Schuldenfrei, fellow at NI/Emerson Test & Measurement. “So AI and ML all sounds like a really great idea, and there are many applications where it makes sense to use AI. The big question is, why isn’t it really happening frequently and at-scale? The answer to that goes into the complexity of building and deploying these solutions.”

Schuldenfrei summarized four key steps in ML’s lifecycle, each with its own challenges. In the first phase, the training, engineering teams use data to understand a particular issue and then build a model that can be used to predict an outcome associated with that issue. Once the model is validated and the team wants to deploy it in the production environment, it needs to be integrated with the existing equipment, such as a tester or manufacturing execution system (MES). Models also mature and evolve over time, requiring frequent validation of the data going into the model and checking to see that the model is functioning as expected. Models also must adapt, requiring redeployment, learning, acting, validating and adapting, in a continuous circle.

“That eats up a lot of time for the data scientists who are charged with deploying all these new AI-based solutions in their organizations. Time is also wasted in the beginning when they are trying to access the right data, organizing it, connecting it all together, making sense of it, and extracting features from it that actually make sense,” said Schuldenfrei.

Further difficulties are introduced in a distributed semiconductor manufacturing environment in which many different test houses are situated in various locations around the globe. “By the time you finish implementing the ML solution, your model is stale and your product is probably no longer bleeding edge so it has lost its actionability, when the model needs to make a decision that actually impacts either the binning or the processing of that particular device,” said Schuldenfrei. “So actually deploying ML-based solutions in a production environment with high-volume semiconductor test is very far from trivial.”

He cited a 2014 Google article that stated how the ML code development part of the process is both the smallest and easiest part of the whole exercise, [1] whereas the various aspects of building infrastructure, data collection, feature extraction, data verification, and managing model deployments are the most challenging parts.

Changes from design through test ripple through the ecosystem. “People who work in EDA put lots of effort into design rule checking (DRC), meaning we’re checking that the work we’ve done and the design structure are safe to move forward because we didn’t mess anything up in the process,” said Siemens’ Press. “That’s really important with AI — what we call verifiability. If we have some type of AI running and giving us a result, we have to make sure that result is safe. This really affects the people doing the design, the DFT group and the people in test engineering that have to take these patterns and apply them.”

There are a multitude of ML-based applications for improving test operations. Advantest’s Butler highlighted some of the apps customers are pursuing most often, including search time reduction, shift left testing, test time reduction, and chiplet pairing (see figure 3).

“For minimum voltage, maximum frequency, or trim tests, you tend to set a lower limit and an upper limit for your search, and then you’re going to search across there in order to be able to find your minimum voltage for this particular device,” he said. “Those limits are set based on process split, and they may be fairly wide. But if you have analytics that you can bring to bear, then the AI- or ML-type techniques can basically tell you where this die lies on the process spectrum. Perhaps it was fed forward from an earlier insertion, and perhaps you combine it with what you’re doing at the current insertion. That kind of inference can help you narrow the search limits and speed up that test. A lot of people are very interested in this application, and some folks are doing it in production to reduce search time for test time-intensive tests.”
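In code terms, that inference step simply shrinks the search window before the tester sweeps it; a conceptual sketch with invented numbers:

```python
# Conceptual sketch: narrow a Vmin search window around a model's prediction
# of where the die sits on the process spectrum. Numbers are invented.
def narrowed_limits(predicted_vmin: float, margin: float,
                    lo: float = 0.55, hi: float = 0.95) -> tuple[float, float]:
    """Clamp a +/- margin band around the prediction to the full search window."""
    return max(lo, predicted_vmin - margin), min(hi, predicted_vmin + margin)

# A feed-forward model predicts 0.71 V with a 30 mV confidence margin:
print(narrowed_limits(0.71, 0.03))  # (0.68, 0.74) instead of (0.55, 0.95)
```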


Fig. 3: Opportunities for real-time and/or post-test improvements to pair or bin devices, improve yield, throughput, reliability or cost using the ACS platform. Source: Advantest

“The idea behind shift left is perhaps I have a very expensive test insertion downstream or a high package cost,” Butler said. “If my yield is not where I want it to be, then I can use analytics at earlier insertions to be able to try to predict which devices are likely to fail at the later insertion by doing analysis at an earlier insertion, and then downgrade or scrap those die in order to optimize downstream test insertions, raising the yield and lowering overall cost. Test time reduction is very simply the addition or removal of test content, skipping tests to reduce cost. Or you might want to add test content for yield improvement,” said Butler.

“If I have a multi-tiered device, and it’s not going to pass bin 1 criteria – but maybe it’s bin 2 if I add some additional content — then people may be looking at analytics to try to make those decisions. Finally, two things go together in my mind, this idea of chiplet designs and smart pairing. So the classic example is a processor die with a stack of high bandwidth memory on top of it. Perhaps I’m interested in high performance in some applications and low power in others. I want to be able to match the content and classify die as they’re coming through the test operation, and then downstream do pick-and-place and put them together in such a way that I maximize the yield for multiple streams of data. Similar kinds of things apply for achieving a low power footprint and carbon footprint.”

Generative AI
The question that inevitably comes up when discussing the role of AI in semiconductors is whether or not large language models like ChatGPT can prove useful to engineers working in fabs. Early work shows some promise.

“For example, you can ask the system to build an outlier detection model for you that looks for parts that are five sigma away from the center line, saying ‘Please create the script for me,’ and the system will create the script. These are the kinds of automated, generative AI-based solutions that we’re already playing with,” says Schuldenfrei. “But from everything I’ve seen so far, there is still quite a lot of work to be done to get these systems to provide outputs with high enough quality. At the moment, the amount of human interaction that is needed afterward to fix problems with the algorithms or models that generative AI is producing is still quite significant.”
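The kind of script being described is short; a hedged sketch of what such a generated five-sigma outlier check might look like, with the file and column names as illustrative assumptions:

```python
# Sketch of a generated outlier-detection script: flag parts whose parametric
# measurement falls more than five sigma from the center line.
# The CSV file and column names are illustrative assumptions.
import pandas as pd

df = pd.read_csv("parametric_test_results.csv")
center = df["measurement"].mean()
sigma = df["measurement"].std()

df["outlier"] = (df["measurement"] - center).abs() > 5 * sigma
print(df.loc[df["outlier"], ["part_id", "measurement"]])
```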

A lingering question is how to access the test programs needed to train the new test programs when everyone is protecting important test IP? “Most people value their test IP and don’t necessarily want to set up guardrails around the training and utilization processes,” Butler said. “So finding a way to accelerate the overall process of developing test programs while protecting IP is the challenge. It’s clear this kind of technology is going to be brought to bear, just like we already see in the software development process.”

Failure analysis
Failure analysis is typically a costly and time-consuming endeavor for fabs because it requires a trip back in time to gather wafer processing, assembly, and packaging data specific to a particular failed device, known as a returned material authorization (RMA). Physical failure analysis is performed in an FA lab, using a variety of tools to trace the root cause of the failure.

While scan diagnostic data has been used for decades, a newer approach involves pairing a digital twin with scan diagnostics data to find the root cause of failures.

“Within test, we have a digital twin that does root cause deconvolution based on scan failure diagnosis. So instead of having to look at the physical device and spend time trying to figure out the root cause, since we have scan, we have millions and millions of virtual sample points,” said Siemens’ Press. “We can reverse-engineer what we did to create the patterns and figure out where the mis-compare happened within the scan cells deep within the design. Using YieldInsight and unsupervised machine learning with training on a bunch of data, we can very quickly pinpoint the fail locations. This allows us to run thousands, or tens of thousands fail diagnoses in a short period of time, giving us the opportunity to identify the systematic yield limiters.”

Yet another approach that is gaining steam is using on-die monitors to access specific performance information in lieu of physical FA. “What is needed is deep data from inside the package to monitor performance and reliability continuously, which is what we provide,” said Alex Burlak, vice president of test and analytics at proteanTecs. “For example, if the suspected failure is from the chiplet interconnect, we can help the analysis using deep data coming from on-chip agents instead of taking the device out of context and into the lab (where you may or may not be able to reproduce the problem). Even more, the ability to send back data and not the device can in many cases pinpoint the problem, saving the expensive RMA and failure analysis procedure.”

Conclusion
The enthusiasm around AI and machine learning is being met by robust infrastructure changes in the ATE community to accommodate the need for real-time inferencing of test data and test optimization for higher yield, higher throughput, and chiplet classifications for multi-chiplet packages. For multi-core designs, packetized test, commercialized as an SSN methodology, provides a more flexible approach to optimizing each core for the number of scan chains, patterns and bus width needs of each core in a device.

The number of testing applications that can benefit from AI continues to rise, including test time reduction, Vmin/Fmax search reduction, shift left, smart pairing of chiplets, and overall power reduction. New developments like identical source files for all setups across design, characterization, and test help speed the critical debug and development stage for new products.

Reference

  1. https://proceedings.neurips.cc/paper_files/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf

The post AI/ML’s Role In Design And Test Expands appeared first on Semiconductor Engineering.


Galaxy AI will come to select 2024 Galaxy A phones

5 August 2024, 14:07

We have some exciting information to share, especially for Galaxy A phone users hoping to get their hands on some Galaxy AI action. SamMobile learned from sources that Galaxy AI will eventually expand beyond the high-end segment and reach mid-range Galaxy A phones, starting with a couple of 2024 models.

Our sources tell us that Samsung's latest innovation, Galaxy AI, will land on Galaxy A devices for the first time on a couple of models already released in 2024, namely the Galaxy A35 and the Galaxy A55. However, there are a few caveats.

Not the complete Galaxy AI experience

Firstly, we don't have an exact date when Galaxy AI will reach the Galaxy A35 and Galaxy A55. However, our sources tell us it will happen through the One UI 6.1.1 update, which could land on these mid-range phones this month or the next.

Secondly, not every Galaxy AI feature will be released for the Galaxy A35 and Galaxy A55. There's no information regarding which AI tools will make the cut and which ones will be missing, but we're guessing Samsung will leave out the ones that require vast amounts of on-device processing power.

At the time of writing, the most affordable Galaxy AI-enabled phone you can buy is the Galaxy S23 FE. Even the Fan Edition phone is missing Instant Slow-Mo, which was one of the AI features Samsung included in the original Galaxy AI suite for the Galaxy S24 series.


Thirdly, we can only confirm Galaxy AI will go live for the Galaxy A35 and Galaxy A55, but not for older Galaxy A devices, even if there are hardware similarities.

For example, the Galaxy A35 uses the same chip as the Galaxy A54, i.e., the Exynos 1380 SoC, but even so, we can't confirm that Galaxy AI will be released for the Galaxy A54. Samsung may limit its efforts to Galaxy A models released in 2024 and later.

Samsung is betting big on Galaxy AI, and at Unpacked 2024, the company confirmed that it wants to bring these innovative AI tools to more than 200 million Galaxy devices, including phones, tablets, and wearables.

The post Galaxy AI will come to select 2024 Galaxy A phones appeared first on SamMobile.


NYC Proudly Announces Rollout Of Gun-Detecting Tech Even Tech Producer Says Won’t Reliably Detect Guns

6 August 2024, 00:20

There’s nothing more self-congratulatory than a government announcing it’s DOING SOMETHING ABOUT SOMETHING. That’s the New York City government at the moment, lauding its efforts to reduce crime in the city’s subways by installing tech even the tech manufacturer has stated isn’t capable of doing what’s being asked of it.

In mid-May, Mayor Eric Adams and the city government told New Yorkers something was being done. And that “something” was the installation of gun detection tech. Eric Adams (and I’m sure some city residents) appears to believe the city’s subways are awash in a flood of criminal activity, apparently forgetting the city actually has seen much, much worse over the years.

In addition to scrambling National Guardsmen to subway stations to police (state) passengers, the city has done a whole lot of handwringing over a perceived uptick in subway-related crime. It has also claimed the spike in fare jumpers presents an existential threat to city funding, which is a weird thing for an entity that has always paid for stuff with other people’s money to be saying.

The latest proposal is gun detection tech produced by Evolv. The problem with this supposed solution is that even Evolv says deploying its tech in subways is going to be of extremely limited utility. Georgia Gee’s scathing report for Wired on the tech and the company’s ties to Mayor Adams and several current and former NYPD law enforcement officials made several things clear.

First, this seems to have less to do with keeping subway passengers safe and more to do with pleasing people with high-level connections in the New York government, including the nation’s largest police force.

Second, this tech isn’t going to do what Mayor Adams and other city officials claim it will:

In an investor call on March 15, 2024, Peter George, Evolv’s CEO, admitted that the technology was not geared toward subway stations. “Subways, in particular, are not a place that we think is a good use case for us,” George said, due to the “interference with the railways.”

Not great! And it’s not entirely clear any future failures should be blamed on the rails. As Gee’s reporting for Wired notes, a previous test run at a Bronx hospital resulted in an 85 percent false positive rate.

But this is what New York’s getting, whether it wants it or not. And whether it works or not. More details here, via reporting by Ana Ley and Hurubie Meko for the New York Times.

New York City officials will begin testing gun-detecting scanners inside subway stations in the coming days in what they say is an effort to address riders’ concerns about crime.

The weapon-detection devices, produced by Evolv Technology, a Massachusetts-based start-up, roughly resemble the metal detectors often found at the entrances of courthouses and concerts. Representatives for Mayor Eric Adams, who announced the pilot, said that a single set of roving scanners would be used to search for weapons at various stations throughout the subway system for one month beginning Thursday or Friday. City Hall officials later corrected Mr. Adams and said that the pilot would begin on an unspecified date.

Speaking of not great, it’s kind of a problem when the mayor himself doesn’t seem to know when these devices will be rolled out. What’s worse is they’re being rolled out without guardrails. The city apparently has nothing in place to track the hit rate of these scanners. Nor does it seem immediately interested in engaging in any form of oversight that might let city residents know whether or not their money is being wasted.

It was not immediately clear how the city would gauge the pilot’s efficacy and whether there were plans to deploy the gadgets more widely. A representative for the mayor said that the city had not entered into a contract with Evolv and that it was not spending any money on the gadgets for the pilot. Officials have said that they are only experimenting with Evolv and that they are still seeking proposals from other companies with similar products.

While this may be a trial run of a proposed “solution” to what is only a perception of an increase in violent crime, there’s nothing in this statement that indicates the city won’t move forward with Evolv even if it does nothing to lower crime rates or even the perception itself.

Trials of products by government agencies generally involve some form of tracking to ensure the product delivers what’s been promised. In New York City, these baselines have been replaced by shrugs and vague assertions about “experiments.” But the word “experiment” means something. (Or, at least it used to.) It’s a scientific term that means current results will not only be tracked, but retained and compared to similar offerings from other companies.

But what’s being said here appears to be nothing more than vague assurances meant to stop journalists from asking further questions, rather than solid assurances that this is the beginning of a thorough process that will ultimately result in the best solution for the subway safety problem, even if that means walking away from gun detection tech entirely.

The most likely outcome is that Evolv will become a permanent part of the subway ecosystem. The company’s incestuous relationship with NYPD officials and the mayor himself strongly suggests the “experiment” will be deemed a success and the company granted a long, lucrative contract. And with nothing having been tracked during the supposed trial run, it will be impossible for anyone to claim Evolv’s system adds nothing to the security of the city’s subways. And that part is definitely by design.


Video Game Actors Are Officially On Strike Over AI

By: BeauHD
5 August 2024, 23:40
Members of the Screen Actors Guild (SAG-AFTRA) are striking against the video game industry due to failed negotiations over AI-related worker protections. "The guild began striking on Friday, July 26th, preventing over 160,000 SAG-AFTRA members from taking new video game projects and impeding games already in development from the biggest publishers to the smallest indie studios," notes The Verge. From the report:

Negotiations broke down due to disagreements over worker protections around AI. The actors union, SAG-AFTRA, negotiates the terms of the interactive media agreement, or IMA, with a bargaining committee of video game publishers, including Activision, Take-Two, Insomniac Games, WB Games, and others that represent a total of 30 signatory companies. Though SAG-AFTRA and the video game bargaining group were able to agree on a number of proposals, AI remained the final stumbling block resulting in the strike.

SAG-AFTRA's provisions on AI govern both voice and movement performers with respect to digital replicas -- or using an existing performance as the foundation to create new ones without the original performer -- and the use of generative AI to create performances without any initial input. However, according to SAG-AFTRA, the bargaining companies disagreed about which type of performer should be eligible for AI protections. SAG-AFTRA chief contracts officer Ray Rodriguez said that the bargaining companies initially wanted to offer protections to voice, not motion performers. "So anybody doing a stunt or creature performance, all those folks would have been left unprotected under the employers' offer," Rodriguez said in an interview with Aftermath. Rodriguez said that the companies later extended protections to motion performers, but only if "the performer is identifiable in the output of the AI digital replica." SAG-AFTRA rejected this proposal as it would potentially exclude a majority of movement performances. "Their proposal would carve out anything that doesn't look and sound identical to me," said Andi Norris, a member of SAG-AFTRA's IMA negotiating committee, during a press conference. "[The proposal] would leave movement specialists, including stunts, entirely out in the cold, to be replaced ... by soulless synthetic performers trained on our actual performances."

The bargaining game companies argued that the terms went far enough and would require actors' approval. "Our offer is directly responsive to SAG-AFTRA's concerns and extends meaningful AI protections that include requiring consent and fair compensation to all performers working under the IMA. These terms are among the strongest in the entertainment industry," wrote Audrey Cooling, a representative working on behalf of the video game companies on the bargaining committee in a statement to The Verge.

Read more of this story at Slashdot.


Nvidia Allegedly Scraped YouTube, Netflix Videos for AI Training Data

By: msmash
5 August 2024, 18:48
Nvidia scraped videos from YouTube, Netflix and other online platforms to compile training data for its AI products, 404 Media reported Monday, citing internal documents. The tech giant used this content to develop various AI projects, including its Omniverse 3D world generator and self-driving car systems, the report said. Some employees expressed concerns about potential legal issues surrounding the use of such content, the report said, adding that the management assured them of executive-level approval. Nvidia defended its actions, asserting they were "in full compliance with the letter and the spirit of copyright law" and emphasizing that copyright protects specific expressions rather than facts or ideas.

Read more of this story at Slashdot.


OpenAI has the tech to watermark ChatGPT text—it just won’t release it

6 August 2024, 00:12
(Image credit: Getty Images)

According to The Wall Street Journal, there's internal conflict at OpenAI over whether or not to release a watermarking tool that would allow people to test text to see whether it was generated by ChatGPT or not.

To deploy the tool, OpenAI would make tweaks to ChatGPT that would lead it to leave a trail in the text it generates that can be detected by a special tool. The watermark would be undetectable by human readers without the tool, and the company's internal testing has shown that it does not negatively affect the quality of outputs. The detector would be accurate 99.9 percent of the time. It's important to note that the watermark would be a pattern in the text itself, meaning it would be preserved if the user copies and pastes the text or even if they make modest edits to it.
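OpenAI has not published how its watermark works, but published schemes give a sense of the mechanism. In the "green list" approach of Kirchenbauer et al., generation is nudged toward a keyed subset of the vocabulary at each step, and detection simply counts how often that subset was used. A rough sketch of the detection side, purely illustrative and not OpenAI's method:

```python
# Rough sketch of green-list text watermark detection in the spirit of
# Kirchenbauer et al. (2023). Illustrative only; not OpenAI's undisclosed scheme.
import hashlib

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically select a keyed subset of the vocabulary."""
    def keyed(tok: str) -> int:
        return int(hashlib.sha256(f"{prev_token}|{tok}".encode()).hexdigest(), 16)
    ranked = sorted(vocab, key=keyed)
    return set(ranked[: int(len(ranked) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Share of tokens drawn from their green list; ~0.5 for unwatermarked text."""
    hits = sum(tok in green_list(prev, vocab)
               for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)
```

During generation the sampler would add a small bias to green-list logits, which is why a statistical pattern like this survives copy-paste and light edits.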

Some OpenAI employees have campaigned for the tool's release, but others believe that would be the wrong move, citing a few specific problems.



Elon Musk sues OpenAI, Sam Altman for making a “fool” out of him

5 August 2024, 19:49
Elon Musk and Sam Altman share the stage in 2015, the same year that Musk alleged that Altman's "deception" began. (Image credit: Michael Kovac / Contributor | Getty Images North America)

After withdrawing his lawsuit in June for unknown reasons, Elon Musk has revived a complaint accusing OpenAI and its CEO Sam Altman of fraudulently inducing Musk to contribute $44 million in seed funding by promising that OpenAI would always open-source its technology and prioritize serving the public good over profits as a permanent nonprofit.

Instead, Musk alleged that Altman and his co-conspirators—"preying on Musk’s humanitarian concern about the existential dangers posed by artificial intelligence"—always intended to "betray" these promises in pursuit of personal gains.

As OpenAI's technology advanced toward artificial general intelligence (AGI) and strove to surpass human capabilities, "Altman set the bait and hooked Musk with sham altruism then flipped the script as the non-profit’s technology approached AGI and profits neared, mobilizing Defendants to turn OpenAI, Inc. into their personal piggy bank and OpenAI into a moneymaking bonanza, worth billions," Musk's complaint said.


Meta unveils its AITemplate GPU framework

3 October 2022, 19:00

Meta is announcing its new AITemplate framework for GPUs.


The post Meta unveils its AITemplate GPU framework appeared first on SemiAccurate.


PSA: This weekend is your last chance to buy from the Xbox 360 online marketplace

26 July 2024, 16:25

This is your friendly reminder that Microsoft is set to close its Xbox 360 digital store on 29th July – that's next Monday – so you have just a few days left to make the most of those last discounts on some of the best Xbox 360 games of the generation.

Microsoft announced a raft of discounts on Xbox 360 digital games back in May. Whilst some games will live on via other platforms and services – including Microsoft's comprehensive backwards compatibility system – there are a handful of games that will disappear from sale forever. So, if you've ever fancied one, now's the time to pick it up.

X user Kalyoshika has shared a list of the games/DLC that "will not survive", as well as "a couple of games that are going from cheap, easy-to-get digital copies" to "impossible-to-get, expensive, piracy only, jump-through-hoops to play".

Read more


Tenstorrent Launches Wormhole AI Processors: 466 FP8 TFLOPS at 300W

19 July 2024, 20:30

Tenstorrent has unveiled its next-generation Wormhole processor for AI workloads that promises to offer decent performance at a low price. The company currently offers two add-on PCIe cards carrying one or two Wormhole processors as well as TT-LoudBox, and TT-QuietBox workstations aimed at software developers. The whole of today's release is aimed at developers rather than those who will deploy the Wormhole boards for their commercial workloads.

“It is always rewarding to get more of our products into developer hands. Releasing development systems with our Wormhole™ card helps developers scale up and work on multi-chip AI software,” said Jim Keller, CEO of Tenstorrent. “In addition to this launch, we are excited that the tape-out and power-on for our second generation, Blackhole, is going very well.”

Each Wormhole processor packs 72 Tensix cores (each featuring five RISC-V cores supporting various data formats) with 108 MB of SRAM, delivering 262 FP8 TFLOPS at 1 GHz within a 160W thermal design power. A single-chip Wormhole n150 card carries 12 GB of GDDR6 memory with 288 GB/s of bandwidth.

Wormhole processors offer flexible scalability to meet the varying needs of workloads. In a standard workstation setup with four Wormhole n300 cards, the processors can merge to function as a single unit, appearing as a unified, extensive network of Tensix cores to the software. This configuration allows the accelerators to either work on the same workload, be divided among four developers, or run up to eight distinct AI models simultaneously. A crucial feature of this scalability is that it operates natively without the need for virtualization. In data center environments, Wormhole processors will scale both inside one machine using PCIe and across machines using Ethernet.

From a performance standpoint, Tenstorrent's single-chip Wormhole n150 card (72 Tensix cores at 1 GHz, 108 MB SRAM, 12 GB GDDR6 at 288 GB/s) is capable of 262 FP8 TFLOPS at 160W, whereas the dual-chip Wormhole n300 board (128 Tensix cores at 1 GHz, 192 MB SRAM, 24 GB of aggregated GDDR6 at 576 GB/s) can offer up to 466 FP8 TFLOPS at 300W (according to Tom's Hardware).

To put that 466 FP8 TFLOPS at 300W figure into context, let's compare it to what AI market leader Nvidia offers at the same thermal design power. Nvidia's A100 does not support FP8, but it does support INT8, with a peak performance of 624 TOPS (1,248 TOPS with sparsity). By contrast, Nvidia's H100 supports FP8, and its peak performance is a massive 1,670 TFLOPS (3,341 TFLOPS with sparsity) at 300W, a big step above Tenstorrent's Wormhole n300.

There is a big catch, though: Tenstorrent's Wormhole n150 is offered for $999, whereas the n300 is available for $1,399. By contrast, a single Nvidia H100 card can retail for around $30,000, depending on quantities. Of course, we do not know whether four or eight Wormhole processors can indeed match the performance of a single H100, and even if they can, they will do so at 600W or 1,200W TDP, respectively.
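As a rough sanity check on that value argument, here is a small Python sketch (ours, not from the article) that turns the quoted peak throughput, TDP, and price figures into throughput per watt and per thousand dollars; the H100 price uses the article's approximate $30,000 figure, and all throughput numbers are the dense (non-sparse) values quoted above.

```python
# Rough value comparison using only the figures quoted in the article.
# All throughput numbers are dense FP8 TFLOPS (no sparsity).

cards = {
    # name: (peak FP8 TFLOPS, TDP in watts, approximate price in USD)
    "Tenstorrent Wormhole n150": (262, 160, 999),
    "Tenstorrent Wormhole n300": (466, 300, 1_399),
    "NVIDIA H100":               (1_670, 300, 30_000),
}

for name, (tflops, watts, price) in cards.items():
    per_watt = tflops / watts
    per_1k_usd = tflops / price * 1_000
    print(f"{name:26s} {per_watt:5.2f} TFLOPS/W   {per_1k_usd:6.1f} TFLOPS per $1,000")
```

Run as-is, it shows the H100 well ahead on raw throughput and throughput per watt, while the Wormhole cards deliver several times more FP8 throughput per dollar, which is the trade-off the article is highlighting.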

In addition to cards, Tenstorrent offers developers pre-built workstations with four n300 cards inside: the less expensive Xeon-based TT-LoudBox with active cooling and the premium EPYC-powered TT-QuietBox with liquid cooling.

Sources: Tenstorrent, Tom's Hardware

  • ✇Android Authority
  • I tried to replace Grammarly with Apple Intelligence, here’s how it went – Mahmoud Itani

I tried to replace Grammarly with Apple Intelligence, here’s how it went

1 August 2024 at 16:00

I’m in a love-hate relationship with Grammarly. So, when Apple Intelligence was previewed at WWDC24, I hoped the native, OS-level Writing Tools would replace my current third-party solution. Fast forward to earlier this week: Apple finally granted beta testers access to some of its upcoming AI features. Naturally, I rushed to put them to the test, and I’ve now reached some conclusions. While Apple Intelligence’s Writing Tools shine in some ways, Grammarly will remain installed on my MacBook Air for the time being.

Apple Intelligence vs Grammarly: Availability

Apple Intelligence is available on iPhone 15 Pro models, along with M-powered iPads and Macs. To access the new Writing Tools feature, you must install the first beta of iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1, then get past Apple Intelligence’s waitlist. The wait wasn’t long for me, and once the artificial smarts became active on my device, I could use Writing Tools with any selectable text across the system for free — even when I’m offline.

Grammarly logo on smartphone stock photo

Credit: Edgar Cervantes / Android Authority

Conversely, Grammarly is a cross-platform service available on most operating systems as a standalone app and browser extension. While the basic spelling and grammar checks are free, some more advanced features, like writing tone adjustments, require a monthly subscription. Notably, Grammarly is cloud-based software that processes your data on the company's servers and doesn't work without an internet connection.

I tried Apple Intelligence’s Writing Tools on my iPhone, iPad, and Mac, and the feature works similarly across all devices. So, while I’ll be documenting my experience using it on my primary work machine, a MacBook Air M2, you can expect to encounter the same strengths and weaknesses on iPhones and iPads.

Apple’s Writing Tools: First impressions

Apple Intelligence Writing Tools menu

Credit: Mahmoud Itani / Android Authority

In typical Apple fashion, the Writing Tools feature lives in a sleek menu that users can access when selecting text. The user interface is minimalistic, modern-looking, and unintrusive. Through Writing Tools, AI can proofread, rewrite, summarize, create key points, generate a table or list, and adjust the text’s tone to become friendly, professional, or concise. Unlike Grammarly, though, Apple Intelligence’s Writing Tools can’t generate text from scratch based on user prompts, beyond smart replies in the Mail and Messages apps.

Once Apple Intelligence performs the requested action, the revised text appears in a floating bubble. You can then copy the entire result, have it replace the original text, or select parts of it. While it’s generally intuitive to operate, I found it challenging to spot the changes it had applied due to the lack of visual cues in Safari. Some apps, like Notes and Messages, highlight the applied edits, so Safari could follow suit in a future OS beta.

For reference, Grammarly proactively scans the text and highlights errors, which you can individually address or ignore. Apple Intelligence goes for an all-or-none approach, designed to replace all your (selected) text instead of letting you go through the mistakes or suggested tweaks separately.

Apple Intelligence vs Grammarly: My personal use case

Grammarly scanning a note and detecting errors

Credit: Mahmoud Itani / Android Authority

How useful a piece of software is to you will depend primarily on your workflow and expectations. In my case, I only rely on Grammarly to fix typos, detect potential language errors, and rephrase some awkwardly worded sentences in my articles. I don’t really care about tone monitoring or adjustments, as my writing style differs depending on the article’s theme. I prefer maintaining my own freestyle tone, which I believe makes my work more personal and unique.

I attempted to utilize Apple Intelligence’s rewriting feature to rectify some awkwardly phrased sentences. In contrast to Grammarly, the tone of the rewritten text generally appeared too robotic and evidently AI-generated, in my opinion. I could discern that a human had not authored the sentences. Furthermore, Grammarly automatically identifies lengthy sentences and proposes smoother transitions, eliminating the requirement for manual selection and correction. To illustrate Apple Intelligence’s rewriting capabilities, the paragraph you are currently reading was processed using this feature. No human or Grammarly edits were made subsequently.

So, as you may have noticed, the paragraph above sounds too unnatural. It may be grammatical and understandable, but it’s clearly not human-made. Consequently, Apple Intelligence still can’t replace Grammarly for rephrasing bloated sentences and suggesting smoother flows in my articles.

Apple's suggestions are an all-or-nothing affair, and having to proofread Apple's proofread version defeats this tool's entire purpose.

As for fixing typos and language errors, I’d be okay without proactive text scanning if Apple would at least highlight the changes its AI has made when typing in Safari. These text tools are far from perfect, and I’ll never unquestioningly accept their suggestions. Having to proofread Apple’s proofread version of my article defeats the entire purpose of the assistant. Visual indicators would spare me the need to reread the whole text by letting me review only the individual tweaks and apply them where needed, which is exactly how Grammarly works.

So, until then, I’m unfortunately stuck with Grammarly, as I type my articles directly in WordPress.

Other Writing Tools options

Apple Intelligence summarizing a note

Credit: Mahmoud Itani / Android Authority

While I only need Writing Tools for proofreading and rewriting certain sentences in my articles, this Apple Intelligence offering features other handy options. For example, you can summarize any text on the web or have the AI list its key points. Similarly, it can neatly arrange the text in a table or as a list, which can make analyzing data easier. Otherwise, beyond neutrally rephrasing your text, Writing Tools can make the tone more friendly, professional, or concise, and I’ll include samples below.

Friendly:

Here’s a friendly tone sample from Apple Intelligence. I haven’t used this mode much because it doesn’t quite match my job or work style. But it can be useful when you’re texting people and you’re worried your tone might come across as too serious.

Professional:

Currently, we are adopting the professional tone sample provided by Apple Intelligence. Similarly, I have not extensively utilized this mode, as I generally possess the ability to maintain a professional tone when necessary, such as when communicating with colleagues. Otherwise, I tend to adopt an informal tone in everyday matters, as life is too short to take everything excessively seriously.

Concise:

Lastly, Apple Intelligence’s concise tone sample doesn’t fit my workflow, so I haven’t used it. Grammarly also supports a similar text tone feature, but I haven’t used it either. They’re useful, but not for me.

Apple Intelligence making the note's tone professional

Credit: Mahmoud Itani / Android Authority

Apple Intelligence’s respective tone modifiers generated the three short paragraphs above, and I made no edits afterward. I personally think they sound more natural than the robotic rewriting option I previewed earlier in this article. The friendly sample is pretty casual, the professional one sounds corporatesque enough, and the concise version successfully trimmed the fluff I had included in my original paragraph without losing the meaning.

Apple Intelligence has some potential

Apple Intelligence proofreading a note

Credit: Mahmoud Itani / Android Authority

While the missing change highlights and robotic rephrasing push me to stick with Grammarly for now, Apple Intelligence’s Writing Tools may change that with future updates. I dislike how persistent Grammarly can be, suggesting the same edits over and over after I dismiss them for not matching my style. While it’s certainly effective, it feels too intrusive, stubborn, and a bit in the way.

On the other hand, Apple’s Writing Tools menu is less flashy and distracting, as it only pops up when summoned by the user. More importantly, though, it continues to work even if you’re not connected to the internet. However, until Apple implements the visual indicators mentioned above in Safari, using Writing Tools to proofread my work isn’t feasible.

Fortunately, we’re still testing the very first beta of the software, and the tech overlord will likely make many improvements before the stable release debuts to the public in October. Will Apple save me from my toxic relationship with Grammarly by this fall? Stay tuned.
