Nvidia Conquers Latest AI Tests

By Samuel K. Moore, IEEE Spectrum

12 June 2024 at 17:00


For years, Nvidia has dominated many machine learning benchmarks, and now there are two more notches in its belt.

MLPerf, the AI benchmarking suite sometimes called “the Olympics of machine learning,” has released a new set of training tests to help make more and better apples-to-apples comparisons between competing computer systems. One of MLPerf’s new tests concerns fine-tuning of large language models, a process that takes an existing trained model and trains it a bit more with specialized knowledge to make it fit for a particular purpose. The other is for graph neural networks, a type of machine learning behind some literature databases, fraud detection in financial systems, and social networks.

Even with the additions and the participation of computers using Google’s and Intel’s AI accelerators, systems powered by Nvidia’s Hopper architecture dominated the results once again. One system that included 11,616 Nvidia H100 GPUs—the largest collection yet—topped each of the nine benchmarks, setting records in five of them (including the two new benchmarks).

“If you just throw hardware at the problem, it’s not a given that you’re going to improve.” —Dave Salvator, Nvidia

The 11,616-H100 system is “the biggest we’ve ever done,” says Dave Salvator, director of accelerated computing products at Nvidia. It smashed through the GPT-3 training trial in less than 3.5 minutes. A 512-GPU system, for comparison, took about 51 minutes. (Note that the GPT-3 task is not a full training, which could take weeks and cost millions of dollars. Instead, the computers train on a representative portion of the data, at an agreed-upon point well before completion.)

Compared to Nvidia’s largest entrant on GPT-3 last year, a 3,584 H100 computer, the 3.5-minute result represents a 3.2-fold improvement. You might expect that just from the difference in the size of these systems, but in AI computing that isn’t always the case, explains Salvator. “If you just throw hardware at the problem, it’s not a given that you’re going to improve,” he says.

“We are getting essentially linear scaling,” says Salvator. By that he means that twice as many GPUs lead to a halved training time. “[That] represents a great achievement from our engineering teams,” he adds.

Competitors are also getting closer to linear scaling. This round Intel deployed a system using 1,024 GPUs that performed the GPT-3 task in 67 minutes versus a computer one-fourth the size that took 224 minutes six months ago. Google’s largest GPT-3 entry used 12 times as many TPU v5p accelerators as its smallest entry and performed its task nine times as fast.
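As a rough check on these scaling claims, the reported figures can be plugged into a simple calculation of scaling efficiency: the observed speedup divided by the ideal speedup implied by the increase in accelerator count. This is a back-of-the-envelope sketch using only the numbers quoted above.

```python
def scaling_efficiency(chip_ratio, observed_speedup):
    """Fraction of ideal (linear) scaling actually achieved."""
    return observed_speedup / chip_ratio

# Nvidia: 3,584 -> 11,616 H100s gave a 3.2x speedup on the GPT-3 benchmark
print(scaling_efficiency(11_616 / 3_584, 3.2))   # ~0.99 -> "essentially linear"

# Intel: 4x more accelerators, 224 min -> 67 min
print(scaling_efficiency(4, 224 / 67))           # ~0.84

# Google: 12x more TPU v5p chips, 9x faster
print(scaling_efficiency(12, 9))                 # 0.75
```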

Linear scaling is going to be particularly important for upcoming “AI factories” housing 100,000 GPUs or more, Salvator says. He says to expect one such data center to come online this year, and another, using Nvidia’s next architecture, Blackwell, to start up in 2025.

Nvidia’s streak continues

Nvidia continued to boost training times despite using the same architecture, Hopper, as it did in last year’s training results. That’s all down to software improvements, says Salvator. “Typically, we’ll get a 2-2.5x [boost] from software after a new architecture is released,” he says.

For GPT-3 training, Nvidia logged a 27 percent improvement from the June 2023 MLPerf benchmarks. Salvator says there were several software changes behind the boost. For example, Nvidia engineers tuned up Hopper’s use of less accurate, 8-bit floating point operations by trimming unnecessary conversions between 8-bit and 16-bit numbers and better targeting of which layers of a neural network could use the lower precision number format. They also found a more intelligent way to adjust the power budget of each chip’s compute engines, and sped communication among GPUs in a way that Salvator likened to “buttering your toast while it’s still in the toaster.”

Additionally, the company implemented a scheme called flash attention. Invented in the Stanford University laboratory of SambaNova founder Chris Ré, flash attention is an algorithm that speeds transformer networks by minimizing writes to memory. When it first showed up in MLPerf benchmarks, flash attention shaved as much as 10 percent from training times. (Intel, too, used a version of flash attention but not for GPT-3. It instead used the algorithm for one of the new benchmarks, fine-tuning.)
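For readers curious what “minimizing writes to memory” looks like in practice, the sketch below illustrates the core idea behind flash attention: keys and values are processed in tiles while a running (online) softmax is maintained, so the full attention-score matrix is never written out. This is an illustrative NumPy toy, not Nvidia’s or Intel’s implementation.

```python
import numpy as np

def tiled_attention(q, k, v, tile=128):
    """Attention computed over key/value tiles with an online softmax,
    so the full (seq x seq) score matrix is never materialized.
    q: (seq_q, d), k/v: (seq_k, d). Returns (seq_q, d)."""
    scale = 1.0 / np.sqrt(q.shape[-1])
    out = np.zeros_like(q, dtype=np.float64)
    running_max = np.full(q.shape[0], -np.inf)   # per-query max score so far
    running_sum = np.zeros(q.shape[0])           # per-query softmax denominator

    for start in range(0, k.shape[0], tile):
        k_blk, v_blk = k[start:start + tile], v[start:start + tile]
        scores = (q @ k_blk.T) * scale           # only a (seq_q, tile) block
        blk_max = scores.max(axis=-1)
        new_max = np.maximum(running_max, blk_max)
        correction = np.exp(running_max - new_max)   # rescale earlier results
        p = np.exp(scores - new_max[:, None])
        out = out * correction[:, None] + p @ v_blk
        running_sum = running_sum * correction + p.sum(axis=-1)
        running_max = new_max

    return out / running_sum[:, None]

# Sanity check against naive attention
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((256, 64)) for _ in range(3))
s = (q @ k.T) / np.sqrt(64)
w = np.exp(s - s.max(-1, keepdims=True))
naive = (w / w.sum(-1, keepdims=True)) @ v
assert np.allclose(tiled_attention(q, k, v), naive, atol=1e-6)
```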

Using other software and network tricks, Nvidia delivered an 80 percent speedup in the text-to-image test, Stable Diffusion, versus its submission in November 2023.

New benchmarks

MLPerf adds new benchmarks and upgrades old ones to stay relevant to what’s happening in the AI industry. This year saw the addition of fine-tuning and graph neural networks.

Fine-tuning takes an already trained LLM and specializes it for use in a particular field. Nvidia, for example, took a trained 43-billion-parameter model and trained it on the GPU-maker’s design files and documentation to create ChipNeMo, an AI intended to boost the productivity of its chip designers. At the time, the company’s chief technology officer Bill Dally said that training an LLM was like giving it a liberal arts education, and fine-tuning was like sending it to graduate school.

The MLPerf benchmark takes a pretrained Llama-2-70B model and asks the system to fine tune it using a dataset of government documents with the goal of generating more accurate document summaries.

There are several ways to do fine-tuning. MLPerf chose one called low-rank adaptation (LoRA). The method winds up training only a small portion of the LLM’s parameters, leading to a 3-fold lower burden on hardware and reduced use of memory and storage versus other methods, according to the organization.
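To make the idea concrete, here is a minimal sketch of a LoRA-style layer in PyTorch (illustrative only, not MLPerf’s reference code): the pretrained weight matrix is frozen, and only a small pair of low-rank matrices is trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update."""
    def __init__(self, pretrained: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = pretrained
        self.base.weight.requires_grad_(False)       # freeze original weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        in_f, out_f = pretrained.in_features, pretrained.out_features
        self.lora_a = nn.Parameter(torch.randn(r, in_f) * 0.01)   # (r, in)
        self.lora_b = nn.Parameter(torch.zeros(out_f, r))         # (out, r), zero init
        self.scale = alpha / r

    def forward(self, x):
        # Original projection plus the low-rank correction B @ A @ x
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

# Only r*(in_f + out_f) parameters are trainable instead of in_f*out_f.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable, "trainable vs", 4096 * 4096, "frozen weights")  # 65,536 vs 16,777,216
```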

The other new benchmark involved a graph neural network (GNN). These are for problems that can be represented by a very large set of interconnected nodes, such as a social network or a recommender system. Compared to other AI tasks, GNNs require a lot of communication between nodes in a computer.

The benchmark trains a GNN on a database that shows relationships among academic authors, papers, and institutes—a graph with 547 million nodes and 5.8 billion edges—with the goal of predicting the right label for each node in the graph.
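The core operation inside such a network is message passing: each node updates its representation by aggregating its neighbors’ features, and a final layer predicts a label per node. The toy single-layer sketch below (NumPy, illustrative only, and far smaller than the benchmark model) shows the shape of that computation.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: average neighbor (and own) features,
    then apply a learned linear transform and a ReLU.
    adj: (n, n) 0/1 adjacency, feats: (n, d_in), weight: (d_in, d_out)."""
    adj_hat = adj + np.eye(adj.shape[0])          # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)
    msgs = (adj_hat @ feats) / deg                # mean over neighbors
    return np.maximum(msgs @ weight, 0.0)         # ReLU

# Toy graph: 4 nodes (e.g., two authors, a paper, an institute)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))
hidden = gcn_layer(adj, feats, rng.standard_normal((8, 16)))
logits = hidden @ rng.standard_normal((16, 3))    # 3 candidate node labels
print(logits.argmax(axis=1))                      # predicted label per node
```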

Future fights

Training rounds in 2025 may see head-to-head contests comparing new accelerators from AMD, Intel, and Nvidia. AMD’s MI300 series was launched about six months ago, and a memory-boosted upgrade, the MI325X, is planned for the end of 2024, with the next-generation MI350 slated for 2025. Intel says its Gaudi 3, generally available to computer makers later this year, will appear in MLPerf’s upcoming inferencing benchmarks. Intel executives have said the new chip has the capacity to beat H100 at training LLMs. But the victory may be short-lived, as Nvidia has unveiled a new architecture, Blackwell, which is planned for late this year.


Anthropic introduces Claude 3.5 Sonnet, matching GPT-4o on benchmarks

By Benj Edwards, Ars Technica

20 June 2024 at 23:04

[Image: The Anthropic Claude 3 logo, jazzed up by Benj Edwards. Credit: Anthropic / Benj Edwards]

On Thursday, Anthropic announced Claude 3.5 Sonnet, its latest AI language model and the first in a new series of "3.5" models that build upon Claude 3, launched in March. Claude 3.5 can compose text, analyze data, and write code. It features a 200,000 token context window and is available now on the Claude website and through an API. Anthropic also introduced Artifacts, a new feature in the Claude interface that shows related work documents in a dedicated window.

So far, people outside of Anthropic seem impressed. "This model is really, really good," wrote independent AI researcher Simon Willison on X. "I think this is the new best overall model (and both faster and half the price of Opus, similar to the GPT-4 Turbo to GPT-4o jump)."

As we've written before, benchmarks for large language models (LLMs) are troublesome because they can be cherry-picked and often do not capture the feel and nuance of using a machine to generate outputs on almost any conceivable topic. But according to Anthropic, Claude 3.5 Sonnet matches or outperforms competitor models like GPT-4o and Gemini 1.5 Pro on certain benchmarks like MMLU (undergraduate level knowledge), GSM8K (grade school math), and HumanEval (coding).



Judge Experiments With ChatGPT, And It’s Not As Crazy As It Sounds

By Mike Masnick, Techdirt

10 June 2024 at 18:32

Would you freak out if you found out a judge was asking ChatGPT a question to help decide a case? Would you think that it was absurd and a problem? Well, one appeals court judge felt the same way… until he started exploring the issue in one of the most thoughtful explorations of LLMs I’ve seen (while also being one of the most amusing concurrences I’ve seen).

I recognize that the use of generative AI tools in lots of places raises a lot of controversy, though I think the biggest complaint comes from the ridiculously bad and poorly thought-out uses of the technology (usually involving over-relying on the tech, when it is not at all reliable).

Back in April, I wrote about how I use LLMs at Techdirt, not to replace anyone or to do any writing, but as a brainstorming tool or a sounding board for ideas. I continue to find it useful in that manner, mainly as an additional tool (beyond my existing editors) to push me to really think through the arguments I’m making and how I’m making them.

So I found it somewhat interesting to see Judge Kevin Newsom, of the 11th Circuit, recently issue a concurrence in a case, solely for the point of explaining how he used generative AI tools in thinking about the case, and how courts might want to think (carefully!) about using the tech in the future.

The case itself isn’t all that interesting. It’s a dispute over whether an insurance provider is required under its agreement to cover a trampoline injury case after the landscaper who installed the trampoline was sued. The lower court and the appeals court both say that the insurance agreement doesn’t cover this particular scenario, and therefore, the insurance company has no duty to defend the landscaper.

But Newsom’s concurrence is about his use of generative AI, which he openly admits may be controversial, and begs for people to consider his entire argument:

I concur in the Court’s judgment and join its opinion in full. I write separately (and I’ll confess this is a little unusual) simply to pull back the curtain on the process by which I thought through one of the issues in this case—and using my own experience here as backdrop, to make a modest proposal regarding courts’ interpretations of the words and phrases used in legal instruments.

Here’s the proposal, which I suspect many will reflexively condemn as heresy, but which I promise to unpack if given the chance: Those, like me, who believe that “ordinary meaning” is the foundational rule for the evaluation of legal texts should consider—consider—whether and how AI-powered large language models like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude might—might—inform the interpretive analysis. There, having thought the unthinkable, I’ve said the unsayable.

Now let me explain myself.

As Judge Newsom notes, a part of the case involved determining what the common understanding of the term “landscaping” meant, as it was not clearly defined in the contract. He also says that, due to a quirk of Alabama law, the final disposition of the case didn’t actually depend on this definitional issue, in part because of the landscaper’s insurance application, where he denied doing any work on recreational equipment.

But that allows Newsom the chance to explore how AI might be useful here, in a case where it wasn’t necessary. And that allows him to be somewhat more informal than you might expect from a judge (though, of course, we all have our favorite examples of judges letting their hair down a bit in opinions).

Importantly, though, that off-ramp wasn’t always obviously available to us—or at least as I saw things, to me. Accordingly, I spent hours and hours (and hours) laboring over the question whether Snell’s trampoline-installation project qualified as “landscaping” as that term is ordinarily understood. And it was midway along that journey that I had the disconcerting thought that underlies this separate writing: Is it absurd to think that ChatGPT might be able to shed some light on what the term “landscaping” means? Initially, I answered my own question in the affirmative: Yes, Kevin, that is positively absurd. But the longer and more deeply I considered it, the less absurd it seemed.

I kind of appreciate the thoroughness with which he admits that there are good reasons to think he’s absurd here — he even thought it himself! — before explaining how he changed his mind.

He admits that he did “the usual” thing when courts try to determine the ordinary meaning of a word, which often involves… looking up what the dictionary or other such reference materials say. So he did a run-through of dictionaries and looked at their definitions of “landscaping.” But he noted that it didn’t really help all that much in determining if the trampoline was landscaping.

Then, he also looked at the pictures associated with the case:

After languishing in definitional purgatory for a while, I decided to look at the case from a different perspective—and I do mean look. The record contains a series of photographs of Snell’s trampoline-related project. Here’s one, which shows his prep work—in particular, the empty sand pit and the below-ground retaining wall that reinforced its borders:

[Image]

And another, which depicts the finished product, including both the polypropylene mat (the fun part) and the decorative wooden “cap”:

[Image]

I’m not particularly proud of it, but I’ll confess that the photos affected the way I thought about the case. Nothing in them really struck me as particularly “landscaping”-y. The problem, of course, was that I couldn’t articulate why. And visceral, gut-instinct decisionmaking has always given me the willies—I definitely didn’t want to be that guy. So in a way, I felt like I was back to square one.

I swear, this is the “bloggiest” Appeals Court concurrence I’ve ever read. And it only gets more bloggy:

And that’s when things got weird. Perhaps in a fit of frustration, and most definitely on what can only be described as a lark, I said to one of my clerks, “I wonder what ChatGPT thinks about all this.” So he ran a query: “What is the ordinary meaning of ‘landscaping’?” Here’s what ChatGPT said in response:

“Landscaping” refers to the process of altering the visible features of an area of land, typically a yard, garden or outdoor space, for aesthetic or practical purposes. This can include activities such as planting trees, shrubs, flowers, or grass, as well as installing paths, fences, water features, and other elements to enhance the appearance and functionality of the outdoor space.

Interesting, for two reasons. First, ChatGPT’s explanation seemed more sensible than I had thought it might—and definitely less nutty than I had feared. Second, it squared with my own impression—informed by my own experience writing, reading, speaking, and listening to American English in the real world—that ordinary people might well use the word “landscaping” (1) to include more than just botanical and other natural improvements and (2) to cover both aesthetic and functional objectives. In fact, several of the examples that ChatGPT flagged—“paths, fences, [and] water features”—jibed with the sorts of things that had sprung to mind when I first started thinking about the case.

But, don’t worry. He didn’t just go with that because it confirmed his priors. He (rightly) recognized that’s not how any of this should work. Again, this reads like a blog post, not a judicial concurrence, but that’s what makes it fun.

Suffice it to say, my interest was piqued. But I definitely didn’t want to fall into the trap of embracing ChatGPT’s definition just because it aligned with my priors. (Bad.) So, in what might have been a mistake—more on that later—we went ahead and asked it the ultimate question: “Is installing an in-ground trampoline ‘landscaping’”? ChatGPT responded as follows:

Yes, installing an in-ground trampoline can be considered a part of landscaping. Landscaping involves altering the visible features of an outdoor area for aesthetic or practical purposes, and adding an in-ground trampoline would modify the appearance and function of the space. It’s a deliberate change to the outdoor environment, often aimed at enhancing the overall landscape and usability of the area.

For good measure, I posed the same questions to Google’s Bard (since replaced by Gemini). The precise details aren’t particularly important, but the upshot is that both models’ answers indicated that the trampoline-related work Snell had performed—the excavation of the pit, the construction of the retaining wall, the installation of the mat, and the addition of the decorative wooden cap—just might be landscaping.

Apparently, it was around this point that he realized the aforementioned “off-ramp” created by Alabama law, such that this didn’t matter. But he was intrigued that his experiments here had moved him out of the “that’s absurd” category and into the “huh, this might be useful… somehow?” category.

So, he then uses more of the concurrence to explore the pros and cons. I won’t repost all of it, but the strongest argument in favor of considering this is that if the goal is to understand the “common” way in which a word or phrase is used, LLMs trained on the grand corpus of human knowledge might actually provide a better take on the common usage and understanding of such words and phrases.

The ordinary-meaning rule’s foundation in the common speech of common people matters here because LLMs are quite literally “taught” using data that aim to reflect and capture how individuals use language in their everyday lives. Specifically, the models train on a mind-bogglingly enormous amount of raw data taken from the internet—GPT-3.5 Turbo, for example, trained on between 400 and 500 billion words—and at least as I understand LLM design, those data run the gamut from the highest-minded to the lowest, from Hemingway novels and Ph.D. dissertations to gossip rags and comment threads. Because they cast their nets so widely, LLMs can provide useful statistical predictions about how, in the main, ordinary people ordinarily use words and phrases in ordinary life. So, for instance, and as relevant here, LLMs can be expected to offer meaningful insight into the ordinary meaning of the term “landscaping” because the internet data on which they train contain so many uses of that term, from so many different sources—e.g., professional webpages, DIY sites, news stories, advertisements, government records, blog posts, and general online chatter about the topic.

He’s quick to admit that there are potential problems with this. There are questions about what LLMs trained on, how representative they might be. There might also be other questions about usage changes over time, for example. There are plenty of reasons why these results shouldn’t be automatically relied on.

But as I noted in my own explanation of how I’m using LLMs, the key point is to use them as a way to help you think through issues, not to rely on them as some sort of godlike answer machine. And Judge Newsom seems to recognize that. At the very least, it’s possible that an LLM might give you better (or, at the very least, different) insight into “common usage” of a word or phrase than a dictionary editor.

So far as I can tell, researchers powering the AI revolution have created, and are continuing to develop, increasingly sophisticated ways to convert language (and I’m not making this up) into math that computers can “understand.”… The combination of the massive datasets used for training and this cutting-edge “mathematization” of language enables LLMs to absorb and assess the use of terminology in context and empowers them to detect language patterns at a granular level. So, for instance, modern LLMs can easily discern the difference—and distinguish—between the flying-mammal “bat” that uses echolocation and may or may not be living in your attic, on the one hand, and the wooden “bat” that Shohei Ohtani uses to hit dingers, on the other. See id. And that, as I understand it, is just the tip of the iceberg. LLM predictions about how we use words and phrases have gotten so sophisticated that they can (for better or worse) produce full-blown conversations, write essays and computer code, draft emails to co-workers, etc. And as anyone who has used them can attest, modern LLMs’ results are often sensible—so sensible, in fact, that they can border on the creepy. Now let’s be clear, LLMs aren’t perfect—and again, we’ll discuss their shortcomings in due course. But let’s be equally clear about what they are: high-octane language-prediction machines capable of probabilistically mapping, among other things, how ordinary people use words and phrases in context.
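To make the “mathematization” the judge describes concrete (an illustration, not anything from the concurrence), here is a small sketch that pulls contextual vectors for the word “bat” in two different sentences, assuming the open-source Hugging Face transformers library and the standard bert-base-uncased checkpoint are available. The exact model and similarity value are incidental; the point is that the two vectors land measurably apart because the surrounding context is encoded.

```python
# Toy look at contextual word vectors; assumes `transformers` and `torch`
# are installed and the bert-base-uncased checkpoint can be downloaded.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embedding_for(sentence, word):
    """Return the contextual vector for `word` inside `sentence`."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]        # (tokens, 768)
    idx = enc.input_ids[0].tolist().index(tok.convert_tokens_to_ids(word))
    return hidden[idx]

animal = embedding_for("the bat flew out of the attic at dusk", "bat")
wooden = embedding_for("he swung the bat and hit a home run", "bat")
print(torch.cosine_similarity(animal, wooden, dim=0))     # noticeably below 1.0
```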

And, he points out, dictionaries may be very good at proffering definitions, but they are still influenced by the team that puts together that dictionary:

First, although we tend to take dictionaries for granted, as if delivered by a prophet, the precise details of their construction aren’t always self-evident. Who exactly compiles them, and by what criteria do the compilers choose and order the definitions within any given entry? To be sure, we’re not totally in the dark; the online version of Merriam-Webster’s, for instance, provides a useful primer explaining “[h]ow . . . a word get[s] into” that dictionary. It describes a process by which human editors spend a couple of hours a day “reading a cross section of published material” and looking for new words, usages, and spellings, which they then mark for inclusion (along with surrounding context) in a “searchable text database” that totals “more than 70 million words drawn from a great variety of sources”—followed, as I understand things, by a step in which a “definer” consults the available evidence and exercises his or her judgment to “decide[] . . . the best course of action by reading through the citations and using the evidence in them to adjust entries or create new ones.”

Such explainers aside, Justice Scalia and Bryan Garner famously warned against “an uncritical approach to dictionaries.” Antonin Scalia & Bryan A. Garner, A Note on the Use of Dictionaries, 16 Green Bag 2d 419, 420 (2013). They highlighted as risks, for instance, that a volume could “have been hastily put together by two editors on short notice, and very much on the cheap,” and that without “consult[ing] the prefatory material” one might not be able “to understand the principles on which the dictionary [was] assembled” or the “ordering of [the] senses” of a particular term.

Judge Newsom wants you to know that he is not trying to slag the dictionaries here (nor to overly praise LLMs). He’s just pointing out some realities about both:

To be clear, I’m neither a nihilist nor a conspiracy theorist, but I do think that we textualists need to acknowledge (and guard against the fact) that dictionary definitions present a few known unknowns…. And while I certainly appreciate that we also lack perfect knowledge about the training data used by cutting-edge LLMs, many of which are proprietary in nature, see supra notes 6 & 8, I think it’s fair to say that we do know both (1) what LLMs are learning from—namely, tons and tons of internet data—and (2) one of the things that makes LLMs so useful—namely, their ability to accurately predict how normal people use language in their everyday lives.

[….]

Anyway, I don’t mean to paint either too grim a picture of our current, dictionary-centric practice—my own opinions are chock full of dictionary definitions, I hope to good effect—or too rosy a picture of the LLMs’ potentiality. My point is simply that I don’t think using LLMs entails any more opacity or involves any more discretion than is already inherent in interpretive practices that we currently take for granted—and in fact, that on both scores it might actually involve less.

And, of course, he has another long section on all the reasons to remain worried about LLMs in this context. He’s not a blind optimist, and he’s not one of those lawyers we’ve written about too often who just ChatGPT’d their way to useful and totally fake citations. He knows they hallucinate. But, he points out, if “hallucinating” is misrepresenting things, lawyers already do that themselves:

LLMs can “hallucinate.” First, the elephant in the room: What about LLMs’ now-infamous “hallucinations”? Put simply, an LLM “hallucinates” when, in response to a user’s query, it generates facts that, well, just aren’t true—or at least not quite true. See, e.g., Arbel & Hoffman, supra, at 48–50. Remember the lawyer who got caught using ChatGPT to draft a brief when it ad-libbed case citations—which is to say cited precedents that didn’t exist? See, e.g., Benjamin Weiser, Here’s What Happens When Your Lawyer Uses ChatGPT, N.Y. Times (May 29, 2023). To me, this is among the most serious objections to using LLMs in the search for ordinary meaning. Even so, I don’t think it’s a conversation-stopper. For one thing, LLM technology is improving at breakneck speed, and there’s every reason to believe that hallucinations will become fewer and farther between. Moreover, hallucinations would seem to be most worrisome when asking a specific question that has a specific answer—less so, it seems to me, when more generally seeking the “ordinary meaning” of some word or phrase. Finally, let’s shoot straight: Flesh-and-blood lawyers hallucinate too. Sometimes, their hallucinations are good-faith mistakes. But all too often, I’m afraid, they’re quite intentional—in their zeal, attorneys sometimes shade facts, finesse (and even omit altogether) adverse authorities, etc. So at worst, the “hallucination” problem counsels against blind-faith reliance on LLM outputs—in exactly the same way that no conscientious judge would blind-faith rely on a lawyer’s representations.

He also goes deep on some other downsides, including some we already discussed regarding what data the LLMs are trained on. If it’s only online speech, does that leave out speech that is common offline? Does it leave out communities who have less access to the internet? Basically, it’s part of the well-known “alignment problem” in generative AI, around the inevitability of some level of bias that is simply unavoidable. But that doesn’t mean you just shrug and accept things unquestioned.

He even considers that lawyers might try to shop around for different AIs that agree with them the most or, worse, try to “poison” an LLM to get it to agree with a preferred understanding. But, he notes, that seems unlikely to be all that effective.

There’s also this fun bit about the dystopian threat of “robo lawyers,” which I especially appreciate given that we once created a game, called HAL of Justice, for a legal academic conference that involved turning everyone involved into futuristic AI judges handling court cases.

Would the consideration of LLM outputs in interpreting legal texts inevitably put us on some dystopian path toward “robo judges” algorithmically resolving human disputes? I don’t think so. As Chief Justice Roberts recently observed, the law will always require “gray area[]” decisionmaking that entails the “application of human judgment.” Chief Justice John G. Roberts, Jr., 2023 Year-End Report on the Federal Judiciary 6 (Dec. 31, 2023). And I hope it’s clear by this point that I am not—not, not, not—suggesting that any judge should ever query an LLM concerning the ordinary meaning of some word (say, “landscaping”) and then mechanistically apply it to her facts and render judgment. My only proposal—and, again, I think it’s a pretty modest one—is that we consider whether LLMs might provide additional datapoints to be used alongside dictionaries, canons, and syntactical context in the assessment of terms’ ordinary meaning. That’s all; that’s it.

And with that, he closes with an interesting provocation. If you’ve come around to his idea that we should be considering this form of algorithmically-assisted brainstorming, what are the key things we should think about? He highlights that prompt construction will matter a lot. How do you create the “right” prompt? Should you try multiple prompts? Should you use multiple LLMs? Should there be some indication of how “confident” an LLM is in any particular answer? And, as noted earlier, how do you handle words whose meanings change over time, if the standard should be the meaning at the relevant time of the contract?
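To picture what that might look like in practice, here is one hedged sketch (not the judge's) of polling a model with several prompt phrasings and collecting the answers side by side. It assumes the OpenAI Python client and an API key; the model name and prompts are arbitrary choices, and repeating the loop with other providers' clients would give the multi-model comparison he contemplates.

```python
# Sketch only: assumes the `openai` package and an OPENAI_API_KEY env variable.
from openai import OpenAI

client = OpenAI()

prompts = [
    "What is the ordinary meaning of 'landscaping'?",
    "How would an ordinary English speaker define 'landscaping'?",
    "Is installing an in-ground trampoline 'landscaping'? Answer and explain briefly.",
]

for prompt in prompts:
    resp = client.chat.completions.create(
        model="gpt-4o",                  # repeat across models/providers to compare
        messages=[{"role": "user", "content": prompt}],
        temperature=0,                   # reduce run-to-run variation
    )
    print(prompt, "->", resp.choices[0].message.content[:200])
```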

And he closes in the most blog-like fashion imaginable.

Just my two cents.

I find this whole discussion fascinating. As I highlighted in my own post about how we use LLMs for brainstorming, I recognize that some people hate the idea outright, while others are too utopian about “AI in everything” without thinking through the potential downsides. It’s nice for some to recognize that there is a reasonable middle path: that they have utility in certain, specific scenarios, if used properly, and not relied on as a final arbiter of anything.

Also, it’s just kind of fun to read through this quite thoughtful exploration of the topic and how Judge Newsom is considering these issues (fwiw, Newsom has been the author of opinions we’ve agreed with strongly, as well as ones we’ve disagreed with strongly, so it’s not as though I feel one way or the other about this based on his jurisprudence — it’s just a really interesting discussion).

I also appreciate that, unlike so many conversations on tech like generative AI these days, he’s not taking the extremist approach of it being “all good” or “all bad,” and is actually willing to explore the tradeoffs, nuances, and open questions related to the issues. It would be nice if the world saw more of that, just in general.


Vision Is Why LLMs Matter On The Edge

By Ben Gomes, Semiconductor Engineering

30 May 2024 at 09:05

Large Language Models (LLMs) have taken the world by storm since the 2017 Transformers paper, but pushing them to the edge has proved problematic. Just this year, Google had to revise its plans to roll out Gemini Nano on all new Pixel models — the down-spec’d hardware options proved unable to host the model as part of a positive user experience. But the implementation of language-focused models at the edge is perhaps the wrong metric to look at. If you are forced to host a language-focused model for your phone or car in the cloud, that may be acceptable as an intermediate step in development. Vision applications of AI, on the other hand, are not so flexible: many of them rely on low latency and high dependability. If a vehicle relies on AI to identify that it should not hit the obstacle in front of it, a blip in contacting the server can be fatal. Accordingly, the most important LLMs to fit on the edge are vision models — the models whose purpose is most undermined by the reliance on remote resources.

“Large Language Models” can be an imprecise term, so it is worth defining. The original 2017 Transformer LLM that many see as kickstarting the AI rush was 215 million parameters. BERT was giant for its time (2018) at 335 million parameters. Both of these models might be relabeled as “Small Language Models” by some today to distinguish them from models like GPT-4 and Gemini Ultra with as many as 1.7 trillion parameters, but for the purposes here, all fall under the LLM category. All of these are language models though, so why does it matter for vision? The trick here is that language is an abstract system of deriving meaning from a structured ordering of arbitrary objects. There is no “correct” association of meaning and form in language on which to base these models. Accordingly, these arbitrary units are substitutable — nothing forces architecture developed for language to only be applied to language, and all the language objects are converted to multidimensional vectors anyway. LLM architecture is thus highly generalizable, and typically retains the core strength from having been developed for language: a strong ability to carry through semantic information. Thus, when we talk about LLMs at the edge, it can be a language model cross-trained on image data, or it might be a vision-only model which is built on the foundation of technology designed for language. At the software and hardware levels, for bringing models to the edge, this distinction makes little difference.
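That interchangeability is easy to see in code: a vision transformer simply turns image patches into the same kind of token vectors a language model consumes, then hands them to a stock transformer encoder. Below is a minimal, illustrative PyTorch sketch (a toy, not any production edge model).

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Image patches -> token vectors -> a standard transformer encoder.
    The encoder layers are identical to what a language model would use."""
    def __init__(self, img=224, patch=16, dim=256, heads=4, layers=4, classes=10):
        super().__init__()
        self.to_tokens = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patchify + project
        n_tokens = (img // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))               # positional embedding
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(dim, classes)

    def forward(self, images):                       # images: (batch, 3, img, img)
        tokens = self.to_tokens(images)              # (batch, dim, 14, 14)
        tokens = tokens.flatten(2).transpose(1, 2)   # (batch, 196, dim): a "sentence" of patches
        encoded = self.encoder(tokens + self.pos)
        return self.head(encoded.mean(dim=1))        # pool tokens, classify

print(TinyViT()(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 10])
```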

Vision LLMs on the edge apply flexibly across many different use cases, but the key applications where they show the greatest advantages are: embodied agents (an especially striking example of the benefits of cross-training embodied agents on language data can be seen in Dynalang’s advantages over DreamerV3 in interpreting the world, due to superior semantic parsing), inpainting (as seen with the latent diffusion models), decision-making in self-driving vehicles (LINGO-2), context-aware security (such as ViViT), information extraction (Gemini’s ability to find and report data from video), and user assistance (physician aids, driver assist, etc.). Specifically notable and exciting here is the ability of Vision LLMs to leverage language as a lossy storage and abstraction of visual data for decision-making algorithms to then interact with — especially as seen in LINGO-2 and Dynalang. Many of these vision-oriented LLMs depend on edge deployment to realize their value, and they benefit from the work that has already been done to optimize language-oriented LLMs. Despite this, vision LLMs are still struggling for edge deployment just as the language-oriented models are. The improvements for edge deployments come in three classes: model architecture, system resource utilization, and hardware optimization. We will briefly review the first two and look more closely at the third, since it often gets the least attention.

Model architecture optimizations include the optimizations that must be made at the model level: “distilling” models to create leaner imitators, restructuring where models spend their resource budget (such as the redistribution of transformer modules in Stable Diffusion XL) and pursuing alternate architectures (state-space models, H3 modules, etc.) to escape the quadratically scaling costs of transformers.
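As one concrete example from that first class, knowledge distillation trains a small "student" model to imitate a large "teacher" by matching its output distribution. Here is a hedged PyTorch sketch of the usual temperature-scaled loss (illustrative, not tied to any particular edge model).

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend of (a) KL divergence to the temperature-softened teacher
    distribution and (b) ordinary cross-entropy on the true labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                   # standard T^2 scaling
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: 8 samples, 100 classes
student = torch.randn(8, 100, requires_grad=True)
teacher = torch.randn(8, 100)
loss = distillation_loss(student, teacher, torch.randint(0, 100, (8,)))
loss.backward()                                   # gradients flow only to the student
print(loss.item())
```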

System resource optimizations are all the things that can be done in software to an already complete model. Quantization (to INT8, INT4, or even INT2) is a common focus here for both latency and memory burden, but of course compromises accuracy. Speculative decoding can improve utilization and latency. And of course, tiling, such as seen with FlashAttention, has become near-ubiquitous for improving utilization and latency.
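Quantization, the first of those techniques, can be sketched in a few lines: map floating-point weights onto an 8-bit integer grid with a per-tensor scale, accepting a small rounding error in exchange for a 4x smaller footprint than FP32. The NumPy sketch below is illustrative only; production toolchains add per-channel scales, calibration, and quantization-aware training.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization: w ~= scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal((1024, 1024)).astype(np.float32)
q, scale = quantize_int8(w)

print("memory:", w.nbytes // 1024, "KB ->", q.nbytes // 1024, "KB")   # 4096 KB -> 1024 KB
print("mean abs error:", np.abs(w - dequantize(q, scale)).mean())     # small rounding error
```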

Finally, there are hardware optimizations. The first option here is a general-purpose GPU, TPU, NPU or similar, but those tend to be best suited for settings where capability is needed without demanding streamlined optimization such as might be the case on a home computer. Custom hardware, such as purpose-built NPUs, generally has the advantage when the application is especially sensitive to latency or resource consumption, and this covers much of the applications for vision LLMs.

Exploring this trade-off further: Stable Diffusion’s architecture and resource demands have been discussed here before, but it is worth circling back to it as an example of why hardware solutions are so important in this space. Using Stable Diffusion 1.5 for simplicity, let us focus specifically on the U-Net component of the model. In this diagram, you can see the rough construction of the model: it downsamples repeatedly on the left until it hits the bottom of the U, and then upsamples up the right side, bringing back in residual connections from the left at each stage.

This U-Net implementation has 865 million parameters and entails 750 billion operations. The parameters are a fair proxy for the memory burden, and the operations are a direct representation of the compute demands. The distribution of these burdens is not even, however. If we plot the parameters and operations for each layer, a clear picture emerges:

These graphs show a model that is destined for gross inefficiencies at every step. Most of the memory burden peaks in the center, whereas the compute is heavily taxed at the two tails but underutilized in the center. These inefficiencies come with costs. The memory peak can overwhelm on-chip storage, thus incurring I/O operations, or else requiring a large excess of unused memory for most of the graph. Similarly, storing residuals for later incurs I/O latency and higher power draws. The underutilization of the compute power at the center of the graph means that the processor will have wasteful power draw as it cannot use the tail of the power curve as it does sparser operations. While software interventions can also help here, this is exactly the kind of problem that custom hardware solutions are meant to address. Custom silicon tailored to the model can let you offload some of that memory burden into additional compute cycles at the center of the graph without incurring extra I/O operations by recomputing the residual connections instead of kicking them out to memory. In doing so, the total required memory drops, and the processor can remain at full utilization. Rightsizing the resource allotment and finding ways to redistribute the burdens are key components to how these models can be best deployed at the edge.
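That recompute-instead-of-store trade-off has a direct software analogue in today's training frameworks: activation checkpointing discards intermediate results and recomputes them when needed, trading extra compute for a lower memory peak. The PyTorch sketch below shows the idea (an analogue for illustration, not the custom-silicon scheme described above).

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint   # use_reentrant flag in recent PyTorch

block = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))
x = torch.randn(64, 1024, requires_grad=True)

# Standard residual connection: the intermediate activations of `block`
# are kept in memory so the backward pass can reuse them.
y_stored = x + block(x)

# Checkpointed residual connection: intermediates are discarded and the
# block is recomputed during backward, trading compute for memory.
y_recomputed = x + checkpoint(block, x, use_reentrant=False)

y_recomputed.sum().backward()
print(x.grad.shape)   # gradients match either way; only the peak memory differs
```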

Despite their name, LLMs are important to the vision domain for their flexibility in handling different inputs and their strength at interpreting meaning in images. Whether used for embodied agents, context-aware security, or user assistance, their use at the edge requires a dependable low latency which precludes cloud-based solutions, in contrast to other AI applications on edge devices. Bringing them successfully to the edge asks for optimizations at every level, and we have seen already some of the possibilities at the hardware level. Conveniently, the common architecture with language-oriented LLMs means that many of the solutions needed to bring these most essential models to the edge in turn may also generalize back to the language-oriented models which donated the architecture in the first place.



How To Successfully Deploy GenAI On Edge Devices

By Gordon Cooper, Semiconductor Engineering

16 May 2024 at 09:06

Generative AI (GenAI) burst onto the scene and into the public’s imagination with the launch of ChatGPT in late 2022. Users were amazed at the natural language processing chatbot’s ability to turn a short text prompt into coherent humanlike text including essays, language translations, and code examples. Technology companies – impressed with ChatGPT’s abilities – have started looking for ways to improve their own products or customer experiences with this innovative technology. Since the ‘cost’ of adding GenAI includes a significant jump in computational complexity and power requirements versus previous AI models, can this class of AI algorithms be applied to practical edge device applications where power, performance and cost are critical? It depends.

What is GenAI?

A simple definition of GenAI is ‘a class of machine learning algorithms that can produce various types of content including humanlike text and images.’ Early machine learning algorithms focused on detecting patterns in images, speech or text and then making predictions based on the data. For example, predicting the percentage likelihood that a certain image included a cat. GenAI algorithms take the next step – they perceive and learn patterns and then generate new patterns on demand by mimicking the original dataset. They generate a new image of a cat or describe a cat in detail.

While ChatGPT might be the most well-known GenAI algorithm, there are many available, with more being released on a regular basis. Two major types of GenAI algorithms are text-to-text generators – aka chatbots – like ChatGPT, GPT-4, and Llama2, and text-to-image generative models like DALLE-2, Stable Diffusion, and Midjourney. You can see example prompts and their returned outputs of these two types of GenAI models in figure 1. Because one is text based and one is image based, these two types of outputs will demand different resources from edge devices attempting to implement these algorithms.

Fig. 1: Example GenAI outputs from a text-to-image generator (DALLE-2) and a text-to-text generator (ChatGPT).

Edge device applications for GenAI

Common GenAI use cases require connection to the internet and from there access to large server farms to compute the complex generative AI algorithms. However, for edge device applications, the entire dataset and neural processing engine must reside on the individual edge device. If the generative AI models can be run at the edge, there are potential use cases and benefits for applications in automobiles, cameras, smartphones, smart watches, virtual and augmented reality, IoT, and more.

Deploying GenAI on edge devices has significant advantages in scenarios where low latency, privacy or security concerns, or limited network connectivity are critical considerations.

Consider the possible application of GenAI in automotive applications. A vehicle is not always in range of a wireless signal, so GenAI needs to run with resources available on the edge. GenAI could be used for improving roadside assistance and converting a manual into an AI-enhanced interactive guide. In-car uses could include a GenAI-powered virtual voice assistant, improving the ability to set navigation, play music or send messages with your voice while driving. GenAI could also be used to personalize your in-cabin experience.

Other edge applications could benefit from generative AI. Augmented Reality (AR) edge devices could be enhanced by locally generating overlay computer-generated imagery and relying less heavily on cloud processing. While connected mobile devices can use generative AI for translation services, disconnected devices should be able to offer at least a portion of the same capabilities. Like our automotive example, voice assistant and interactive question-and-answer systems could benefit a range of edge devices.

While use cases for GenAI at the edge exist now, implementations must overcome the challenges related to computational complexity and model size, as well as the limitations of power, area, and performance inherent in edge devices.

What technology is required to enable GenAI?

To understand GenAI’s architectural requirements, it is helpful to understand its building blocks. At the heart of GenAI’s rapid development are transformers, a relatively new type of neural network introduced in a Google Brain paper in 2017. Transformers have outperformed established AI models like Recurrent Neural Networks (RNNs) for natural language processing and Convolutional Neural Networks (CNNs) for images, video or other two- or three-dimensional data. A significant architectural improvement of a transformer model is its attention mechanism. Transformers can pay more attention to specific words or pixels than legacy AI models, drawing better inferences from the data. This allows transformers to better learn contextual relationships between words in a text string compared to RNNs and to better learn and express complex relationships in images compared to CNNs.
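The attention mechanism itself fits in a handful of lines: every token's query is compared against every token's key, and the resulting softmax weights determine how much each token's value contributes to the output. The bare NumPy sketch below (single head, no masking, illustrative only) makes the "pay more attention" idea literal.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention for one head.
    q, k, v: (tokens, d). Returns output plus the attention-weight matrix."""
    scores = (q @ k.T) / np.sqrt(q.shape[-1])     # relevance of each token to each other token
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)     # softmax: rows sum to 1
    return weights @ v, weights

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((6, 16)) for _ in range(3))   # e.g., a 6-word sentence
out, w = attention(q, k, v)
print(w.round(2))   # row i shows how much token i "attends" to every other token
```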

Fig. 2: Parameter sizes for various machine learning algorithms.

GenAI models are pre-trained on vast amounts of data, which allows them to better recognize and interpret human language or other types of complex data. The larger the datasets, the better the model can process human language, for instance. Compared to CNN or vision transformer machine learning models, GenAI algorithms have parameters – the pretrained weights or coefficients used in the neural network to identify patterns and create new ones – that are orders of magnitude larger. We can see in figure 2 that ResNet50 – a common CNN algorithm used for benchmarking – has 25 million parameters (or coefficients). Some transformers like BERT and Vision Transformer (ViT) have parameters in the hundreds of millions. Other transformers, like MobileViT, have been optimized to better fit embedded and mobile applications; MobileViT is comparable to the CNN model MobileNet in parameter count.

Compared to CNNs and vision transformers, ChatGPT requires 175 billion parameters and GPT-4 requires 1.75 trillion parameters. Even GPUs implemented in server farms struggle to execute these high-end large language models. How could an embedded neural processing unit (NPU) hope to handle so many parameters given the limited memory resources of edge devices? The answer is they cannot. However, there is a trend toward making GenAI more accessible in edge device applications, which have more limited computation resources. Some LLM models are tuned to reduce the resource requirements for a reduced parameter set. For example, Llama-2 is offered in a 70-billion-parameter version, but smaller versions with fewer parameters have also been created. Llama-2 with seven billion parameters is still large, but it is within reach of a practical embedded NPU implementation.
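A quick calculation shows why parameter count dominates the feasibility question: the weights alone must fit in, or stream through, device memory, and the footprint scales directly with numeric precision. The rough numbers below ignore activations and other runtime state.

```python
# Approximate weight-storage footprint for various models and precisions.
BYTES = {"FP32": 4, "FP16": 2, "INT8": 1, "INT4": 0.5}

models = {
    "ResNet50":    25e6,
    "Llama-2 7B":  7e9,
    "Llama-2 70B": 70e9,
    "GPT-3 175B":  175e9,
}

for name, params in models.items():
    sizes = ", ".join(f"{p}: {params * b / 1e9:.1f} GB" for p, b in BYTES.items())
    print(f"{name:12s} {sizes}")
# Llama-2 7B at INT8 is ~7 GB, plausible for a well-provisioned edge device;
# 70B+ models remain far outside typical edge memory budgets.
```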

There is no hard threshold for generative AI running on the edge. However, text-to-image generators like Stable Diffusion, with one billion parameters, can run comfortably on an NPU, and the expectation is for edge devices to run LLMs of up to six to seven billion parameters. MLCommons has added GPT-J, a six-billion-parameter GenAI model, to its MLPerf edge AI benchmark list.

Running GenAI on the edge

GenAI algorithms require a significant amount of data movement and computational complexity (with transformer support). The balance of those two requirements determines whether a given architecture is compute-bound – the hardware cannot perform multiplications fast enough for the data available – or memory-bound – there is not enough memory capacity and/or bandwidth for all the multiplications required for processing. Text-to-image generation has a better mix of compute and bandwidth requirements – more computation is needed to process two-dimensional images, and there are fewer parameters (in the one-billion range). Large language models are more lopsided: less compute is required, but there is a significantly larger amount of data movement. Even the smaller (6-7B parameter) LLMs are memory-bound.

The obvious solution is to choose the fastest memory interface available. From figure 3, you can see that a typical memory used in edge devices, LPDDR5, has a bandwidth of 51 GB/s, while HBM2E can support up to 461 GB/s. This does not, however, take into consideration the power-down benefits of LPDDR memory over HBM. While HBM interfaces are often used in high-end server-type AI implementations, LPDDR is almost exclusively used in power-sensitive applications because of its power-down abilities.

Fig. 3: The bandwidth and power difference between LPDDR and HBM.

Using an LPDDR memory interface automatically limits the maximum data bandwidth to well below what is achievable with an HBM memory interface. That means edge applications will automatically have less bandwidth for GenAI algorithms than an NPU or GPU used in a server application. One way to address bandwidth limitations is to increase the amount of on-chip L2 memory. However, this impacts area and, therefore, silicon cost. While embedded NPUs often implement hardware and software techniques to reduce bandwidth requirements, that will not allow LPDDR to approach HBM bandwidths. The embedded AI engine will be limited to the amount of LPDDR bandwidth available.
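Because LLM decoding must stream essentially all of the weights from memory for every generated token, a first-order bound on edge performance falls directly out of the bandwidth figures above. The rough, roofline-style estimate below uses those numbers and ignores on-chip SRAM reuse and caching.

```python
# tokens/sec upper bound ~= memory bandwidth / bytes of weights read per token
def max_tokens_per_sec(params, bytes_per_param, bandwidth_gbs):
    weight_bytes = params * bytes_per_param
    return bandwidth_gbs * 1e9 / weight_bytes

for mem, bw in [("LPDDR5", 51), ("HBM2E", 461)]:
    # Llama-2 7B with INT8 weights (1 byte per parameter)
    print(f"{mem}: ~{max_tokens_per_sec(7e9, 1, bw):.1f} tokens/s upper bound")
# LPDDR5: ~7 tokens/s vs HBM2E: ~66 tokens/s, the memory-bound gap described above
```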

Implementation of GenAI on an NPX6 NPU IP

The Synopsys ARC NPX6 NPU IP family is based on a sixth-generation neural network architecture designed to support a range of machine learning models including CNNs and transformers. The NPX6 family is scalable with a configurable number of cores, each with its own independent matrix multiplication engine, generic tensor accelerator (GTA), and dedicated direct memory access (DMA) units for streamlined data processing. The NPX6 can scale for applications requiring less than one TOPS of performance to those requiring thousands of TOPS using the same development tools to maximize software reuse.

The matrix multiplication engine, GTA and DMA have all been optimized for supporting transformers, which allow the ARC NPX6 to support GenAI algorithms. Each core’s GTA is expressly designed and optimized to efficiently perform nonlinear functions, such as ReLU, GELU, sigmoid. These are implemented using a flexible lookup table approach to anticipate future nonlinear functions. The GTA also supports other critical operations, including SoftMax and L2 normalization needed in transformers. Complementing this, the matrix multiplication engine within each core can perform 4,096 multiplications per cycle. Because GenAI is based on transformers, there are no computation limitations for running GenAI on the NPX6 processor.
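The lookup-table approach mentioned above is easy to illustrate in software: precompute the nonlinear function at fixed sample points, then answer any query with a table lookup plus linear interpolation, which is far cheaper in hardware than evaluating the function exactly. The toy NumPy sketch below is illustrative only, not Synopsys' implementation.

```python
import numpy as np

def make_lut(fn, lo=-8.0, hi=8.0, entries=256):
    """Sample `fn` at `entries` evenly spaced points."""
    xs = np.linspace(lo, hi, entries)
    return xs, fn(xs)

def lut_eval(x, xs, ys):
    """Approximate fn(x) by linear interpolation between table entries."""
    return np.interp(x, xs, ys)

gelu = lambda x: 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))
xs, ys = make_lut(gelu)                      # built once; reused for every activation

x = np.random.default_rng(0).uniform(-8, 8, 10_000)
print("max LUT error:", np.abs(lut_eval(x, xs, ys) - gelu(x)).max())   # stays small for 256 entries
```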

Efficient NPU design for transformer-based models like GenAI requires complex multi-level memory management. The ARC NPX6 processor has a flexible memory hierarchy and can support a scalable L2 memory up to 64MB of on chip SRAM. Furthermore, each NPX6 core is equipped with independent DMAs dedicated to the tasks of fetching feature maps and coefficients and writing new feature maps. This segregation of tasks allows for an efficient, pipelined data flow that minimizes bottlenecks and maximizes the processing throughput. The family also has a range of bandwidth reduction techniques in hardware and software to maximize bandwidth.

In an embedded GenAI application, the ARC NPX6 family will only be limited by the LPDDR available in the system. The NPX6 successfully runs Stable Diffusion (text-to-image) and Llama-2 7B (text-to-text) GenAI algorithms with efficiency dependent on system bandwidth and the use of on-chip SRAM. While larger GenAI models could run on the NPX6, they will be slower – measured in tokens per second – than server implementations. Learn more at www.synopsys.com/npx



Fundamental Issues In Computer Vision Still Unresolved

By Karen Heyman, Semiconductor Engineering

2 May 2024 at 09:08

Given computer vision’s place as the cornerstone of an increasing number of applications, from ADAS to medical diagnosis and robotics, it is critical that its weak points be mitigated, such as its difficulty identifying corner cases or its reliance on algorithms trained on shallow datasets. While well-known bloopers are often the result of human decisions, there are also fundamental technical issues that require further research.

“Computer vision” and “machine vision” were once used nearly interchangeably, with machine vision most often referring to the hardware embodiment of vision, such as in robots. Computer vision (CV), which started as the academic amalgam of neuroscience and AI research, has now become the dominant idea and preferred term.

“In today’s world, even the robotics people now call it computer vision,” said Jay Pathak, director, software development at Ansys. “The classical computer vision that used to happen outside of deep learning has been completely superseded. In terms of the success of AI, computer vision has a proven track record. Anytime self-driving is involved, any kind of robot that is doing work — its ability to perceive and take action — that’s all driven by deep learning.”

The original intent of CV was to replicate the power and versatility of human vision. Because vision is such a basic sense, the problem seemed like it would be far easier than higher-order cognitive challenges, like playing chess. Indeed, in the canonical anecdote about the field’s initial naïve optimism, Marvin Minsky, co-founder of the MIT AI Lab, having forgotten to include a visual system in a robot, assigned the task to undergraduates. But instead of being quick to solve, the problem consumed a generation of researchers.

Both academic and industry researchers work on problems that roughly can be split into three categories:

  • Image capture: The realm of digital cameras and sensors. It may use AI for refinements or it may rely on established software and hardware.
  • Image classification/detection: A subset of AI/ML that uses image datasets as training material to build models for visual recognition.
  • Image generation: The most recent work, which uses tools like LLMs to create novel images, and with the breakthrough demonstration of OpenAI’s Sora, even photorealistic videos.

Each one alone has spawned dozens of PhD dissertations and industry patents. Image classification/detection, the primary focus of this article, underlies ADAS, as well as many inspection applications.

The change from lab projects to everyday uses came as researchers switched from rules-based systems that simulated visual processing as a series of if/then statements (if red and round, then apple) to neural networks (NNs), in which computers learned to derive salient features by training on image datasets. NNs are basically layered graphs. The earliest models, the McCulloch-Pitts neuron of 1943 and the Perceptron that followed in the late 1950s, were one-layer simulations of a biological neuron, which is one element in a vast network of interconnecting brain cells. Neurons have inputs (dendrites) and outputs (axons), driven by electrical and chemical signaling. The Perceptron and its descendant neural networks emulated the form but skipped the chemistry, instead focusing on electrical signals with algorithms that weighted input values. Over the decades, researchers refined different forms of neural nets with vastly increased inputs and layers, eventually becoming the deep learning networks that underlie the current advances in AI.
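As a minimal illustration of that shift, the sketch below contrasts a hand-coded if/then rule with a one-layer perceptron that learns weights for its inputs. The fruit features, training data, and thresholds are hypothetical, chosen only to echo the (if red and round, then apple) example.

```python
# Minimal sketch: a hand-written rule versus a one-layer perceptron that
# learns a weighted decision from toy data. Features are hypothetical:
# [redness, roundness], with label 1 meaning "apple".

def rule_based(redness: float, roundness: float) -> int:
    # Classical rules-based vision: the decision logic is written by hand.
    return 1 if redness > 0.5 and roundness > 0.5 else 0

def predict(w, b, x) -> int:
    # Weighted inputs stand in for dendrites; the thresholded output for the axon.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train_perceptron(samples, labels, epochs=200, lr=0.1):
    # The perceptron learning rule: nudge weights whenever a prediction is wrong.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = y - predict(w, b, x)
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

samples = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.9), (0.9, 0.1), (0.1, 0.2)]
labels = [1, 1, 0, 0, 0]   # only red AND round counts as an apple here
w, b = train_perceptron(samples, labels)
print("learned weights:", w, "bias:", b)
print("training accuracy:",
      sum(predict(w, b, x) == y for x, y in zip(samples, labels)) / len(labels))
print("rule vs. learned on (0.85, 0.75):",
      rule_based(0.85, 0.75), predict(w, b, (0.85, 0.75)))
```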

The most recent forms of these network models are convolutional neural networks (CNNs) and transformers. In highly simplified terms, the primary difference between them is that CNNs are very good at distinguishing local features, while transformers perceive a more globalized picture.

Thus, transformers are a natural evolution from CNNs, recurrent neural networks (RNNs), and long short-term memory (LSTM) approaches, according to Gordon Cooper, product marketing manager for Synopsys' embedded vision processor family.

“You get more accuracy at the expense of more computations and parameters. More data movement, therefore more power,” said Cooper. “But there are cases where accuracy is the most important metric for a computer vision application. Pedestrian detection comes to mind. While some vision designs still will be well served with CNNs, some of our customers have determined they are moving completely to transformers. Ten years ago, some embedded vision applications that used DSPs moved to NNs, but there remains a need for both NNs and DSPs in a vision system. Developers still need a good handle on both technologies and are better served to find a vendor that can provide a combined solution.”
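To make the local-versus-global distinction concrete, here is a small sketch, assuming numpy is available: a 3x3 convolution mixes only a small neighborhood of pixels, while a single self-attention step lets every position weight every other position. The attention step uses trivial one-dimensional projections rather than the learned projection matrices real models apply.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))

# Convolution: each output value depends only on a 3x3 window of the input.
kernel = np.ones((3, 3)) / 9.0
conv_out = np.zeros((6, 6))
for i in range(6):
    for j in range(6):
        conv_out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)

# Self-attention: flatten to 64 "tokens"; each output mixes all 64 of them,
# so the receptive field is global from the very first layer. Real models
# use learned Q, K, V projection matrices; these are trivial placeholders.
tokens = image.reshape(64, 1)
q, k, v = tokens, tokens, tokens
scores = q @ k.T / np.sqrt(q.shape[1])                                # 64 x 64 similarities
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row-wise softmax
attn_out = weights @ v

print("conv output shape:", conv_out.shape, "- each value mixes 9 pixels")
print("attention weights shape:", weights.shape,
      "- each row mixes", weights.shape[1], "positions")
```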

CNN-based neural networks began supplanting traditional CV techniques for object detection and recognition.

“While first implemented using hardwired CNN accelerator hardware blocks, many of those CNN techniques then quickly migrated to programmable solutions on software-driven NPUs and GPNPUs,” said Aman Sikka, chief architect at Quadric.

Two parallel trends continue to reshape CV systems. “The first is that transformer networks for object detection and recognition, with greater accuracy and usability than their convolution-based predecessors, are beginning to leave the theoretical labs and enter production service in devices,” Sikka explained. “The second is that CV experts are reinventing the classical ISP functions with NN and transformer-based models that offer superior results. Thus, we’ve seen waves of ISP functionality migrating first from pure hardwired to C++ algorithmic form, and now into advanced ML network formats, with a modern design today in 2024 consisting of numerous machine-learning models working together.”

CV for inspection
While CV is well-known for its essential role in ADAS, another primary application is inspection. CV has helped detect everything from cancer tumors to manufacturing errors, or in the case of IBM’s productized research, critical flaws in the built environment. For example, a drone equipped with the IBM system could check if a bridge had cracks, a far safer and more precise way to perform visual inspection than having a human climb to dangerous heights.

By combining visual transformers with self-supervised learning, IBM vastly reduced the annotation requirement. In addition, the company has introduced a new process named “visual prompting,” where the AI can be taught to make the correct distinctions with limited supervision by using “in-context learning,” such as a scribble as a prompt. The end goal is a system that can respond to LLM-like prompts, such as “find all six-inch cracks.”

“Even if it makes mistakes and needs the help of human annotations, you’re doing far less labeling work than you would with traditional CNNs, where you’d have to do hundreds if not thousands of labels,” said Jayant Kalagnanam, director, AI applications at IBM Research.

Beware the humans
Ideally, domain-specific datasets should increase the accuracy of identification. They are often created by expanding on foundation models already trained on general datasets, such as ImageNet. Both types of datasets are subject to human and technical biases. Google’s infamous racial identification gaffes resulted from both technical issues and subsequent human overcorrections.

Meanwhile, IBM was working on infrastructure identification. The company's experience in getting its model to correctly identify cracks, including the problem of having too many images of one kind of defect, suggests a potential remedy for bias: allowing the inclusion of contradictory annotations.

“Everybody who is not a civil engineer can easily say what a crack is,” said Cristiano Malossi, IBM principal research scientist. “Surprisingly, when we discuss which crack has to be repaired with domain experts, the amount of disagreement is very high because they’re taking different considerations into account and, as a result, they come to different conclusions. For a model, this means if there’s ambiguity in the annotations, it may be because the annotations have been done by multiple people, which may actually have the advantage of introducing less bias.”

Fig. 1: IBM's self-supervised learning model. Source: IBM

Corner cases and other challenges to accuracy
The set of possible real-world images is effectively infinite, which in practical terms leaves most computer vision systems vulnerable to corner cases, potentially with fatal results, noted Alan Yuille, Bloomberg distinguished professor of cognitive science and computer science at Johns Hopkins University.

“So-called ‘corner cases’ are rare events that likely aren’t included in the dataset and may not even happen in everyday life,” said Yuille. “Unfortunately, all datasets have biases, and algorithms aren’t necessarily going to generalize to data that differs from the datasets they’re trained on. And one thing we have found with deep nets is if there is any bias in the dataset, the deep nets are wonderful at finding it and exploiting it.”

Thus, corner cases remain a problem to watch for. “A classic example is the idea of a baby in the road. If you’re training a car, you’re typically not going to have many examples of images with babies in the road, but you definitely want your car to stop if it sees a baby,” said Yuille. “If the companies are working in constrained domains, and they’re very careful about it, that’s not necessarily going to be a problem for them. But if the dataset is in any way biased, the algorithms may exploit the biases and corner cases, and may not be able to detect them, even if they may be of critical importance.”

This includes instances, such as real-world weather conditions, where an image may be partly occluded. “In academic cases, you could have algorithms that when evaluated on standard datasets like ImageNet are getting almost perfect results, but then you can give them an image which is occluded, for example, by a heavy rain,” he said. “In cases like that, the algorithms may fail to work, even if they work very well under normal weather conditions. A term for this is ‘out of domain.’ So you train in one domain and that may be cars in nice weather conditions, you test in out of domain, where there haven’t been many training images, and the algorithms would fail.”

The underlying reasons go back to the fundamental challenge of trying to replicate a human brain’s visual processing in a computer system.

“Objects are three-dimensional entities. Humans have this type of knowledge, and one reason for that is humans learn in a very different way than machine learning AI algorithms,” Yuille said. “Humans learn over a period of several years, where they don’t only see objects. They play with them, they touch them, they taste them, they throw them around.”

By contrast, current algorithms do not have that type of knowledge.

“They are trained as classifiers,” said Yuille. “They are trained to take images and output a class label — object one, object two, etc. They are not trained to estimate the 3D structure of objects. They have some sort of implicit knowledge of some aspects of 3D, but they don’t have it properly. That’s one reason why if you take some of those models, and you’ve contaminated the images in some way, the algorithms start degrading badly, because the vision community doesn’t have datasets of images with 3D ground truth. Only for humans, do we have datasets with 3D ground truth.”

Hardware implementation, challenges
The hardware side is becoming a bottleneck, as academics and industry work to resolve corner cases and create ever-more comprehensive and precise results. “The complexity of the operation behind the transformer is quadratic,” said Malossi. “As a result, they don't scale linearly with the size of the problem or the size of the model.”
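A back-of-the-envelope sketch of the quadratic term Malossi describes: the attention-score matrix for a sequence of N tokens has N x N entries, so doubling the input length roughly quadruples that part of the compute and memory. The head dimension below is a hypothetical value chosen only for illustration.

```python
# Quadratic scaling of the attention-score computation with sequence length.
def attention_score_cost(seq_len: int, head_dim: int = 64):
    score_entries = seq_len * seq_len               # size of the QK^T matrix
    score_flops = 2 * seq_len * seq_len * head_dim  # multiply-adds to form QK^T
    return score_entries, score_flops

for n in (1_000, 2_000, 4_000):
    entries, flops = attention_score_cost(n)
    print(f"seq_len={n:>5}: {entries:>12,} score entries, ~{flops:,} FLOPs for QK^T")
```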

While the situation might be improved with a more scalable iteration of transformers, for now progress has stalled as the industry looks for more powerful hardware, or any suitable hardware at all. “We're at a point right now where progress in AI is actually being limited by the supply of silicon, which is why there's so much demand, and tremendous growth in hardware companies delivering AI,” said Tony Chan Carusone, CTO of Alphawave Semi. “In the next year or two, you're going to see more supply of these chips come online, which will fuel rapid progress, because that's the only thing holding it back. The massive investments being made by hyperscalers are evidence of the backlogs in delivering silicon. People wouldn't be lining up to write big checks unless there were very specific projects they had ready to run as soon as they get the silicon.”

As more AI silicon is developed, designers should think holistically about CV, since visual fidelity depends not only on sophisticated algorithms, but also on image capture by a chain of co-optimized hardware and software, according to Pulin Desai, group director of product marketing and management for Tensilica vision, radar, lidar, and communication DSPs at Cadence. “When you capture an image, you have to look at the full optical path. You may start with a camera, but you'll likely also have radar and lidar, as well as different sensors. You have to ask questions like, ‘Do I have a good lens that can focus on the proper distance and capture the light? Can my sensor perform the ADC correctly? Will the light levels be accurate? Do I have enough dynamic range? Will noise cause the levels to shift?’ You have to have the right equipment and do a lot of pre-processing before you send what's been captured to the AI. Remember, as you design, don't think of it as a point solution. It's an end-to-end solution. Every different system requires a different level of full path, starting from the lens to the sensor to the processing to the AI.”

One of the more important automotive CV applications is passenger monitoring, which can help reduce the tragedies of parents forgetting children who are strapped into child seats. But such systems depend on sensors, which can be challenged by noise to the point of being ineffective.

“You have to build a sensor so small it goes into your rearview mirror,” said Jayson Bethurem, vice president of marketing and business development at Flex Logix. “Then the issue becomes the conditions of your car. The car can have the sun shining right in your face, saturating everything, to the complete opposite, where it’s completely dark and the only light in the car is emitting off your dashboard. For that sensor to have that much dynamic range and the level of detail that it needs to have, that’s where noise creeps in, because you can’t build a sensor of that much dynamic range to be perfect. On the edges, or when it’s really dark or oversaturated bright, it’s losing quality. And those are sometimes the most dangerous times.”

Breaking into the black box
Finally, yet another serious concern for computer vision systems is that their behavior is difficult to verify or test exhaustively. Transformers, especially, are a notorious black box.

“We need to have algorithms that are more interpretable so that we can understand what’s going on inside them,” Yuille added. “AI will not be satisfactory till we move to a situation where we evaluate algorithms by being able to find the failure mode. In academia, and I hope companies are more careful, we test them on random samples. But if those random samples are biased in some way — and often they are — they may discount situations like the baby in the road, which don’t happen often. To find those issues, you’ve got to let your worst enemy test your algorithm and find the images that break it.”

Related Reading
Dealing With AI/ML Uncertainty
How neural network-based AI systems perform under the hood is currently unknown, but the industry is finding ways to live with a black box.

The post Fundamental Issues In Computer Vision Still Unresolved appeared first on Semiconductor Engineering.

  • ✇IEEE Spectrum
  • Intel's Gaudi 3 Goes After Nvidia, by Samuel K. Moore

Intel’s Gaudi 3 Goes After Nvidia

9 April 2024 at 21:00


Although the race to power the massive ambitions of AI companies might seem like it's all about Nvidia, there is real competition going on in AI accelerator chips. The latest example: At Intel's Vision 2024 event this week in Phoenix, Ariz., the company gave the first architectural details of its third-generation AI accelerator, Gaudi 3.

With the predecessor chip, the company had touted how close to parity its performance was to Nvidia’s top chip of the time, H100, and claimed a superior ratio of price versus performance. With Gaudi 3, it’s pointing to large-language-model (LLM) performance where it can claim outright superiority. But, looming in the background is Nvidia’s next GPU, the Blackwell B200, expected to arrive later this year.

Gaudi Architecture Evolution

Gaudi 3 doubles down on its predecessor Gaudi 2’s architecture, literally in some cases. Instead of Gaudi 2’s single chip, Gaudi 3 is made up of two identical silicon dies joined by a high-bandwidth connection. Each has a central region of 48 megabytes of cache memory. Surrounding that are the chip’s AI workforce—four engines for matrix multiplication and 32 programmable units called tensor processor cores. All that is surrounded by connections to memory and capped with media processing and network infrastructure at one end.

Intel says that all that combines to produce double the AI compute of Gaudi 2 using 8-bit floating-point infrastructure that has emerged as key to training transformer models. It also provides a fourfold boost for computations using the BFloat 16 number format.
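For reference, and not as a statement about Gaudi 3's internal implementation, the bit budgets of the number formats named above are shown below; fewer bits per value mean more values per memory transfer and per multiply-accumulate.

```python
# General-reference bit layouts for common AI number formats (not Gaudi-specific).
formats = {
    "FP32": (1, 8, 23),        # sign, exponent, mantissa bits
    "BFloat16": (1, 8, 7),     # FP32's exponent range, reduced precision
    "FP8 (E4M3)": (1, 4, 3),   # common 8-bit variant favoring precision
    "FP8 (E5M2)": (1, 5, 2),   # common 8-bit variant favoring range
}
for name, (sign, exp, mantissa) in formats.items():
    total = sign + exp + mantissa
    print(f"{name:>11}: {total:>2} bits (sign={sign}, exponent={exp}, mantissa={mantissa})")
```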


Gaudi 3 LLM Performance

Intel projects a 40 percent faster training time for the GPT-3 175B large language model versus the H100 and even better results for the 7-billion and 8-billion parameter versions of Llama2.

For inferencing, the contest was much closer, according to Intel, with the new chip delivering 95 to 170 percent of the performance of the H100 for two versions of Llama, though for the Falcon 180B model, Gaudi 3 achieved as much as a fourfold advantage. Unsurprisingly, the advantage was smaller against the Nvidia H200—80 to 110 percent for Llama and 3.8x for Falcon.

Intel claims more dramatic results when measuring power efficiency, where it projects as much as 220 percent of the H100's value on Llama and 230 percent on Falcon.

“Our customers are telling us that what they find limiting is getting enough power to the data center,” says Intel’s Habana Labs chief operating officer Eitan Medina.

The energy-efficiency results were best when the LLMs were tasked with delivering a longer output. Medina puts that advantage down to the Gaudi architecture’s large-matrix math engines. These are 512 bits across. Other architectures use many smaller engines to perform the same calculation, but Gaudi’s supersize version “needs almost an order of magnitude less memory bandwidth to feed it,” he says.
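A rough, hypothetical model of why a wider matrix engine needs less bandwidth per unit of compute: for a T x T tile of a matrix multiply, the arithmetic grows with the cube of T while the operands moved grow with its square, so FLOPs per byte rise roughly linearly with T. The tile widths and data sizes below are illustrative and are not Gaudi internals.

```python
# Arithmetic intensity of a T x T x T matrix-multiply tile: compute scales as
# T^3 while data movement scales as T^2, so wider engines reuse operands more.
def tile_intensity(t: int, bytes_per_elem: int = 2) -> float:
    flops = 2 * t ** 3                          # multiply-accumulates in the tile
    bytes_moved = 3 * t ** 2 * bytes_per_elem   # load A and B tiles, store C
    return flops / bytes_moved

for t in (32, 64, 128, 256):   # hypothetical tile widths, not Gaudi internals
    print(f"tile {t:>3}: ~{tile_intensity(t):.0f} FLOPs per byte moved")
```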

Gaudi 3 Versus Blackwell

It's speculation to compare accelerators before they're in hand, but there are a couple of data points to compare, particularly in memory and memory bandwidth. Memory has always been important in AI, and as generative AI has taken hold and popular models reach the tens of billions of parameters in size, it's become even more critical.

Both make use of high-bandwidth memory (HBM), which is a stack of DRAM memory dies atop a control chip. In high-end accelerators, it sits inside the same package as the logic silicon, surrounding it on at least two sides. Chipmakers use advanced packaging, such as Intel’s EMIB silicon bridges or TSMC’s chip-on-wafer-on-silicon (CoWoS), to provide a high-bandwidth path between the logic and memory.

Gaudi 3 has more HBM than H100, but less than H200, B200, or AMD's MI300. Its memory bandwidth is also superior to H100's. Possibly of importance to Gaudi's price competitiveness, it uses the less expensive HBM2e versus the others' HBM3 or HBM3e, memory that is thought to account for a significant fraction of the tens of thousands of dollars the accelerators reportedly sell for.


One more point of comparison is that Gaudi 3 is made using TSMC’s N5 (sometimes called 5-nanometer) process technology. Intel has basically been a process node behind Nvidia for generations of Gaudi, so it’s been stuck comparing its latest chip to one that was at least one rung higher on the Moore’s Law ladder. With Gaudi 3, that part of the race is narrowing slightly. The new chip uses the same process as H100 and H200. What’s more, instead of moving to 3-nm technology, the coming competitor Blackwell is done on a process called N4P. TSMC describes N4P as being in the same 5-nm family as N5 but delivering an 11 percent performance boost, 22 percent better efficiency, and 6 percent higher density.

In terms of Moore’s Law, the big question is what technology the next generation of Gaudi, currently code-named Falcon Shores, will use. So far the product has relied on TSMC technology while Intel gets its foundry business up and running. But next year Intel will begin offering its 18A technology to foundry customers and will already be using 20A internally. These two nodes bring the next generation of transistor technology, nanosheets, with backside power delivery, a combination TSMC is not planning until 2026.
