AI and Moore’s Law: It’s the Chips, Stupid

By Robert X. Cringely, 15 June 2023, 16:20

Sorry I’ve been away: time flies when you are not having fun. But now I’m back.

Moore’s Law, which began with a random observation by the late Intel co-founder Gordon Moore that transistor densities on silicon substrates were doubling every 18 months, has over the intervening 60+ years been both borne out and transformed from a technical feature of lithography into an economic law. It’s getting harder to etch ever-thinner lines, so we’ve taken as a culture to emphasizing the cost part of Moore’s Law: chips drop in price by 50 percent on an area basis (dollars per acre of silicon) every 18 months. We can accomplish this economic effect through a variety of techniques including multiple cores, System-On-Chip design, and unified memory — anything to keep prices going down.
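To make that economic restatement concrete, here is a minimal sketch (my own illustration, with an arbitrary starting cost rather than any figure from this post) of how a 50-percent price drop every 18 months compounds:

```python
# Minimal sketch of the economic reading of Moore's Law: cost per unit area
# of silicon halves every 18 months. The starting cost is an arbitrary
# placeholder, not a number from the article.

def cost_per_area(initial_cost: float, months_elapsed: float, halving_months: float = 18.0) -> float:
    """Cost per 'acre' of silicon after a given number of months."""
    return initial_cost * 0.5 ** (months_elapsed / halving_months)

if __name__ == "__main__":
    start = 100.0  # arbitrary starting cost per unit area
    for years in (0, 1.5, 3, 6, 9):
        print(f"{years:>4} years: {cost_per_area(start, years * 12):8.2f}")
        # 0 -> 100.00, 1.5 -> 50.00, 3 -> 25.00, 6 -> 6.25, 9 -> 1.5625
```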

I predict that Generative Artificial Intelligence is going to go a long way toward keeping Moore’s Law in force and the way this is going to happen says a lot about the chip business, global economics, and Artificial Intelligence, itself.

Let’s take these points in reverse order. First, Generative AI products like ChatGPT are astoundingly expensive to build. GPT-4 reportedly cost $100+ million to build, mainly in cloud computing resources. Yes, this was primarily Microsoft paying itself and so maybe the economics are a bit suspect, but the actual calculations took tens of thousands of GPUs running for months and that can’t be denied. Nor can it be denied that building GPT-5 will cost even more.

Some people think this economic argument is wrong, that Large Language Models comparable to ChatGPT can be built using Open Source software for only a few hundred or a few thousand dollars. Yes and no.

Competitive-yet-inexpensive LLMs built at such low cost have nearly all started with Meta’s (Facebook’s) LLaMA (Large Language Model Meta AI), which has effectively become Open Source now that both the code and the associated parameter weights (a big deal in fine-tuning language models) have been released to the wild. It’s not clear how much of this Meta actually intended to do, but this genie is out of its bottle to great effect in the AI research community.

But GPT-5 will still cost $1+ billion and even ChatGPT, itself, is costing about $1 million per day just to run. That’s $300+ million per year to run old code.

So the current el cheapo AI research frenzy is likely to subside as LLaMA ages into obsolescence and has to be replaced by something more expensive, putting Google, Microsoft and OpenAI back in control.  Understand, too, that these big, established companies like the idea of LLMs costing so much to build because that makes it harder for startups to disrupt. It’s a form of restraint of trade, though not illegal.

But before then — and even after then in certain vertical markets — there is a lot to learn and a lot of business to be done using these smaller models, which can be used to build true professional language models, which GPT-4 and ChatGPT definitely are not.

GPT-4 and ChatGPT are general-purpose models — supposedly useful for pretty much anything. But that means that when you ask ChatGPT for legal advice, for example, you are asking it to imitate a lawyer. While ChatGPT may be able to pass the bar exam, so did my cousin Chad, who, I assure you, is an idiot.

If you are reading this I’ll bet you are smarter than your lawyer.

This means there is an opportunity for vertical LLMs trained on different data — real data from industries like medicine and auto mechanics. Whoever owns this data will own these markets.

What will make these models both better and cheaper is that they can be built from a LLaMA base, because most of that base data doesn’t need to change over time for the model to still fix your car, and the added Machine Learning won’t come from crap found on the Internet but from the service manuals actually used to train mechanics and fix cars.

We are approaching a time when LLMs won’t have to imitate mechanics and nurses because they will be trained like mechanics and nurses.
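As a rough sketch of what that kind of vertical training could look like in practice (a hedged example of my own: the base checkpoint, file path, and hyperparameters are placeholders, not details from this post), continued training of a LLaMA-style model on a pile of service-manual text might be wired up like this:

```python
# Hedged sketch: continue training an open LLaMA-style base model on domain
# text (e.g., service manuals). Checkpoint name, data path, and training
# settings are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "huggyllama/llama-7b"   # any LLaMA-derived checkpoint you are licensed to use

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Plain-text manuals, one passage per line (hypothetical file).
raw = load_dataset("text", data_files={"train": "service_manuals.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="vertical-llm",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the sketch is only that the expensive general-purpose pretraining is reused as-is, while the comparatively cheap added training comes from the curated domain corpus.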

Bloomberg has already done this for investment advice using its unique database of historical financial information.

With an average of 50 billion nodes, these vertical models will cost only five percent as much to run as OpenAI’s one trillion-node GPT-4.

But what does this have to do with semiconductors and Moore’s Law? Chip design is very similar to fixing cars in that there is a very limited amount of Machine Learning data required (think of logic cells as language words). It’s a small vocabulary (the auto repair section at the public library is just a few shelves of books). And EVEN BETTER THAN AUTO REPAIR, the semiconductor industry has well-developed simulation tools for testing logic before it is actually built.

So it ought to be pretty simple to apply AI to chip design, building custom chip-design models that iterate against existing simulators and refine new designs that actually have a pretty good chance of being novel.
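Here is a deliberately toy sketch of that generate-simulate-refine idea (entirely my own construction: a random mutator stands in for the generative model, and an exhaustive truth-table check stands in for the industry’s real simulation tools):

```python
# Toy generate-simulate-refine loop. A random mutator stands in for the
# generative model; exhaustive truth-table checking stands in for a real
# logic simulator. Both are stand-ins of my own, not from the post.
import itertools
import random

SPEC = lambda a, b, c: (a and b) or c          # target behaviour for a tiny circuit
GATES = ["and", "or", "nand", "nor", "xor"]

def simulate(g1: str, g2: str, a: int, b: int, c: int) -> int:
    """Evaluate the candidate two-gate circuit: out = g2(g1(a, b), c)."""
    ops = {
        "and":  lambda x, y: x & y,
        "or":   lambda x, y: x | y,
        "nand": lambda x, y: 1 - (x & y),
        "nor":  lambda x, y: 1 - (x | y),
        "xor":  lambda x, y: x ^ y,
    }
    return ops[g2](ops[g1](a, b), c)

def score(candidate) -> int:
    """Count how many input combinations the candidate gets right."""
    g1, g2 = candidate
    return sum(simulate(g1, g2, a, b, c) == int(SPEC(a, b, c))
               for a, b, c in itertools.product((0, 1), repeat=3))

best = (random.choice(GATES), random.choice(GATES))
for _ in range(200):                            # iterate against the "simulator"
    trial = (random.choice(GATES), random.choice(GATES))
    if score(trial) > score(best):
        best = trial

# Typically converges to ('and', 'or'), which matches the spec on all 8 cases.
print(f"best circuit: {best}, {score(best)}/8 input cases correct")
```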

And who will be the first to leverage this chip AI? China.

The USA is doing its best to freeze China out of semiconductor development, denying access to advanced manufacturing tools, for example. But China is arguably the world’s #2 country for AI research and can use that advantage to make up some of the difference.

Look for fabless AI chip startups to spring up around Chinese universities and for the Chinese Communist Party to put lots of money into this very cost-effective work. Because even if it’s used just to slim down and improve existing designs, that’s another generation of chips China might otherwise not have had at all.

The post AI and Moore’s Law: It’s the Chips, Stupid first appeared on I, Cringely.







Reddit sells training data to unnamed AI company ahead of IPO

By Benj Edwards, 19 February 2024, 22:10
(Photo illustration credit: Reddit)

On Friday, Bloomberg reported that Reddit has signed a contract allowing an unnamed AI company to train its models on the site's content, according to people familiar with the matter. The move comes as the social media platform nears the introduction of its initial public offering (IPO), which could happen as soon as next month.

Reddit initially revealed the deal, which is reported to be worth $60 million a year, earlier in 2024 to potential investors of an anticipated IPO, Bloomberg said. The Bloomberg source speculates that the contract could serve as a model for future agreements with other AI companies.

After an era where AI companies utilized AI training data without expressly seeking any rightsholder permission, some tech firms have more recently begun entering deals where some content used for training AI models similar to GPT-4 (which runs the paid version of ChatGPT) comes under license. In December, for example, OpenAI signed an agreement with German publisher Axel Springer (publisher of Politico and Business Insider) for access to its articles. Previously, OpenAI has struck deals with other organizations, including the Associated Press. Reportedly, OpenAI is also in licensing talks with CNN, Fox, and Time, among others.

