Apple’s Vision Pro headset is a hobby. Why won’t Tim Cook say that?

I’ve been following the press and social media coverage of Apple’s pricey new Vision Pro Augmented Reality headset, which now totals hundreds of stories and thousands of comments, and I’ve noticed one idea missing from all of them: what would Steve (Jobs) say? Steve would call the Vision Pro a “hobby,” just as he did with the original Apple TV.

You know I’m correct about this.

And the fact that Apple hasn’t gone for the H-word, and that no other writers are suggesting it, is the topic of this column, not the Vision Pro, itself.

It would appear that nobody at Apple has the balls to call the Vision Pro a hobby, which is to say it is not expected to make a profit for the time being, which is obviously the case. Instead, people like me speculate about how the Vision Pro will possibly make money. It won’t.

Nor does it have to.

There’s that scene in Citizen Kane where Kane the young tycoon is accused of losing $1 million per year on his newspaper and it’s remarked that he could only continue to do so for another 60 years.

Apple’s Vision Pro business is less than a rounding error on Cupertino’s balance sheet. Its success or failure doesn’t matter to Apple’s success, nor should it matter to Apple investors. I’m not saying there can’t be good reasons to sell Apple shares, but if you sold because of the Vision Pro you made a mistake.

Which is why I wish Apple had been honest and called it a hobby. Maybe they are hoping it isn’t a hobby, but that would be a mistake. The Vision Pro’s trajectory is clear to me. It will lose money for years until it finds a vertical market where the price doesn’t matter. Along the way two important effects will also have happened: 1) third-party developers will fall in love with the Vision Pro and make good applications for it, and 2) eventually Moore’s Law — and Moore’s Law alone — will drive down the Vision Pro’s price enough for some later version to be declared an overnight success.

Apple’s unstated strategy here is obvious. Just look at the company’s previous hobby — Apple TV — which eventually broke even and then begat Apple TV+, a completely separate and different business that needed such a hardware platform to succeed. Along the way Apple TV and the broad success of streaming video on actual televisions helped Apple as a whole to sell production computers and copies of Final Cut Pro, enabling the very different video market of today.

Apple TV was worth doing and so — probably — will be the Vision Pro. But if it isn’t successful that means nothing to Apple’s eventual legacy. So for the moment, it’s just something to write about.

But why did Apple choose not to call the Vision Pro a hobby? That decision was entirely Tim Cook’s, because only the CEO can designate a product to be a hobby. Someone has to take responsibility, and when it has even a minuscule effect on earnings, that someone is the CEO.

So why did Tim Cook decide against calling the Vision Pro a hobby? It’s not that Tim didn’t know the truth. It’s that Tim Cook isn’t Steve Jobs.

This is me saying that Tim Cook didn’t have the balls to call the Vision Pro a hobby, while at the same time explaining that the decision was meant, in a way, as a compliment to Steve, who remains the company’s visionary, even in death.

That’s touching, Tim, but it’s time for that attitude to change at Apple or the next iPod/iMac/iPad/iPhone will never come.








AI and Moore’s Law: It’s the Chips, Stupid

Sorry I’ve been away: time flies when you are not having fun. But now I’m back.

Moore’s Law began with an observation by the late Intel co-founder Gordon Moore that transistor densities on silicon substrates were doubling every 18 months. Over the intervening 60+ years it has been borne out, yet it has also changed from a technical feature of lithography into an economic law. It’s getting harder to etch ever-thinner lines, so we have taken as a culture to emphasizing the cost half of Moore’s Law: chips drop in price by 50 percent, on an area basis (dollars per acre of silicon), every 18 months. We accomplish this economic effect through a variety of techniques including multiple cores, System-on-Chip design, and unified memory — anything to keep prices going down.

I predict that Generative Artificial Intelligence is going to go a long way toward keeping Moore’s Law in force and the way this is going to happen says a lot about the chip business, global economics, and Artificial Intelligence, itself.

Let’s take these points in reverse order. First, Generative AI products like ChatGPT are astoundingly expensive to build. GPT-4 reportedly cost $100+ million to build, mainly in cloud computing resources. Yes, this was primarily Microsoft paying itself, so maybe the economics are a bit suspect, but the actual calculations took tens of thousands of GPUs running for months, and that can’t be denied. Nor can it be denied that building GPT-5 will cost even more.

Some people think this economic argument is wrong, that Large Language Models comparable to ChatGPT can be built using Open Source software for only a few hundred or a few thousand dollars. Yes and no.

Competitive-yet-inexpensive LLMs built at such low cost have nearly all started with Meta’s (Facebook’s) LLaMA (Large Language Model Meta AI), which has effectively become Open Source now that both the code and the associated parameter weights — a big deal in fine-tuning language models — have been released to the wild. It’s not clear how much of this Meta actually intended to do, but this genie is out of its bottle, to great effect in the AI research community.

But GPT-5 will still cost $1+ billion and even ChatGPT, itself, is costing about $1 million per day just to run. That’s $300+ million per year to run old code.

So the current el cheapo AI research frenzy is likely to subside as LLaMA ages into obsolescence and has to be replaced by something more expensive, putting Google, Microsoft and OpenAI back in control.  Understand, too, that these big, established companies like the idea of LLMs costing so much to build because that makes it harder for startups to disrupt. It’s a form of restraint of trade, though not illegal.

But before then — and even after then in certain vertical markets — there is a lot to learn and a lot of business to be done using these smaller models, which can be used to build truly professional language models, something GPT-4 and ChatGPT definitely are not.

GPT-4 and ChatGPT are general purpose models — supposedly useful for pretty much anything. But that means that when you are asking ChatGPT for legal advice, for example, you are asking it to imitate a lawyer. While ChatGPT may be able to pass the bar exam, so did my cousin Chad, who, I assure you, is an idiot.

If you are reading this I’ll bet you are smarter than your lawyer.

This means there is an opportunity for vertical LLMs trained on different data — real data from industries like medicine and auto mechanics. Whoever owns this data will own these markets.

What will make these models both better and cheaper is that they can be built from a LLaMA base, because most of that data doesn’t have to change over time for the model to still fix your car, and the added Machine Learning won’t come from crap found on the Internet but from the service manuals actually used to train mechanics and fix cars.
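That recipe is concrete enough to sketch. Assuming the Hugging Face transformers, peft, and datasets libraries — with the base model name and the service_manuals.txt file as placeholders I made up, not anything named in the column — a vertical model starts with an open LLaMA-style base plus a cheap LoRA adapter trained on the domain text:

```python
# A sketch, not a recipe from the column: LoRA fine-tuning an open
# LLaMA-style base on domain text (think service manuals).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "openlm-research/open_llama_3b"   # placeholder base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token            # LLaMA tokenizers ship without one
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains a few million adapter weights instead of billions of base
# weights, which is why domain fine-tuning is cheap relative to pretraining.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

manuals = load_dataset("text", data_files="service_manuals.txt")["train"]
manuals = manuals.map(lambda row: tok(row["text"], truncation=True,
                                      max_length=512))

Trainer(model=model,
        args=TrainingArguments(output_dir="vertical-llm",
                               per_device_train_batch_size=4,
                               num_train_epochs=1),
        train_dataset=manuals,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False)
        ).train()
```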

We are approaching a time when LLMs won’t have to imitate mechanics and nurses because they will be trained like mechanics and nurses.

Bloomberg has already done this for investment advice using its unique database of historical financial information.

With an average of 50 billion nodes, these vertical models will cost only five percent as much to run as OpenAI’s one-trillion-node GPT-4.

But what does this have to do with semiconductors and Moore’s Law? Chip design is very similar to fixing cars in that there is a very limited amount of Machine Learning data required (think of logic cells as language words). It’s a small vocabulary (the auto repair section at the public library is just a few shelves of books). And EVEN BETTER THAN AUTO REPAIR, the semiconductor industry has well-developed simulation tools for testing logic before it is actually built.

So it ought to be pretty simple to apply AI to chip design, building custom chip-design models that iterate against existing simulators to refine new designs that actually have a pretty good chance of being novel.
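The loop described here is generate-then-verify. Below is a deliberately tiny, runnable stand-in for it: random search plays the part of the design model and an exhaustive truth-table check plays the part of the logic simulator. A real system would swap in a trained model and an actual HDL simulator; everything here is invented for illustration.

```python
# Toy stand-in for AI-assisted chip design: a proposer suggests small logic
# circuits and a "simulator" verifies them against a spec, no silicon needed.
import itertools
import random

GATES = {
    "and":  lambda a, b: a & b,
    "or":   lambda a, b: a | b,
    "xor":  lambda a, b: a ^ b,
    "nand": lambda a, b: 1 - (a & b),
}

def spec(a, b, cin):
    """Target behavior: the sum bit of a full adder."""
    return a ^ b ^ cin

def propose():
    """Stand-in for the design model: guess a two-gate circuit."""
    return random.choice(list(GATES)), random.choice(list(GATES))

def simulate(circuit):
    """Stand-in for the logic simulator: exhaustively check all inputs."""
    g1, g2 = circuit
    return all(GATES[g2](GATES[g1](a, b), cin) == spec(a, b, cin)
               for a, b, cin in itertools.product((0, 1), repeat=3))

for attempt in itertools.count(1):
    candidate = propose()
    if simulate(candidate):   # cheap verification before anything is built
        print(f"attempt {attempt}: {candidate[0]} then {candidate[1]} meets spec")
        break
```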

And who will be the first to leverage this chip AI? China.

The USA is doing its best to freeze China out of semiconductor development, denying access to advanced manufacturing tools, for example. But China is arguably the world’s #2 country for AI research and can use that advantage to make up some of the difference.

Look for fabless AI chip startups to spring up around Chinese universities and for the Chinese Communist Party to put lots of money into this very cost-effective work. Because even if it’s used just to slim down and improve existing designs, that’s another generation of chips China might otherwise not have had at all.








If you want to reduce ChatGPT mediocrity, do it promptly

My son Cole, pictured here as a goofy kid many years ago, is now six feet six inches tall and in college. Cole needed a letter of recommendation recently so he turned to an old family friend who, in turn, used ChatGPT to generate the letter, which he thought was remarkably good. As a guy who pretends to write for a living, I read it differently. ChatGPT’s letter was facile but empty, the type of letter you would write for someone you’d never met. It said almost nothing about Cole other than that he’s a good kid. Artificial Intelligence is good for certain things, but blind letters of reference aren’t among them.

The key problem here has to do with Machine Learning. ChatGPT’s language model is nuanced, but contains no data at all specific to either my friend the lazy reference writer or my son the reference needer. Even if ChatGPT were allowed access to my old friend’s email boxes, it would only learn about his style and almost nothing about Cole, with whom he’s communicated, I think, twice.

If you think ChatGPT is the answer to some unmet personal need, it probably isn’t unless mediocrity is good enough or you are willing to share lots of private data — an option that I don’t think ChatGPT yet provides.

Then yesterday I learned a lesson from super-lawyer Neal Katyal, who tweeted that he asked ChatGPT to write a specific 1000-word essay “in the style of Neal Katyal.” The result, he explained, was an essay that was largely wrong on the facts but read like he had written it.

What I learned from this was that there is a valuable business in writing prompts for Large Language Models like ChatGPT (many more are coming). I was stunned that it only required adding the words “in the style of Bob Cringely” to clone me. Until then I thought personalizing LLMs cost thousands, maybe millions (ChatGPT reportedly cost $2.25 million to train).

So where Google long ago trained us how to write queries, these Large Language Models will soon train us to write prompts to achieve our AI goals. In these cases we’re asking ChatGPT or Google’s Bard or Baidu’s Ernie or whatever LLM to temporarily forget about something, but that’s unlikely to give the LLMs better overall judgment.

Part of the problem with prompt engineering is that it is completely at the spell-casting, magical-incantation phase: no one really understands the underlying general principles behind what makes a good prompt for getting a given kind of answer. Work here is very preliminary and will probably vary greatly from LLM to LLM.

A logical solution to this problem might be to write a prompt that excludes unwanted information like racism while simultaneously including local data from your PC (called fine-tuning in the LLM biz), which would require API calls that to my knowledge haven’t yet been published. But once they are published, just imagine the new tools that could be created.
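Until such APIs exist, the closest approximation is stuffing both the exclusions and the local data into the prompt itself. Here is a sketch using the OpenAI chat API of the period (the legacy pre-1.0 openai Python library, which reads OPENAI_API_KEY from the environment); the notes file and the prompt wording are hypothetical:

```python
# A sketch of the workaround: no published fine-tuning hooks, so the
# "exclusions" go in the system message and the "local data" goes in the
# prompt. File name and prompt text are invented for illustration.
import openai  # legacy (pre-1.0) API

local_notes = open("cole_recommendation_notes.txt").read()

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message plays the exclusion role: constrain style/content.
        {"role": "system",
         "content": "Write in the style of Bob Cringely. Use only facts from "
                    "the notes provided; do not invent biography."},
        # The user message carries the local data the model otherwise lacks.
        {"role": "user",
         "content": f"Notes:\n{local_notes}\n\n"
                    "Draft a 300-word letter of recommendation."},
    ],
)
print(response.choices[0].message.content)
```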

I believe there is a big opportunity to apply Artificial Intelligence to teaching, for example. While this also means applying AI to education in general, my desired path is through teachers, who I see as having been failed by educational IT, which makes their jobs harder, not easier.  No wonder teachers hate IT.

The application of Information Technology to primary and secondary education has mainly involved scheduling and records. The master class schedule is in a computer. Grades are in another. And graduation requirements are handled by a database that spans the two, integrating attendance. Whether this is one vendor or up to four, the idea is generally to give the principal and school board daily snapshots of where everything stands. In this model the only place for teachers is data entry.

These systems require MORE teacher work, not less. And that leads to resentment and disappointment all around. It’s garbage-in, garbage-out as IT systems try to impose daily metrics on activities that were traditionally measured in weeks. I as a parent get mad when the system says my kid is failing when in fact it means someone forgot to upload grades or even forgot to grade the work at all.

If report cards come out every six weeks, it would be nice to know halfway through that my kid was struggling, but the current systems we have been exposed to don’t do that. All they do is advertise in excruciating and useless detail that the system, itself, isn’t working right.

How could IT actually help teachers?

Look at Snorkel AI in Redwood City, CA for example. They are developing super-low-cost Machine Learning tools for Enterprise, not education, mainly because for education they can’t identify a customer.

I think the customer here is the teacher. This may sound odd, but understand that teachers aren’t well-served by IT to this point because they aren’t viewed as customers. They have no clout in the system. I chose to use the word clout rather than power or money because it better characterizes the teacher’s position as someone essential to the process but also both a source of thrust and drag.

I envision a new system where teachers can run their paperwork (both cellulose-based and electronic) through an AI that automatically stores and classifies everything while also taking a first hack at grading. The AI comes to reflect mainly the values and methods of the individual teacher, which is new, and might keep more of them from quitting.
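As a toy illustration of that last point, the sketch below grades new work by comparing it to essays the same teacher has already graded, so the first hack reflects that teacher’s own standards rather than some vendor’s rubric. It assumes scikit-learn, and the graded examples are invented:

```python
# Toy "first hack at grading": score a new essay by its similarity to
# essays this particular teacher has already graded. Data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsRegressor

graded = {  # the teacher's past grading decisions (fabricated examples)
    "clear thesis, strong evidence, sources cited throughout": 95,
    "some structure but thin evidence and no citations": 78,
    "off topic, unsupported claims, no discernible thesis": 55,
}

vec = TfidfVectorizer().fit(graded)          # iterating a dict yields its keys
model = KNeighborsRegressor(n_neighbors=1).fit(
    vec.transform(graded), list(graded.values()))

new_essay = "solid thesis with strong evidence and sources cited"
suggested = model.predict(vec.transform([new_essay]))[0]
print(f"suggested first-pass grade: {suggested:.0f}")  # the teacher decides
```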

Next column: AI and Moore’s Law.








What about the layoffs at Meta and Twitter? Elon is crazy! WTF???

I first arrived in Silicon Valley in 1977 — 45 years ago. I was 24 years old and had accepted a Stanford fellowship paying $2,575 for the academic year. My on-campus apartment rent was $175 per month and a year later I’d buy my first Palo Alto house for $57,000 (sold 21 years later for $990,000). It was an exciting time to be living and working in Silicon Valley. And it still is. We’re right now in a period of economic confusion and reflection when many of the loudest voices have little to no sense of history. Well, my old brain is crammed with history and I’m here to tell you that the current situation — despite the news coverage — is no big deal. This, too, shall pass.

But what about the layoffs at Meta and Twitter? Elon is crazy! WTF???

On February 25, 1981, Apple Computer CEO Mike Scott fired 40 percent of the company’s engineering staff at a time when sales were doubling month-over-month and the company had no budgets because there was no way they could spend money fast enough to need budgets. Scott, who left Apple, himself, two months later, said he fired all those engineers and support staff because he feared four year-old Apple was becoming “complacent.” People were gone by the end of the day, when Scott held a companywide beer bust.

Cataclysmic change is par for the course in both startup culture and high tech. If there is going to be a next wave the previous wave has to die. Above is a chart I found from 2015 that shows the Silicon Valley economy starting in 1976. If we were to update this chart there would be a more recent boom, post social media, that I would label Artificial Intelligence, not to be confused with the late-1980s Artificial Intelligence bust that we’ve all forgotten about.

That original AI debacle is significant because it was caused by over-enthusiasm. The idea of AI made perfect sense in 1987 — the exact same sense it makes today — but nobody really understood how much computing power would be required to make those dreams come true. If AI was impractical in 1987 but is practical today thanks to Moore’s Law, how bad was our aim, exactly?

Our aim was pathetic and fortunes were lost on that pathos.

Let’s do the math. The original AI funding boom began in the late 1980s. Implicit in the VC model at the time was that it would take no more than two Moore’s Law cycles to get from initiating the wave to launching real products. If VCs were funding companies in 1987, they expected big things from one or more of those startups by 1990. Moore’s Law said the cost of computing drops by 50 percent every 18 months, which implies that VCs in 1987, and the founders pitching to them, thought AI would be technically practical by 1990, at which point a basic unit of computing power that cost one 1987 dollar would cost 25 cents.

IF AI is indeed economically practical today (some people still aren’t convinced that it is), mid-2021 marked 23 complete Moore’s Law cycles, meaning the computing power that cost $1 in 1987 had been reduced to about $0.00000012.

Venture capitalists who bet several hundred million 1987 dollars that AI would have some chance of being economically practical at $0.25 were wrong by a factor of about two million.
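That factor is worth checking, since it is the whole argument. Here is the arithmetic, treating Moore’s Law purely as an 18-month halving of computing cost:

```python
# Sanity-check of the column's Moore's Law math.
years = 2021.5 - 1987.0        # late-1980s boom to mid-2021
cycles = int(years / 1.5)      # 18-month halvings -> 23 complete cycles
cost_today = 1.00 * 0.5 ** cycles   # what $1.00 of 1987 computing costs now
expected_at = 0.25             # VCs priced in two cycles: $1.00 -> $0.25

print(f"{cycles} cycles: $1 of 1987 computing now costs ${cost_today:.8f}")
print(f"the 1987 bet was off by a factor of {expected_at / cost_today:,.0f}")
```

Run it and you get 23 cycles, about $0.00000012, and a miss of roughly 2.1 million X.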

It’s easy to look back, make these calculations, and feel smug, but that’s not even close to my point. My point is that the very VCs who lost all that money are generally zillionaires today. They kept betting on what was, for the most part, a growing tech economy.

The most important part of being a successful venture capitalist in the last 40 years has been maintaining some dry powder for future investments and staying in the game.

I could easily argue that AI in 1987 looks very similar to the metaverse in 2021. Meta (formerly Facebook) is losing $10 billion per year betting on its metaverse strategy. Recent layoffs suggest that Meta CEO Mark Zuckerberg is reevaluating his expected timeline for success.

How long can Zuckerberg afford to continue dumping billions into metaverse development? Given that Meta’s corporate structure gives Zuckerberg personal voting control of the company, that question comes down to how long Meta will have enough excess cashflow to cover the costs. IF Meta is cutting its burn rate in half with these layoffs (a good argument, I think) Zuckerberg can continue spending at this rate… forever. This assumes Meta continues to make lots of money with current products, but it also identifies Zuck as probably the only person in the history of tech who could make this bet pay off IF the metaverse actually becomes the next big thing.

It will be interesting to see what happens with Meta. Zuck might just run out of energy or — more likely — some competing next big thing may come along to distract him. I’m not sure it really matters much.

What does matter is that in high tech change is the norm, flux is nearly constant, and what we are seeing in the current weakness is probably change that should have happened years ago but for all the cheap money.

Silicon Valley relies on startups for ideas and growth. Startups require cheap office space and engineers looking for work. Boom and bust is not a bad thing for Silicon Valley; it’s how Silicon Valley evolves.

This too shall pass.








Paul Graham’s Legacy

Last week there was a press release you might easily have missed. A Decentralized Autonomous Organization (DAO) called OrangeDAO is cooperating with a small seed venture fund called Press Start Capital to establish the OrangeDAO X Press Start Cap Fellowship Program for new Web3 entrepreneurs. Successful applicants get $25,000 each plus 10 weeks of structured mentorship plus continued access to the more than 1200-member OrangeDAO network. In exchange, OrangeDAO and Press Start get to invest in the resulting companies, if any, produced by the class.

Big deal, it’s Y Combinator Junior, right?

Wrong. It’s Y Combinator on steroids.  

This second-generation YC has been released in the wild where it will replicate and grow unconstrained. Expect to see more deals like this one.

A Decentralized Autonomous Organization is a financial partnership that leverages blockchain technology to help multiple users make decisions as a single entity. There are many DAOs around and hardly anybody understands them or knows what they are good for. Mainly they have seemed to be involved in the NFT market. But OrangeDAO is different. It has 1200+ members and every one of those members is a graduate of the Y Combinator startup accelerator. They are verified Y Combinator company founders, so they’ve all had similar entrepreneurial experiences and see business much the same way as a result. OrangeDAO seems to have big plans, and to make those plans happen the DAO, itself, raised $80 million in venture capital in August, with the first use of that capital being these Fellowships.
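For readers who have never looked inside one, here is a toy sketch of the mechanism a DAO automates: token-weighted voting that commits pooled funds as a single entity. Real DAOs run this logic in on-chain smart contracts; the members and weights below are invented, with the treasury and fellowship figures borrowed from the column:

```python
# Toy DAO: members vote with token weight and the treasury acts as one entity.
from dataclasses import dataclass, field

@dataclass
class DAO:
    treasury: float
    tokens: dict                       # member -> voting weight
    votes: dict = field(default_factory=dict)

    def vote(self, member, approve):
        self.votes[member] = approve

    def execute(self, cost):
        """Spend from the pooled treasury only if weighted yes beats no."""
        yes = sum(w for m, w in self.tokens.items() if self.votes.get(m) is True)
        no = sum(w for m, w in self.tokens.items() if self.votes.get(m) is False)
        if yes > no and cost <= self.treasury:
            self.treasury -= cost      # the DAO invests as a single entity
            return True
        return False

orange = DAO(treasury=80e6, tokens={"alice": 10, "bob": 5, "carol": 3})
orange.vote("alice", True)
orange.vote("bob", False)              # carol abstains
print(orange.execute(25_000))          # fund one $25,000 fellowship -> True
```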

I think this will forever change venture capital and the world economy.

It represents a new stage in the evolution of venture capital. In many senses it is the democratization of VC.

It’s no surprise that OrangeDAO comes from Y Combinator alumni. YC, itself, disrupted the VC model and this Fellowship continues that disruption.

It’s turning what was a disruption into an ecosystem.

Think about the VC model. The original Silicon Valley VC wasn’t even from Silicon Valley — it was Sherman Fairchild of New York’s Fairchild Camera and Instrument, whose money backed the eight Shockley Semiconductor defectors who founded Fairchild Semiconductor in 1957.

This was the Tycoon-as-VC model, which was soon replaced by the Professional VC model where dumb institutional money was invested by VCs (generally lawyers or former CFOs) who didn’t really understand what they were investing in. But there were enough opportunities that they could “spray and pray” and succeed on the simple odds. Tim Draper’s grandpa and Arthur Rock typified this generation. Few people realize that Rock invested only $75K in Apple… ever.

Eventually there rose in Silicon Valley a technocracy with a new class of VCs who DID more or less understand their investments. Don Valentine and Tom Perkins led this charge and ultimately hired associates and partners who looked just like them, which describes every person today working on Sand Hill Road.

Typical of this glory age of VC were dumb institutional investors, technical or semi-technical professional VCs, and an emerging class of entrepreneurs who needed progressively LESS money as technical markets blossomed and third-party services became available.

At this point there emerged the YC/Techstars, Angel investing, and eventually crowdfunding models. YC brought with it two revolutionary ideas: 1) you didn’t have to have a VC friend to get a chance to pitch your idea, and 2) there was a VC role for educating entrepreneurs.

Prior to YC (and to this day in most places) VCs like to keep their entrepreneurs ignorant, so they can be more easily controlled. YC worked to subvert that control.

Angel investing was something parallel to YC — experienced (generally self-taught — the hard way) entrepreneurs playing VC together over dinner for smaller deals. Remember, the Band of Angels had Gordon Moore at those dinners. But total deal sizes were limited because it wasn’t professional — not full-time work for anyone.

Crowdfunding was also parallel and totally wacky because it was truly democratic: nobody knows anything. In crowdfunding EVERYONE is stupid. Neither the investors, managers, nor entrepreneurs know what they are doing, which is why crowdfunding hasn’t been a big success to date.

I had an Indian friend who worked at Intel  and lived in Roseville in a neighborhood filled with Indian engineers who worked at Intel and lived in Roseville. The average first-generation Indian engineer in California keeps in his/her checking account $100,000 “just in case.” My friend used to argue that he could walk around the block with a good pitch deck and get seed funding for his next venture by the time he made it back to his own doorstep. It was a brilliant observation.

These OrangeDAO Fellowships are like that Indian neighborhood in Roseville. The DAO members all have similar backgrounds, similar values, and similar risk tolerances. THERE ARE MORE OF THEM, so they can do bigger deals. And — here’s the important bit — THEY ARE ALL YC-EDUCATED and connected globally through the blockchain.  They not only know many of the same things, they have a sense of where this knowledge comes from and why it is useful. That’s Paul Graham’s legacy at YC.

But this is a second-through-Nth-generation movement, at the very center of which is not just education, but FURTHER education — the very concept that education, itself, is a legacy to be nurtured and extended. Think land-grant American universities of the 19th century. In the YC-based DAO we have people who want the next generation of entrepreneurs to be even better-educated. It’s not some egalitarian goal, either: they see it as key to success for the whole thing.

Smart people with good ideas will self-identify, be funded at a subsistence level to allow them to develop those ideas and prove their worth, then they can participate on a truly level playing field for the first time. 

YC and Techstars and their copycat cousins did this too, BUT NEVER AT SCALE.

Gone is the Tycoon, gone is the professional VC who doesn’t understand his tech, gone soon will be the angels (subsumed into the DAO model), and gone for the most part are the asshole VCs whom entrepreneurs grow to hate (not all of them, but a lot).

Done correctly, this model is essentially Meritocratic VC. If the idea is good, the market is ready, and the people know what they are doing, the capital will be there. Everything has the prospect of being better under this evolved system, or at least that’s the way I see it. And it all comes down to the centrality of education combined with scale. 

Now here is the $100 TRILLION question: can this Middle Class VC model be exported to Topeka and Timbuktu?

I think it can be.

The problem with all the Silicon Glens and Silicon Prairies and Silicon Forests and Silicon Gulags that failed repeatedly over the last 30 years is that they were copying the wrong parts of the successful model. They didn’t have the people and the institutional knowledge of Silicon Valley and Sand Hill Road, but this OrangeDAO DOES.

It’s a containerized copy of successful Silicon Valley culture that carries with it all dependencies, even money.

 







