
Was There A Trojan Horse Hidden In Section 230 All Along That Could Enable Adversarial Interoperability?

May 2, 2024 at 18:23

There’s a fascinating new lawsuit against Meta that includes a surprisingly novel interpretation of Section 230. If the court buys it, this interpretation could make the open web a lot more open, while chipping away at the centralized control of the biggest tech companies. And, yes, that could mean that the law (Section 230) that is wrongly called “a gift to big tech” might be a tool that undermines the dominance of some of those companies. But the lawsuit could be tripped up for any number of reasons, including a potentially consequential typo in the law that has been ignored for years.

Buckle in, this is a bit of a wild ride.

You would think with how much attention has been paid to Section 230 over the last few years (there’s an entire excellent book about it!), and how short the law is, that there would be little happening with the existing law that would take me by surprise. But the new Zuckerman v. Meta case filed on behalf of Ethan Zuckerman by the Knight First Amendment Institute has got my attention.

It’s presenting a fairly novel argument about a part of Section 230 that almost never comes up in lawsuits, but could create an interesting opportunity to enable all kinds of adversarial interoperability and middleware to do interesting (and hopefully useful) things that the big platforms have been using legal threats to shut down.

If the argument works, it may reveal a surprising and fascinating trojan horse for a more open internet, hidden in Section 230 for the past 28 years without anyone noticing.

Of course, it could also have much wider ramifications that a bunch of folks need to start thinking through. This is the kind of thing that happens when someone discovers something new in a law that no one really noticed before.

But there’s also a very good chance this lawsuit flops for a variety of other reasons without ever really exploring the nature of this possible trojan horse. There are a wide variety of possible outcomes here.

But first, some background.

For years, we’ve talked about the importance of tools and systems that give end users more control over their own experiences online, rather than leaving it entirely up to the centralized website owners. This has come up in a variety of different contexts in different ways, from “Protocols, not Platforms” to “adversarial interoperability,” to “magic APIs” to “middleware.” These are not all exactly the same thing, but they’re all directionally strongly related, and conceivably could work well together in interesting ways.

But there are always questions about how to get there, and what might stand in the way. One of the biggest obstacles over the last decade or so has been interpretations of various laws that effectively allow social media companies to threaten and/or bring lawsuits against companies trying to provide these kinds of additional services. This can take the form of a DMCA 1201 claim for “circumventing” a technological block. Or, more commonly, it has taken the form of a civil CFAA (Computer Fraud & Abuse Act) claim.

The most representative example of where this goes wrong is when Facebook sued Power Ventures years ago. Power was trying to build a unified dashboard across multiple social media properties. Users could provide Power with their own logins to social media sites. This would allow Power to log in to retrieve and post data, so that someone could interact with their Facebook community without having to personally go into Facebook.
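To make the architecture concrete, here is a minimal, purely hypothetical sketch of what a Power-style unified dashboard could look like. None of the interfaces below correspond to a real Facebook or Power Ventures API; the point is simply that the service acts on the user’s behalf, with the user’s own credentials, which is exactly the access Facebook argued was “unauthorized.”

```typescript
// Hypothetical sketch only: these interfaces are invented for illustration
// and do not reflect any real social network's API.
interface Post {
  network: string;
  author: string;
  body: string;
  postedAt: Date;
}

interface SocialNetworkClient {
  // The user supplies their own credentials for each network they connect.
  login(username: string, password: string): Promise<void>;
  fetchFeed(): Promise<Post[]>;
  publish(body: string): Promise<void>;
}

// The dashboard fans out to every connected network and merges the results
// into a single chronological view, so the user never has to visit each
// site individually.
class UnifiedDashboard {
  constructor(private readonly clients: SocialNetworkClient[]) {}

  async combinedFeed(): Promise<Post[]> {
    const feeds = await Promise.all(this.clients.map((c) => c.fetchFeed()));
    return feeds.flat().sort((a, b) => b.postedAt.getTime() - a.postedAt.getTime());
  }

  // Cross-post one message to every connected network at once.
  async broadcast(body: string): Promise<void> {
    await Promise.all(this.clients.map((c) => c.publish(body)));
  }
}
```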

This was a potentially powerful tool in limiting Facebook’s ability to become a walled-off garden with too much power. And Facebook realized that too. That’s why it sued Power, claiming that it violated the CFAA’s prohibition on “unauthorized access.”

The CFAA was designed (poorly and vaguely) as an “anti-hacking” law. And you can see where “unauthorized access” could happen as a result of hacking. But Facebook (and others) have claimed that “unauthorized access” can also be “because we don’t want you to do that with your own login.”

And the courts have agreed with Facebook’s interpretation, with a few limitations (that don’t make that big of a difference).

I still believe that this ability to use the law to block interoperability/middleware has been a major (perhaps the biggest) reason “big tech” is so big. They’re able to use these laws to block out the kinds of companies who would make the market more competitive and pull down some of the walls of their walled gardens.

That brings us to this lawsuit.

Ethan Zuckerman has spent years trying to make the internet a better, more open space (partially, I think, in penance for creating the world’s first pop-up internet ad). He’s been doing some amazing work on reimagining the digital public infrastructure, which I keep meaning to write about, but never quite find the time to get to.

According to the lawsuit, he wants to build a tool called “Unfollow Everything 2.0.” The tool is based on a similar tool, also called Unfollow Everything, that was built by Louis Barclay a few years ago and did what it says on the tin: let you automatically unfollow everything on Facebook. Facebook sent Barclay a legal threat letter and banned him for life from the site.

Zuckerman wants to recreate the tool with some added features enabling users to opt-in to provide some data to researchers about the impact of not following anyone on social media. But he’s concerned that he’d face legal threats from Meta, given what happened with Barclay.

Using Unfollow Everything 2.0, Professor Zuckerman plans to conduct an academic research study of how turning off the newsfeed affects users’ Facebook experience. The study is opt-in—users may use the tool without participating in the study. Those who choose to participate will donate limited and anonymized data about their Facebook usage. The purpose of the study is to generate insights into the impact of the newsfeed on user behavior and well-being: for example, how does accessing Facebook without the newsfeed change users’ experience? Do users experience Facebook as less “addictive”? Do they spend less time on the platform? Do they encounter a greater variety of other users on the platform? Answering these questions will help Professor Zuckerman, his team, and the public better understand user behavior online and the influence that platform design has on that behavior

The tool and study are nearly ready to launch. But Professor Zuckerman has not launched them because of the near certainty that Meta will pursue legal action against him for doing so.
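For a sense of what such a tool does mechanically, here is a deliberately simplified, hypothetical sketch of a browser script in the spirit of Unfollow Everything. The selector, the research endpoint, and the data shape are all invented for illustration; the actual Unfollow Everything 2.0 implementation is not public here, and the real study’s data handling is described only at the level quoted above.

```typescript
// Hypothetical userscript-style sketch. The DOM selector and the study
// endpoint below are invented; they are not Facebook's real markup or the
// real research infrastructure.
const UNFOLLOW_BUTTON_SELECTOR = '[data-action="unfollow"]';

async function unfollowEverything(): Promise<number> {
  let unfollowed = 0;
  const buttons = Array.from(
    document.querySelectorAll<HTMLButtonElement>(UNFOLLOW_BUTTON_SELECTOR),
  );
  for (const button of buttons) {
    button.click(); // turn off one followed friend, group, or page
    unfollowed += 1;
    // Pause briefly so the page's own scripts can keep up.
    await new Promise((resolve) => setTimeout(resolve, 500));
  }
  return unfollowed;
}

// Opt-in research donation: only an aggregate, anonymized count leaves the
// browser, posted to a placeholder endpoint shown purely for shape.
async function donateAnonymizedStats(unfollowedCount: number): Promise<void> {
  await fetch("https://example.org/study/donate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ unfollowedCount, donatedAt: Date.now() }),
  });
}
```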

So he’s suing for declaratory judgment that he’s not violating any laws. If he were just suing for declaratory judgment over the CFAA, that would (maybe?) be somewhat understandable or conventional. But, while that argument is in the lawsuit, the main claim in the case is something very, very different. It’s using a part of Section 230, section (c)(2)(B), that almost never gets mentioned, let alone tested.

Most Section 230 lawsuits involve (c)(1): the famed “26 words” that state “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Some Section 230 cases involve (c)(2)(A), which states that “No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Many people incorrectly think that Section 230 cases turn on this part of the law, when really, most of those cases are already cut off by (c)(1) because they try to treat a service as a speaker or publisher.

But then there’s (c)(2)(B), which says:

No provider or user of an interactive computer service shall be held liable on account of any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)

As noted, this basically never comes up in cases. But the argument being made here is that this creates some sort of proactive immunity from lawsuits for middleware creators who are building tools (“technical means”) to “restrict access.” In short: does Section 230 protect “Unfollow Everything” from basically any legal threats from Meta, because it’s building a tool to restrict access to content on Meta platforms?

Or, according to the lawsuit:

This provision would immunize Professor Zuckerman from civil liability for designing, releasing, and operating Unfollow Everything 2.0

First, in operating Unfollow Everything 2.0, Professor Zuckerman would qualify as a “provider . . . of an interactive computer service.” The CDA defines the term “interactive computer service” to include, among other things, an “access software provider that provides or enables computer access by multiple users to a computer server,” id. § 230(f)(2), and it defines the term “access software provider” to include providers of software and tools used to “filter, screen, allow, or disallow content.” Professor Zuckerman would qualify as an “access software provider” because Unfollow Everything 2.0 enables the filtering of Facebook content—namely, posts that would otherwise appear in the feed on a user’s homepage. And he would “provide[] or enable[] computer access by multiple users to a computer server” by allowing users who download Unfollow Everything 2.0 to automatically unfollow and re-follow friends, groups, and pages; by allowing users who opt into the research study to voluntarily donate certain data for research purposes; and by offering online updates to the tool.

Second, Unfollow Everything 2.0 would enable Facebook users who download it to restrict access to material they (and Zuckerman) find “objectionable.” Id. § 230(c)(2)(A). The purpose of the tool is to allow users who find the newsfeed objectionable, or who find the specific sequencing of posts within their newsfeed objectionable, to effectively turn off the feed.

I’ve been talking to a pretty long list of lawyers about this and I’m somewhat amazed at how this seems to have taken everyone by surprise. Normally, when new lawsuits come out, I’ll gut check my take on it with a few lawyers and they’ll all agree with each other whether I’m heading in the right direction or the totally wrong direction. But here… the reactions were all over the map, and not in any discernible pattern. More than one person I spoke to started by suggesting that this was a totally crazy legal theory, only to later come back and say “well, maybe it actually makes some sense.”

It could be a trojan horse that no one noticed in Section 230 that effectively bars websites from taking legal action against middleware providers who are providing technical means for people to filter or screen content on their feed. Now, it’s important to note that it does not bar those companies from putting in place technical measures to block such tools, or just banning accounts or whatever. But that’s very different from threatening or filing civil suits.

If this theory works, it could do a lot to enable these kinds of middleware services and make it significantly harder for big social media companies like Meta to stop them. If you believe in adversarial interoperability, that could be a very big deal. Like, “shift the future of the internet we all use” kind of big.

Now, there are many hurdles before we get to that point. And there are some concerns that if this legal theory succeeds, it could also lead to other problematic results (though I’m less convinced by those).

Let’s start with the legal concerns.

First, as noted, this is a very novel and untested legal theory. Upon reading the case initially, my first reaction was that it felt like one of those slightly wacky academic law journal articles you see law professors write sometimes, with some far-out theory they have that no one’s ever really thought about. This one is in the form of a lawsuit, so at some point we’ll find out how the theory works.

But that alone might make a judge unwilling to go down this path.

Then there are some more practical concerns. Is there even standing here? ¯\_(ツ)_/¯ Zuckerman hasn’t released his tool. Meta hasn’t threatened him. He makes a credible claim that given Meta’s past actions, they’re likely to react unfavorably, but is that enough to get standing?

Then there’s the question of whether or not you can even make use of 230 in an affirmative way like this. 230 is used as a defense to get cases thrown out, not proactively for declaratory judgment.

Also, this is not my area of expertise by any stretch of the imagination, but I remember hearing in the past that outside of IP law, courts (and especially courts in the 9th Circuit) absolutely disfavor lawsuits for declaratory judgment (i.e., a lawsuit before there’s any controversy, where you ask the court “hey, can you just check and make sure I’m on the right side of the law here…”). So I could totally see the judge saying “sorry, this is not a proper use of our time” and tossing it. In fact, that might be the most likely result.

Then there’s this kinda funny but possibly consequential issue: there’s a typo in Section 230 that almost everyone has ignored for years. Because it’s never really mattered. Except it matters in this case. Jeff Kosseff, the author of the book on Section 230, always likes to highlight that in (c)(2)(B), it says that the immunity is for using “the technical means to restrict access to material described in paragraph (1).”

But they don’t mean “paragraph (1).” They mean “paragraph (A).” Paragraph (1) is the “26 words” and does not describe any material, so it would make no sense to say “material described in paragraph (1).” It almost certainly means “paragraph (A),” which is the “good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” section. That’s the one that describes material.

I know that, at times, Jeff has joked that when people ask him how 230 should be reformed, he suggests they just fix the typo. But Congress has never listened.

And now it might matter?

The lawsuit basically pretends that the typo isn’t there. It simply reads in the language from “paragraph (A)” where the law says “paragraph (1).”

I don’t know how that gets handled. Perhaps it gets ignored like every time Jeff points out the typo? Perhaps it becomes consequential? Who knows!

There are a few other oddities here, but this article is getting long enough and has mostly covered the important points. However, I will conclude on one other point that one of the people I spoke to raised. As discussed above, Meta has spent most of the past dozen or so years going legally ballistic about anyone trying to scrape or data mine its properties in any way.

Yet, earlier this year, it somewhat surprisingly bailed out of a case in which it had sued Bright Data for scraping/data mining. Lawyer Kieran McCarthy (who follows data scraping lawsuits like no one else) speculated that Meta’s surprising about-face may be because it suddenly realized that, for all of its AI efforts, it’s been scraping everyone else. And maybe someone high up at Meta suddenly realized how it was going to look in court when it got sued over all that AI training scraping, if the plaintiffs pointed out that, at the very same time, it was suing others for scraping its own properties.

I suspect the decision not to appeal might be more about a shift in philosophy at Meta (and perhaps some of the other big platforms) than about its confidence in its ability to win this case. Today, perhaps more important to Meta than keeping others off its public data is having access to everyone else’s public data. Meta is concerned that its perceived hypocrisy on these issues might just work against it. Just last month, Meta had its success in prior scraping cases thrown back in its face in a trespass to chattels case. Perhaps it was worried here that success on appeal might do it more harm than good.

In short, I think Meta now cares more about access to large volumes of data for AI than it does about outsiders scraping its own public data. My hunch is that it knows that any success in anti-scraping cases can be thrown back at it in its own attempts to build AI training databases and LLMs. And it cares more about the latter than the former.

I’ve separately spoken to a few experts who were worried about the consequences if Zuckerman succeeded here. They were worried that it might simultaneously immunize potential bad actors. Specifically, you could see a kind of Cambridge Analytica or Clearview AI situation, where companies trying to get access to data for malign purposes convince people to install their middleware app. This could lead to a massive expropriation of data, and possibly some very sketchy services as a result.

But I’m less worried about that, mainly because it’s the sketchy eventuality of how that data gets used that would still (hopefully?) violate certain laws, not the access to the data itself. Still, the questions being raised about how this type of more proactive immunity might end up immunizing bad actors are at least worth thinking about.

Either way, this is going to be a case worth following.


Commercial Chiplet Ecosystem May Be A Decade Away

February 29, 2024 at 09:08

Experts at the Table: Semiconductor Engineering sat down to talk about the challenges of establishing a commercial chiplet ecosystem with Frank Schirrmeister, vice president solutions and business development at Arteris; Mayank Bhatnagar, product marketing director in the Silicon Solutions Group at Cadence; Paul Karazuba, vice president of marketing at Expedera; Stephen Slater, EDA product management/integrating manager at Keysight; Kevin Rinebold, account technology manager for advanced packaging solutions at Siemens EDA; and Mick Posner, vice president of product management for high-performance computing IP solutions at Synopsys. What follows are excerpts of that discussion.


L-R: Arteris’ Schirrmeister, Cadence’s Bhatnagar, Expedera’s Karazuba, Keysight’s Slater, Siemens EDA’s Rinebold, and Synopsys’ Posner.

SE: There’s a lot of buzz and activity around every aspect of chiplets today. What is your impression of where the commercial chiplet ecosystem stands today?

Schirrmeister: There’s a lot of interest today in an open chiplet ecosystem, but we are probably still quite a bit away from true openness. The proprietary versions of chiplets are alive and kicking out there. We see them in designs. We vendors are all supporting those to make them a reality, like the UCIe proponents, but it will take some time to get to a fully open ecosystem. It’s probably at least three to five years before we get to a PCI Express-type exchange environment.

Bhatnagar: The commercial chiplet ecosystem is at a very early stage. Many companies are providing chiplets, are designing them, and they’re shipping products — but they’re still single-vendor products, where the same company is designing all the pieces. I hope that with the advancements the UCIe standard is making, and with more standardization, we eventually can get to a marketplace-like environment for chiplets. We are not there.

Karazuba: The commercialization of homogeneous chiplets is pretty well understood by groups like AMD. But for the commercialization of heterogeneous chiplets, meaning chiplets from multiple suppliers, there are still a lot of open questions.

Slater: We participate in a lot of the board discussions, and attend industry events like TSMC’s OIP, and there’s a lot of excitement out there at the moment. I see a lot of even midsize and small customers starting to think about their development plans for what chiplets should be. I do think those that are going to be successful first will be those that are within a singular foundry ecosystem like TSMC’s. Today if you’re selecting your IP, you’ve got a variety of ways to pick and choose which IP, see what’s been taped out before, and how successful it’s been, so you have a way to manage your risk and your costs as you’re putting things together. What we’ll see in the future is that now you have a choice: are you purchasing IP, or are you purchasing chiplets? Crucially, it’s all coming from the same foundry and put together in the same manner. The technical considerations of things like UCIe standard packaging versus advanced packaging, and the analysis tool sets for high-speed simulation, as well as for things like thermal, are going to become that much more important.

Rinebold: I’ve been doing this about 30 years, so I can date back to some of the very earliest days of multi-chip modules and such. When we talk about the ecosystem, there are plenty of examples out there today where we see HBM and logic getting combined at the interposer level. This works if you believe HBM is a chiplet, and that’s a whole other argument. Some would argue that HBM falls into that category. The idea of a true LEGO, snap-together mix and match of chiplets continues to be aspirational for the mainstream market, but there are some business impediments that need to get addressed. Again, there are exceptions in some of the single-vendor solutions, where it’s more or less homogeneous integration, or an entirely vertically integrated type of environment where single vendors are integrating their own chiplets into some pretty impressive packages.

Posner: Aspirational is the word we use for an open ecosystem. I’m going to be a little bit more of a downer by saying I believe it’s 5 to 10 years out. Is it possible? Absolutely. But the biggest issue we see at the moment is a huge knowledge gap in what that really means. And as technology companies become more educated on really what that means, we’ll find that there will be some acceleration in adoption. But within what we call ‘captive’ — within a single company or a micro-ecosystem — we’re seeing multi-die systems pick up.

SE: Is it possible to define the pieces we have today from a technology point of view, to make a commercial chiplet ecosystem a reality?

Rinebold: What’s encouraging is the development of standards. There’s some adoption. We’ve already mentioned UCIe for some of the die-to-die protocols. Organizations like JEDEC announced the extension of their JEP30 PartModel format into the chiplet ecosystem to incorporate chiplet-style data. Think about this as an electronic data sheet. A lot of this work has been incorporated into the CDX working group under Open Compute. That’s encouraging. There were some comments a little bit earlier about having an open marketplace. I would agree we’re probably 3 to 10 years away from that coming to fruition. The underlying framework and infrastructure is there, but a lot of the licensing and distribution issues have to get resolved before you see any type of broad adoption.

Posner: The infrastructure is available. The EDA tools to create, to package, to analyze, to simulate, to manufacture — those tools are all there. The intellectual property that sits around it, either UCIe or some of the more traditional die-to-die interfaces, all of that’s there. What’s not established are full methodology and flows that lead to interoperability. Everything within captive is possible, but a broader ecosystem, a marketplace, is going to require silicon interoperability, simulation, packaging, all of that. That’s the area that we believe is missing — and still building.

Schirrmeister: Do we know what’s required? We probably can define that reasonably well. If the vision is an open ecosystem with IP on chiplets that you can just plug together like LEGO blocks, then the IP industry informs us of what’s required, and then there are some gaps on top of that. I hear people from the hard-coded IP world talking about the equivalent of PDKs for chiplets, but today’s IP ecosystem and the IP deliverables are informing us it doesn’t work like LEGO blocks yet. We are improving every year. But this whole ‘I take my whiteboard and then everything just magically functions together’ is not what we have today. We need to think really hard about what the additional challenges are when you disaggregate that into chiplets and protocols. Then you get big systemic issues to deal with, like how do you deal with coherency across chiplets? It was challenging enough to get it done on a chip. Now you potentially have to deal with other partnerships you don’t even own. This is not a captive environment in an open ecosystem. That makes it very challenging, and it creates job security for at least 5 to 10 years.

Bhatnagar: On the technical side, what’s going well is adoption. We can see big companies like Intel, and then of course, IP providers like us and Synopsys. Everybody’s working toward standardizing chiplet integration, and that is working very well. EDA tools are also coming up to support that. But we are still very far from a marketplace because there are many issues that are not sorted out, like licensing and a few other things that need a bit more time.

Slater: The standards bodies and networking groups have excited a lot of people, and we’re getting a broad set of customers that are coming along. And one point I was thinking about: is this only for very high-end compute? From the companies that I see presenting in those types of forums, it’s even companies working in automotive or aerospace/defense, planning out their future for the next 10 years or more. In the automotive case, it was a company that was thinking about creating chiplets for internal consumption — so maybe reorganizing how they look at creating many different variations or evolutions of their products, trying to do it as more modular chiplet types of blocks. ‘If we take the microprocessor part of it, would we sell that as a chiplet externally for other customers to integrate into a bigger design?’ For me, the aha moment was seeing how broad the application would be. I do think that the standards work has been moving very fast, and that’s worked really well. For instance, at Keysight EDA, we just released a chiplet PHY designer. It’s a simulation for the high-speed digital link for UCIe, and that only comes about by having a standard that’s published, so an EDA company can take a look at it and realize what they need to do with it. The EDA tools are ready to handle these kinds of things. And maybe the last point is that in order to share the IP, and to ensure that it’s available, database and process management is going to become all the more important. You need to keep track of which chip is made on which process, and be able to make it available inside the company to other potential users of it.

SE: What’s in place today from a business perspective, and what still needs to be worked out?

Karazuba: From a business perspective, speaking strictly of heterogeneous chiplets, I don’t think there’s anything really in place. Let me qualify that by asking, ‘Who is responsible for warranty? Who is responsible for testing? Who is responsible for faults? Who is responsible for supply chain?’ With homogeneous chiplets or monolithic silicon, that’s understood, because that’s the way this industry has been doing business since its inception. But when you talk about chiplets that are coming from multiple suppliers, with multiple IPs — and perhaps different interfaces, made in multiple fabs, then constructed by a third party, put together by a third party, tested by a fourth party, and then shipped — what happens when something goes wrong? Who do you point the finger at? Who do you go to and talk to? If a particular chiplet isn’t functioning as intended, it’s not necessarily that chiplet that’s bad. It may be the interface on another chiplet, or on a hub, whatever it might be. We’re going to get there, but right now that’s not understood. It’s not understood who is going to be responsible for things such as that. Is it the multi-chip module manufacturer, or is it the person buying it? I fear a return to the Wintel issue, where the chipmaker points to the OS maker, which points at the hardware maker, which points at the chipmaker. Understanding of the commercial side is a real barrier to chiplets being adopted. Granted, the technical is much more difficult than the commercial, but I have no doubt the engineers will get there quicker than the business people.

Rinebold: I completely agree. What are the repercussions, warranty-related issues, things like that? I’d also go one step further. If you look at some of the larger silicon foundries right now, there is some concern about taking third-party wafers into their facilities to integrate in some type of heterogeneous, chiplet-type package. There are a lot of business and logistical issues that have to get addressed first. The technical stuff will happen quickly. It’s just a lot of these licensing- and distribution-type issues that need to get resolved. The other thing I want to back up to involves customers in the defense/industrial space. The trust and traceability and the provenance tracking of IP is going to be key for them, because they have so much expectation of multi-die or chiplet-type packaging as an alternative to monolithic scaling. Just look at all the government programs out there right now, with RESHAPE [Reshore Ecosystem for Secure Heterogeneous Advanced Packaging Electronics] and NGMM [Next-Generation Microelectronics Manufacturing] and such. They’re all in on this chiplet perspective, but they’re going to require a lot of security measures to understand who has touched the IP, where it comes from, and how you verify that.

Posner: Micro-ecosystems are forming because of all these challenges. If you naively think you can just go pick a die off the shelf and put it into your device, how do you warranty that? Who owns it? These micro-ecosystems are building up to fundamentally sort that out. So within a couple of different companies, be it automotive or high-performance compute, they’ll come to terms that are acceptable across all of them. And it’s these micro-ecosystems that are really going to end up driving open chiplets, and I think it’s going to be an organic type of growth. Chiplets are available for a specific application today, but we have this vision that someone else could use it, and we see that with the multiple modes being built into the dies. One mode is, ‘I’m connecting to myself. It’s a very tight, low-latency link.’ But I have this vision in the future that I’m going to need to have an interface or protocol that is more open and uses standard available stacks, and which can be bought off the shelf and integrated. That’s one side of the logistics. I want to mention two more things. It is possible to do interoperability across nodes. We demonstrated our TSMC N3 UCIe with Intel’s in-house UCIe, all put together on an Intel process. This was two separate companies working together, showing the first physical interoperability, so it’s possible. But going forward that’s still just a small part of the overall effort. In the IP space we’ve lived with an IP model of ‘build once, sell many.’ With the chiplet marketplace, unless there is a revenue stream from that chiplet, it will break that model. Companies think, ‘I only have to buy the IP once, and then I’m selling my silicon.’ But the infrastructure and the resources that are required to build all of this do not go away. There has to be money at the end of that tunnel for all of these different companies to keep investing.

Schirrmeister: Mick is 100% right, but we may have a definition issue here with what we really mean by an ‘open’ chiplet ecosystem. I have two distinct conversations when I talk to partners and customers. On the one hand, you have board designers who are doing more and more integration, and they look at you with a wrinkled forehead and say, ‘We’ve been doing this for years. What are you talking about?’ It may not have been 3D-IC in the classic sense of all items, but they say, ‘Yeah, there are issues with warranties, and the user figures it out.’ The board people arrive from one side of the equation at chiplets because that’s the next evolution of integration. You need to be very efficient. That’s not what we call an open ecosystem of chiplets here. The idea is that you have this marketplace to mix things up, and you have the economies of scale by selling the same chiplet to multiple people. That’s really what the chip designers are thinking about, and some of them think even further because if you do it all in true 3D-IC fashion, then you actually have to co-design those chiplets in a way, and that’s a whole other dimension that needs to be sorted out. To pick a little bit on the big companies that have board and chip design groups in house, you see this even within the messaging of these companies. You have people who come from the board side, and for them it’s not a solved problem. It always has been challenging, but they’re going to take it to the next level. The chip guys are looking at this from a perspective of one interface, like PCI Express, now being UCIe. And then I think about this because the networks on chip need to become super NoCs across chiplets, which poses its own challenges. And that all needs to work together. But those are really chiplets designed for the purpose of being in a chiplet ecosystem. And to that end, Mick’s estimation of longer than five years is probably correct because those purpose-built chiplets, for the purpose of being in an open ecosystem, have all these challenges the board guys have already been dealing with for quite some time. They’re now ‘just getting smaller’ in the amount of integration they do.

Slater: When you put all these chiplets together and start to do that integration, in what order do you start placing the components down? You don’t want to throw away one very expensive chiplet because there was an issue with one of the smaller, cheaper ones. So there are now a lot of thoughts about how to do almost like unit tests on individual chiplets first, but then you want to do some form of system test as you go along. That’s definitely something we need to think about. On the business front, who is going to be most interested in purchasing a chiplet-style solution? It comes down to whether you have a yield problem. If your chips are getting to the size where you have yield concerns, then it definitely makes sense to think about using chiplets and breaking the design up into smaller pieces. Not everything scales, so why move to the lowest process node when you could purchase something at a different process node that has less risk and costs less to manufacture, and then put it all together? The ones that have been successful so far — the big companies like Intel, AMD — were already driven to that edge. The chips got to a size that couldn’t fit on the reticle. We think about how many companies fit into that category, and that will factor into whether or not the cost and risk is worth it for them.

Bhatnagar: From a business perspective, what is really important is standardization. What’s inside the chiplet is fine, but how it impacts other chiplets around it is important. We would like to be able to make something and sell many copies of it. But if there is no standardization, then either we are taking a gamble by going for one thing and assuming everybody moves to it, or we make multiple versions of the same thing, and that adds extra cost. To really justify a business case for any chiplet, or any sort of IP with the chiplet, standardization is key for the electrical interconnect, packaging, and all other aspects of a system.
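To ground a couple of the threads above (Rinebold’s JEP30-style electronic data sheet and Slater’s point about tracking which chiplet is built on which process), here is a hypothetical sketch of what a machine-readable chiplet descriptor and a minimal in-house catalog could look like. The field names are invented for illustration and are not the actual JEDEC JEP30 or CDX schema.

```typescript
// Hypothetical chiplet "electronic data sheet" and catalog. Field names are
// illustrative only; this is not the JEDEC JEP30 PartModel or CDX schema.
interface ChipletDescriptor {
  partNumber: string;
  vendor: string;
  processNode: string;            // e.g. "TSMC N3", needed for interop planning
  dieToDieInterfaces: string[];   // e.g. ["UCIe"], or a proprietary link
  packagingOptions: string[];     // e.g. ["2.5D interposer", "3D stacked"]
  maxPowerWatts: number;
  provenance: {                   // trust and traceability for defense/industrial users
    fab: string;
    assemblyHouse?: string;
    testHouse?: string;
  };
}

// A minimal internal catalog: register chiplets so other teams can discover
// and reuse them, and filter by the die-to-die interface they support.
const catalog = new Map<string, ChipletDescriptor>();

function register(descriptor: ChipletDescriptor): void {
  catalog.set(descriptor.partNumber, descriptor);
}

function findByInterface(iface: string): ChipletDescriptor[] {
  return [...catalog.values()].filter((d) => d.dieToDieInterfaces.includes(iface));
}
```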

Fig. 1: A chiplet design. Source: Cadence.

Related Reading
Chiplets: 2023 (EBook)
What chiplets are, what they are being used for today, and what they will be used for in the future.
Proprietary Vs. Commercial Chiplets
Who wins, who loses, and where are the big challenges for multi-vendor heterogeneous integration.

