The Rise of Groupware

24 July 2024, 17:00


A version of this post originally appeared on Tedium, Ernie Smith’s newsletter, which hunts for the end of the long tail.

These days, computer users take collaboration software for granted. Google Docs, Microsoft Teams, Slack, Salesforce, and so on, are such a big part of many people’s daily lives that they hardly notice them. But they are the outgrowth of years of hard work done before the Internet became a thing, when there was a thorny problem: How could people collaborate effectively when everyone’s using a stand-alone personal computer?

The answer was groupware, an early term for collaboration software designed to work across multiple computers attached to a network. At first, those computers were located in the same office, but the range of operation slowly expanded from there, forming the highly collaborative networked world of today. This post will trace some of this history, from the early ideas formed at Stanford Research Institute by the team of famed computer pioneer Douglas Engelbart, to a smaller company, Lotus, that hit the market with its groupware program, Notes, at the right time, to Microsoft’s ill-fated attempt to enter the groupware market, including never-before-seen footage of Bill Gates on Broadway.

[Image: A black-and-white photo of an old IBM PC on a desk next to computer manuals.] In the early days of the computing era, when IBM’s PC reigned supreme, collaboration was difficult. Ross Anthony Willis/Fairfax Media/Getty Images

How the PC made us forget about collaboration for a while

Imagine that it’s the early-to-mid-1980s and that you run a large company. You’ve invested a lot of money into personal computers, which your employees are now using—IBM PCs, Apple Macintoshes, clones, and the like. There’s just one problem: You have a bunch of computers, but they don’t talk to one another.

If you’re in a small office and need to share a file, it’s no big deal: You can just hand a floppy disk off to someone on the other side of the room. But what if you’re part of an enterprise company and the person you need to collaborate with is on the other side of the country? Passing your colleague a disk doesn’t work.

The new personal-computing technologies clearly needed to do more to foster collaboration. They needed to be able to take input from a large group of people inside an office, to allow files to be shared and distributed, and to let multiple users tweak and mash information with everyone being able to sign off on the final version.

The hardware that would enable such collaboration software, or “groupware” as it tended to be called early on, varied by era. In the 1960s and ’70s, it was usually a mainframe-to-terminal setup, rather than something using PCs. Later, in the 1980s, it was either a token ring or Ethernet network, which were competing local-networking technologies. But regardless of the hardware used for networking, the software for collaboration needed to be developed.

[Image: Black-and-white photo of a man talking from behind a desk.] Stanford Research Institute engineer Douglas Engelbart is sometimes called “the father of groupware.” Getty Images

Some of the basic ideas behind groupware were first forged at the Stanford Research Institute by a Douglas Engelbart–led team, in the 1960s, working on what they called an oN-Line System (NLS). An early version of NLS was presented in 1968 during what became known as the “Mother of All Demos.” It was essentially a coming-out party for many computing innovations that would eventually become commonplace. If you have 90 minutes and want to see something 20-plus years ahead of its time, watch this video.

In the years that followed, on top of well-known innovations like the mouse, Engelbart’s team developed tools that anticipated groupware, including an “information center,” an early precursor of the server in a client-server architecture, and a way of tracking edits made to text files by different people, an early precursor of version control.

By the late 1980s, at a point when the PC had begun to dominate the workplace, Engelbart was less focused on what had been gained than on what had been lost in the process. He wrote (with Harvey Lehtman) in Byte magazine in 1988:

The emergence of the personal computer as a major presence in the 1970s and 1980s led to tremendous increases in personal productivity and creativity. It also caused setbacks in the development of tools aimed at increasing organizational effectiveness—tools developed on the older time-sharing systems.
To some extent, the personal computer was a reaction to the overloaded and frustrating time-sharing systems of the day. In emphasizing the power of the individual, the personal computer revolution turned its back on those tools that led to the empowering of both co-located and distributed work groups collaborating simultaneously and over time on common knowledge work.
The introduction of local- and wide-area networks into the personal computer environment and the development of mail systems are leading toward some of the directions explored on the earlier systems. However, some of the experiences of those earlier pioneering systems should be considered anew in evolving newer collaborative environments.


Groupware comes of age

Groupware finally started to catch on in the late 1980s, with tech companies putting considerable resources into developing collaboration software—perhaps taken with the idea of “orchestrating work teams,” as an Infoworld piece characterized the challenge in 1988. The San Francisco Examiner reported, for example, that General Motors had invested in the technology and was beginning to require its suppliers to accept purchase orders electronically.

Focusing on collaboration software was a great way for independent software companies to stand out, this being an area that large companies—Microsoft in particular—had basically ignored. Today, Microsoft is the 800-pound gorilla of collaboration software, thanks to its combination of Teams and Office 365. But it took the tech giant a very long while to get there: Microsoft started taking the market seriously only around 1992.

One company in particular was well-positioned to take advantage of the opening that existed in the 1980s. That was the Lotus Development Corporation, a Cambridge, Mass.–based software company that made its name with its Lotus 1-2-3 spreadsheet program for IBM PCs.

Lotus did not invent groupware or coin the word—on top of Engelbart’s formative work at SRI, the term had been around for years before Lotus Notes came on the scene. But it was the company that brought collaboration software to everyone’s attention.

[Image: On the left, a black-and-white photo of a man in a field talking; on the right, a box with disks.] Ray Ozzie [left] was primarily responsible for the development of Lotus Notes, the first popular groupware solution. Left: Ann E. Yow-Dyson/Getty Images; Right: James Keyser/Getty Images

The person most associated with the development of Notes was Ray Ozzie, who was recruited to Lotus after spending time working on VisiCalc, an early spreadsheet program. Ozzie essentially built out what became Notes while working at Iris Associates, a direct offshoot of Lotus that Ozzie founded to develop the Notes application. After some years of development in stealth mode, the product was released in 1989.

Ozzie explained his inspiration for Notes to Jessica Livingston, who described this history in her book, Founders At Work:

In Notes, it was (and this is hard to imagine because it was a different time) the concept that we’d all be using computers on our desktops, and therefore we might want to use them as communication tools. This was a time when PCs were just emerging as spreadsheet tools and word processing replacements, still available only on a subset of desks, and definitely no networks. It was ’82 when I wrote the specs for it. It had been based on a system called PLATO [Programmed Logic for Automatic Teaching Operations] that I’d been exposed to at college, which was a large-scale interactive system that people did learning and interactive gaming on, and things like that. It gave us a little bit of a peek at the future—what it would be like if we all had access to interactive systems and technology.

Building an application based on PLATO turned out to be the right idea at the right time, and it gave Lotus an edge in the market. Notes included email, a calendaring and scheduling tool, an address book, a shared database, and programming capabilities, all in a single front-end application.

[Video: Lotus Notes on Computer Chronicles, fall 1989]

As an all-in-one platform built for scale, Notes gained a strong reputation as an early example of what today would be called a business-transformation tool, one that managed many elements of collaboration. It was complicated from an IT standpoint and required a significant investment to maintain. Perhaps what was most groundbreaking about Notes is that it helped turn PCs into something that large companies could readily use.

As Fortune noted in 1994, Lotus had a massive lead in the groupware space, in part because the software worked essentially the same anywhere in a company’s network. We take that for granted now, but back then it was considered magical:

Like Lotus 1-2-3, Notes is easy to customize. A sales organization, for instance, might use it to set up an electronic bulletin board that lets people pool information about prospective clients. If some of the info is confidential, it can be restricted so not everyone can call it up.
Notes makes such homegrown applications and the data they contain accessible throughout an organization. The electronic bulletin board you consult in Singapore is identical to the one your counterparts see in Sioux City, Iowa. The key to this universality is a procedure called replication, by which Notes copies information from computer to computer throughout the network. You might say Ozzie figured out how to make the machines telepathic—each knows what the others are thinking.

This article reported that around 4,000 major companies had purchased Notes, including Chase Manhattan, Compaq Computer, Delta Air Lines, Fluor, General Motors, Harley-Davidson, Hewlett-Packard, IBM, Johnson & Johnson, J.P. Morgan, Nynex, Sybase, and 3M. While it wasn’t dominant in the way Windows was, its momentum was hard to ignore.

A 1996 commercial for Notes highlighted its use by FedEx. Other commercials would use the stand-up comedian Denis Leary or be highly conceptual. Rarely, if ever, would these television advertisements show the software.

In the mid-1990s, it was common for magazines to publish stories about how Notes reshaped businesses large and small. A 1996 Inc. piece, for example, described how a natural-foods company successfully produced a new product in just eight months, a feat the company directly credited to Notes.

“It’s become our general manager,” Groveland Trading Co. president Steve McDonnell recalled.

Notes wasn’t cheap (InfoWorld listed the price circa 1990 as US $62,000), and it was complicated to manage. But the results it enabled were hard to dismiss. IBM noticed and ended up buying Lotus in 1995, almost entirely to get ahold of Notes. Even earlier, Microsoft had realized that office collaboration was a big deal, and it wanted in.


Microsoft jumps on the groupware bandwagon

[Image: An old white book on a yellow background, titled Microsoft Workgroup Add-on for Windows.] Microsoft’s first foray into collaboration software was its 1992 release of Windows for Workgroups. Despite great efforts to promote the release, the software was not a commercial success. Daltrois/Flickr

Microsoft had high hopes for Windows for Workgroups, the networking-focused variant of its popular Windows 3.1 operating environment. To create buzz for it, the company pulled out all the stops. Seriously.

In the fall of 1992, Microsoft paid something like $2 million to put on a Broadway production with Bill Gates literally center stage, at New York City’s Gershwin Theater, one of the largest on Broadway. It was a wild show, and yet, somehow, there is no video of this event currently posted online—until now. The only person I know of who has a video recording of this extravaganza is, fittingly enough, Ray Ozzie, the groupware guru and Notes inventor. Ozzie later served as a top executive at Microsoft, famously replacing Bill Gates as Chief Software Architect in the mid-2000s, and he has shared this video with us for this post:


The 1992 one-day event was not a hit. Watch to see why. (Courtesy of Ray Ozzie and the Microsoft Corporation)

00:00 Opening number
02:23 “My VGA can hardly wait for your CPU to reciprocate”
05:17 Bill Gates enters the stage
27:55 “Get ready, get set” musical number
31:50 Bit with Mike Appe, Microsoft VP of sales
58:30 Bill Gates does jumping jacks


A 1992 Washington Post article describes the performance, which involved dozens of actors, some of whom were dressed like the Blues Brothers. At one point, Gates did jumping jacks. Gates himself later said, “That was so bad, I thought [then Microsoft CEO] Ballmer was going to retch.” For those who don’t have an extra hour to spend, here is a summary:

To get a taste of the show, watch this news segment from channel 4. Courtesy of Microsoft Corporate Archives

Despite all the effort to generate fanfare, Windows for Workgroups was not a hit. While Windows 3.1 was dominant, Microsoft had built a program that didn’t capture the burgeoning interest in collaborative work in a real way. Among other things, it didn’t initially support TCP/IP, the networking protocol that was winning the market and would enable the rise of the Internet.

In its original version, Windows for Workgroups carried such a negative reputation in Microsoft’s own headquarters that the company nicknamed it Windows for Warehouses, referring to the company’s largely unsold inventory, according to Microsoft’s own expert on company lore, Raymond Chen.

Unsuccessful as it was, the fact that it existed in the first place hinted at Microsoft’s general acknowledgement that perhaps this networking thing was going to catch on with its users.

Launched in late 1992, a few months after Windows 3.1 itself, the product was Microsoft’s first attempt at integrated networking in a Windows package. The software enabled file sharing, printer sharing, and email across a network—table stakes in the modern day but a big deal at the time.

This video presents a very accurate view of what it was like to use Windows in 1994.

Unfortunately, it was a big deal that came a few years late. Microsoft itself was so lukewarm on the product that the company updated it to Windows for Workgroups 3.11 just a year later, a release whose marquee feature wasn’t improved networking but faster disk access. Confusingly, the company had just released Windows NT by this point, an operating system that better matched the needs of enterprise customers.

The workgroup terminology Microsoft introduced with Windows for Workgroups stuck around, though, and it is actually used in Windows to this day.

In 2024, group-oriented software feels like the default paradigm, with single-user apps being the anomaly. Over time, groupware became so pervasive that people no longer think of it as groupware, though there are plenty of big, hefty, groupware-like tools out there, like Salesforce. Now, it’s just software. But no one should forget the long history of collaboration software or its ongoing value. It’s what got most of us through the pandemic, even if we never used the word “groupware” to describe it.

The Trick to a Cleaner Google Search

16 June 2024, 16:00


A version of this post originally appeared on Tedium, Ernie Smith’s newsletter, which hunts for the end of the long tail.

Last month, Google announced some big changes to its search engine that are, in a word, infuriating to users like me.

Google has started adding AI overviews to many of its search results, which essentially generate pre-processed answers to search queries. If you’re using Google to actually find websites rather than get answers, it $!@(&!@ sucks.

But in the midst of all this, Google quietly added something else to its results—a “Web” filter that presents results the way Google did a decade ago, with no extra junk. While Google made its AI-focused changes known on its biggest stage—during its Google I/O event—the Web filter was curiously announced on Twitter by Search Liaison Danny Sullivan.

As Sullivan wrote:

We’ve added this after hearing from some that there are times when they’d prefer to just see links to web pages in their search results, such as if they’re looking for longer-form text documents, using a device with limited internet access, or those who just prefer text-based results shown separately from search features. If you’re in that group, enjoy!

The results are fascinating. It’s essentially Google, minus the extra fluff. No parsing of the information in the results. No surfacing metadata like address or link info. No knowledge panels, but also, no ads. It looks like the Google we learned to love in the early 2000s, and it’s tucked away under the “More” menu.

[Image: A Google search screenshot.] This is what Google search used to look like, without any extras, and it can look like that again. Ernie Smith

For power searchers like myself, it’s likely going to be an amazing tool. But Google’s decision to bury it ensures that few people will use it. The company has essentially bet that you’ll be better off with a pre-parsed guess produced by its AI engine.

It’s worth understanding the tradeoffs, though. A simplified view does not undo the declining quality of Google’s results, largely caused by decades of search-engine optimization by website creators. The same overly optimized results are going to be there, like it or not. It is not Google circa 2001; it is a Google-circa-2001 presentation of Google circa 2024, a very different site.

But if you understand the tradeoffs, it can be a great tool.

And here’s the trick to using it without having to click the ‘Web’ option buried in a menu every single time. Google does not make it easy, but by adding a URL parameter to your search—in this case, “udm=14”—you can get directly to the Web results in a search.

That sounds like extra work until you realize that many browsers allow you to add custom search engines, using %s as a stand-in for whatever you type in the search box. And it works great in the case of Google.
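
Here is a minimal sketch of the same idea in Python, handy for testing the parameter outside a browser. The udm=14 parameter and the %s placeholder come straight from this article; the function name and example query are only for illustration.

# A minimal sketch: build the "Web only" search URL described above.
import urllib.parse
import webbrowser

SEARCH_TEMPLATE = "https://www.google.com/search?q=%s&udm=14"

def web_only_search(query: str) -> str:
    # quote_plus() encodes spaces and punctuation the same way a browser
    # does when it substitutes %s in a custom search engine entry.
    return SEARCH_TEMPLATE.replace("%s", urllib.parse.quote_plus(query))

url = web_only_search("history of groupware")
print(url)            # https://www.google.com/search?q=history+of+groupware&udm=14
webbrowser.open(url)  # opens the stripped-down "Web" results page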


[Image: Screenshot of a menu bar.] You can specify the default URL for the omnibar search in your browser. Ernie Smith

In Vivaldi, my weapon of choice, I did this:

● Go to Settings -> Search

● Look at the list of search engines, and hit the plus button at the bottom left of the dialog box to add a new one

● Name the new item “Google Web Only,” and give it the nickname of “gw”

● Set the URL as https://www.google.com/search?q=%s&udm=14

● Set it as your default search

Now, when you use the omnibar on your browser of choice, it will automatically push you to the Google Web Only search. If you want a more traditional search, add a “g” in front of the search in your omnibar, and it will give you the full-fat search, knowledge panels and all. Don’t want to make it your default? Don’t.

A variant of this should work for most Chromium-based browsers, including Chrome proper. It is also possible in Firefox with an extension. Safari, which does not allow you to add custom search engines by default, is a little more complicated, but it is possible through the use of custom extensions like HyperWeb for iOS. I’m still looking for a Safari-for-Mac solution.

Or, you can use a front-end that I created at udm14.com or udm14.org.

When you want something more elemental, less adulterated, it’s there, no extra junk.

It’s depressing that it’s gotten to this, isn’t it?

The Sneaky Standard

18 May 2024, 17:00


A version of this post originally appeared on Tedium, Ernie Smith’s newsletter, which hunts for the end of the long tail.

Personal computing has changed a lot in the past four decades, and one of the biggest changes, perhaps the most unheralded, comes down to compatibility. These days, you generally can’t fry a computer by plugging in a joystick that the computer doesn’t support. Simply put, standardization slowly fixed this. One of the best examples of a bedrock standard is the peripheral component interconnect, or PCI, which came about in the early 1990s and appeared in consumer machines three decades ago this year. To this day, PCI slots are used to connect network cards, sound cards, disk controllers, and other peripherals to computer motherboards via a bus that carries data and control signals. PCI’s lessons gradually shaped other standards, like USB, and ultimately made computers less frustrating. So how did we get it? Through a moment of canny deception.

[Video: Intel Inside Pentium processor commercial (1994)]

Embracing standards: the computing industry’s gift to itself

In the 1980s, when you used the likes of an Apple II or a Commodore 64 or an MS-DOS machine, you were essentially locked into an ecosystem. Floppy disks often weren’t compatible. The peripherals didn’t work across platforms. If you wanted to sell hardware in the 1980s, you were stuck building multiple versions of the same device.

For example, the KoalaPad was a common drawing tool sold in the early 1980s for numerous platforms, including the Atari 800, the Apple II, the TRS-80, the Commodore 64, and the IBM PC. It was essentially the same device on every platform, and yet, KoalaPad’s manufacturer, Koala Technologies, had to make five different versions of this device, with five different manufacturing processes, five different connectors, five different software packages, and a lot of overhead. It was wasteful, made being a hardware manufacturer more costly, and added to consumer confusion.

[Video: Drawing on a 1983 KoalaPad (Apple IIe)]

This slowly began to change around 1982, when the market for IBM PC clones started taking off. It was a happy accident—IBM’s decision to use a bunch of off-the-shelf components for its PC turned them into a de facto standard. Gradually, it became harder for computing platforms to remain islands unto themselves. Even when IBM itself tried to sell the computing world on a bunch of proprietary standards in its PS/2 line, it didn’t work. The cat was already out of the bag. It was too late.

So how did we end up with the standards that we have today, and the PCI expansion card standard specifically? PCI wasn’t the only game in town—you could argue, for example, that if things had played out differently, we’d all be using NuBus or Micro Channel architecture. But PCI proved to be a standard built for the long haul, outlasting the competing standards of its era.

Who was responsible for spearheading this standard? Intel. While PCI was a cross-platform technology, it proved to be an important strategy for the chipmaker to consolidate its power over the PC market at a time when IBM was no longer shaping the architecture of the PC, having taken its foot off the gas to focus on its own PowerPC architecture and narrower plays like the ThinkPad.

The vision of PCI was simple: an interconnect standard that was not intended to be limited to one line of processors or one bus. But don’t mistake standardization for cooperation. PCI was a chess piece—a part of a different game than the one PC manufacturers were playing.

[Image: Close-up of a board showing several black raised PCIe interconnects.] The PCI standard and its derivatives have endured for over three decades. Modern computers with a GPU often use a PCIe interconnect. Alamy

In the early 1990s, Intel needed a win

In the years before Intel’s Pentium processor came out in 1993, there was some skepticism about whether Intel could maintain its status at the forefront of the desktop-computing field.

In lower-end consumer machines, players like Advanced Micro Devices (AMD) and Cyrix were starting to throw their weight around. At the high end of the professional market, workstation-level computing from the likes of Sun Microsystems, Silicon Graphics, and Digital Equipment Corporation suggested there wasn’t room for Intel in the long run. And laterally, the company suddenly found itself competing with a triple threat of IBM, Motorola, and Apple, whose PowerPC chip was about to hit the market.

A Bloomberg piece from the period painted Intel as being boxed in between these various extremes:

If its rivals keep gaining, Intel could eventually lose ground all around.

This is no idle threat. Cyrix Corp. and Chips & Technologies Inc. have re-created—and improved—Intel’s 386 without, they say, violating copyrights or patents. AMD has at least temporarily won the right in court to make 386 clones under a licensing deal that Intel canceled in 1985. In the past 12 months, AMD has won 40% of a market that since 1985 has given Intel $2 billion in profits and a $2.3 billion cash hoard. The 486 may suffer next. Intel has been cutting its prices faster than for any new chip in its history. And in mid-May, it chopped 50% more from one model after Cyrix announced a chip with some similar features. Although the average price of a 486 is still four times that of a 386, analysts say Intel’s profits may grow less than 5% this year, to about $850 million.

Intel’s chips face another challenge, too. Ebbing demand for personal computers has slowed innovation in advanced PCs. This has left a gap at the top—and most profitable—end of the desktop market that Sun, Hewlett-Packard Co., and other makers of powerful workstations are working to fill. Thanks to microprocessors based on a technology known as RISC, or reduced instruction-set computing, workstations have dazzling graphics and more oomph—handy for doing complex tasks and moving data faster over networks. And some are as cheap as high-end PCs. So the workstation makers are now making inroads among such PC buyers as stock traders, banks, and airlines.

This was a deep underestimation of Intel’s market position, it turned out. The company was actually well-positioned to shape the direction of the industry through standardization. It had a direct say in what appeared on the motherboards of millions of computers, and that gave it impressive power to wield. If Intel didn’t want to support a given standard, that standard would likely be dead in the water.

How Intel crushed a standards body on the way to giving us an essential technology

The Video Electronics Standards Association, or VESA, is perhaps best known today for its mounting system for computer monitors and its DisplayPort technology. But in the early 1990s, it was working on a video-focused successor to the Industry Standard Architecture (ISA) internal bus, widely used in IBM PC clones.

A bus, the physical wiring that lets a CPU talk to internal and external peripheral devices, is something of a bedrock of computing—and in the wrong setting, a bottleneck. The ISA expansion card slot, which had become a de facto standard in the 1980s, had given the IBM PC clone market something to build against during its first decade. But by the early 1990s, for high-bandwidth applications, particularly video, it was holding back innovation. It just wasn’t fast enough to keep up, even after it had been upgraded from being able to handle 8 bits of data at once to 16.

That’s where the VESA Local Bus (VL-Bus) came into play. Built to work only with video cards, the standard offered a faster connection, and could handle 32 bits of data. It was targeted at the Super VGA standard, which offered higher resolution (up to 1280 x 1024 pixels) and richer colors at a time when Windows was finally starting to take hold in the market. To overcome the limitations of the ISA bus, graphics card and motherboard manufacturers started collaborating on proprietary interfaces, creating an array of incompatible graphics buses. The lack of a consistent experience around Super VGA led to VESA’s formation. The new VESA slot, which extended the existing 16-bit ISA bus with an additional 32-bit video-specific connector, was an attempt to fix that.

It wasn’t a massive leap—more like a stopgap improvement on the way to better graphics.

And it looked like Intel was going to go for the VL-Bus. But there was one problem—Intel actually wasn’t feeling it, and it didn’t exactly make that point clear to the companies supporting the VESA standards body until it was too late for them to react.

Intel revealed its hand in an interesting way, according to The San Francisco Examiner tech reporter Gina Smith:

Until now, virtually everyone expected VESA’s so-called VL-Bus technology to be the standard for building local bus products. But just two weeks before VESA was planning to announce what it came up with, Intel floored the VESA local bus committee by saying it won’t support the technology after all. In a letter sent to VESA local bus committee officials, Intel stated that supporting VESA’s local bus technology “was no longer in Intel’s best interest.” And sources say it went on to suggest that VESA and Intel should work together to minimize the negative press impact that might arise from the decision.

Good luck, Intel. Because now that Intel plans to announce a competing group that includes hardware heavyweights like IBM, Compaq, NCR and DEC, customers and investors (and yes, the press) are going to wonder what in the world is going on.

Not surprisingly, the people who work for VESA are hurt, confused and angry. “It’s a political nightmare. We’re extremely surprised they’re doing this,” said Ron McCabe, chairman for the committee and a product manager at VESA member Tseng Labs. “We’ll still make money and Intel will still make money, but instead of one standard, there will now be two. And it’s the customer who’s going to get hurt in the end.”

But Intel had seen an opportunity to put its imprint on the computing industry. That opportunity came in the form of PCI, a technology that the firm’s Intel Architecture Labs started developing around 1990, two years before the fateful rejection of VESA. Essentially, Intel had been playing both sides on the standards front.

Why PCI

Why make such a hard shift, screwing over a trusted industry standards body out of nowhere? Beyond wanting to put its mark on the standard, Intel also saw an opportunity to build something more future-proof: something that could benefit not just graphics cards but every expansion card in the machine.

As John R. Quinn wrote in PC Magazine in 1992:

Intel’s PCI bus specification requires more work on the part of peripheral chip-makers, but offers several theoretical advantages over the VL-Bus. In the first place, the specification allows up to ten peripherals to work on the PCI bus (including the PCI controller and an optional expansion-bus controller for ISA, EISA, or MCA). It, too, is limited to 33 MHz, but it allows the PCI controller to use a 32-bit or a 64-bit data connection to the CPU.

In addition, the PCI specification allows the CPU to run concurrently with bus-mastering peripherals—a necessary capability for future multimedia tasks. And the Intel approach allows a full burst mode for reads and writes (Intel’s 486 only allows bursts on reads).

Essentially, the PCI architecture is a CPU-to-local bus bridge with FIFO (first in, first out) buffers. Intel calls it an “intermediate” bus because it is designed to uncouple the CPU from the expansion bus while maintaining a 33-MHz 32-bit path to peripheral devices. By taking this approach, the PCI controller makes it possible to queue writes and reads between the CPU and PCI peripherals. In theory, this would enable manufacturers to use a single motherboard design for several generations of CPUs. It also means more sophisticated controller logic is necessary for the PCI interface and peripheral chips.

To put that all another way, VESA came up with a slightly faster bus standard for the next generation of graphics cards, one just fast enough to meet the needs of Intel’s recent i486 microprocessor users. Intel came up with an interface designed to reshape the next decade of computing, one that it would let its competitors use. This bus would allow people to upgrade their processor across generations without needing to upgrade their motherboard. Intel brought a gun to a knife fight, and it made the whole debate about VL-Bus seem insignificant in short order.

The result was that, no matter how miffed the VESA folks were, Intel had consolidated power for itself by creating an open standard that would eventually win the next generation of computers. Sure, Intel let other companies use the PCI standard, even companies like Apple that weren’t directly doing business with Intel on the CPU side. But Intel, by pushing forth PCI, suddenly made itself relevant to the entire next generation of the computing industry in a way that ensured it would have a second foothold in hardware. The “Intel Inside” marketing label was not limited to the processors, as it turned out.

The influence of Intel’s introduction of PCI is still felt: Thirty-two years later, and three decades after PCI became a major consumer standard, we’re still using PCI derivatives in modern computing devices.

PCI and other standards

Looking at PCI, and its successor PCI Express, less as a way to connect peripherals to our computers and more as a way for Intel to maintain its dominance over the PC industry highlights something fascinating about standardization.

It turns out that perhaps Intel’s greatest investment in computing in the 1990s was not the Pentium processor but Intel Architecture Labs, which quietly made the entire computing industry better by working on the things that frustrated consumers and manufacturers alike.

Essentially, as IBM had begun to take its eye off the massive clone market it unwittingly built during this period, Intel used standardization to fill the power void. It worked pretty well, and made the company integral to computer hardware beyond the CPU. In fact, devices you use daily—that Intel played zero part in creating—have benefited greatly from the company’s standards work. If you’ve ever used a device with a USB or Bluetooth connection, you can thank Intel for that.

Five offshoots of the original PCI standard that you may be familiar with


Accelerated Graphics Port. Effectively a PCI-first approach to the VL-Bus standard, a slot dedicated especially to graphics, this port was a way to offer access to faster graphics cards at a time when 3D graphics were starting to hit the market in a big way. Its first appearance came not long after the original PCI standard.

PCI-X. Despite the name, Intel was less involved in this standard, which was intended for high-end workstations and server environments. Instead, the standard was developed by IBM, Compaq, and Hewlett-Packard, doubling the bandwidth of the existing PCI standard—and released in the wild not long before HP and Compaq merged in 2002. But the slot standard was effectively a dead end: It did not see wide use with PCs, likely because Intel chose not to give the technology its blessing, but was briefly utilized by the Power Macintosh G5 line of computers.

PCIe. This is the upgrade to PCI that Intel did choose to bless, and it’s the one used by desktop computers today, in part because it was developed to allow for a huge increase in flexibility compared to PCI, in exchange for somewhat more complexity. Key to PCIe’s approach is the use of “lanes” of data transfer, allowing high-speed cards like graphics adapters more bandwidth (up to 16 lanes) and slower technologies like network adapters or audio adapters less. This has given PCIe unparalleled backwards compatibility—it’s technically possible to run a modern card on a first-gen PCIe port in exchange for lower speed—while allowing the standard to continue improving. To give you an idea of how far it’s come: A one-lane fifth-generation PCIe slot is roughly as fast as a 16-lane first-generation slot (a rough calculation follows this list).

Thunderbolt. Thunderbolt can best be thought of as a way to access PCIe lanes through a cable. First used by Apple in 2011, it has become common on laptops of all stripes in recent years. Unlike PCI and PCIe, which are open to all manufacturers, Thunderbolt is closely associated with Intel. This meant that Intel’s competitor AMD traditionally did not offer Thunderbolt ports until USB4—a reworked form of the Thunderbolt 3 standard—emerged.

Non-Volatile Memory Express (NVMe). This popular Intel-backed standard, dating to 2011, has completely rewritten the way we think about storage in computers. Storage was once a technology built around mechanical parts; NVMe has allowed for ever-faster solid-state drives whose communication speeds take advantage of innovations in the PCIe spec. Modern NVMe drives, which can reach speeds above 6,000 megabytes per second, are roughly 10 times the speed of comparable SATA solid-state drives, which top out at 600 MB/s. And, thanks to the corresponding M.2 expansion card standard, they’re far smaller and significantly easier to install.
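
To make the lane comparison in the PCIe entry above concrete, here is a back-of-the-envelope check. The per-lane throughput figures are commonly cited approximations added for illustration, not numbers from this article.

# Rough per-lane PCIe throughput in MB/s (approximate figures, assumed here:
# PCIe 1.x runs at 2.5 GT/s with 8b/10b encoding, PCIe 5.0 at 32 GT/s with
# 128b/130b encoding).
PER_LANE_MB_S = {1: 250, 5: 3938}

gen1_x16 = 16 * PER_LANE_MB_S[1]  # a 16-lane first-generation slot
gen5_x1 = 1 * PER_LANE_MB_S[5]    # a one-lane fifth-generation slot

print(f"PCIe 1.x x16: ~{gen1_x16} MB/s")  # ~4000 MB/s
print(f"PCIe 5.0 x1:  ~{gen5_x1} MB/s")   # ~3938 MB/s
# Roughly the same bandwidth, which is why one modern lane can stand in for
# a full first-generation x16 slot.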

Craig Kinnie, the director of Intel Architecture Labs in the 1990s, said it best in 1995, upon coming to an agreement with Microsoft on a 3D graphics architecture for the PC platform. “What’s important to us is we move in the same direction,” he said. “We are working on convergent paths now.”

That was about collaborating with Microsoft. But really, it has been Intel’s modus operandi for decades—what’s good for the technology field is good for Intel. Innovations developed or invented by Intel—like Thunderbolt, Ultrabooks, and the Next Unit of Computing (NUC)—have done much to shape the way we buy and use computers.

For all the talk of Moore’s Law as a driving factor behind Intel’s success, the true story might be its sheer cat-herding capabilities. The company that builds the standards builds the industry. Even as Intel faces increasing competition from alliterative processing players like ARM, Apple, and AMD, as long as it doesn’t lose sight of the roles standards played in its success, it might just hold on a few years longer.

Ironically, Intel’s standards-driving winning streak, now more than three decades old, might have all started the day it decided to walk out on a standards body.
