
Engineering the First Fitbit: The Inside Story

By Tekla S. Perry

7 August 2024 at 15:00


It was December 2006. Twenty-nine-year-old entrepreneur James Park had just purchased a Wii game system. It included the Wii Nunchuk, a US $29 handheld controller with motion sensors that let game players interact by moving their bodies—swinging at a baseball, say, or boxing with a virtual partner.

Park became obsessed with his Wii.

“I was a tech-gadget geek,” he says. “Anyone holding that nunchuk was fascinated by how it worked. It was the first time that I had seen a compelling consumer use for accelerometers.”

After a while, though, Park spotted a flaw in the Wii: It got you moving, sure, but it trapped you in your living room. What if, he thought, you could take what was cool about the Wii and use it in a gadget that got you out of the house?

A clear plastic package contains a first-generation black Fitbit. Text reads “Fitbit,” “Wireless Personal Tracker,” and “Tracks your fitness & sleep.” The first generation of Fitbit trackers shipped in this package in 2009. NewDealDesign

“That,” says Park, “was the aha moment.” His idea became Fitbit, an activity tracker that has racked up sales of more than 136 million units since its first iteration hit the market in late 2009.

But back to that “aha moment.” Park quickly called his friend and colleague Eric Friedman. In 2002, the two, both computer scientists by training, had started a photo-sharing company called HeyPix, which they sold to CNET in 2005. They were still working for CNET in 2006, but it wasn’t a bad time to think about doing something different.

Friedman loved Park’s idea.

“My mother was an active walker,” Friedman says. “She had a walking group and always had a pedometer with her. And my father worked with augmentative engineering [assistive technology] for the elderly and handicapped. We’d played with accelerometer tech before. So it immediately made sense. We just had to refine it.”

The two left CNET, and in April 2007 they incorporated the startup with Park as CEO and Friedman as chief technology officer. Park and Friedman weren’t trying to build the first step counter—mechanical pedometers date back to the 1960s. They weren’t inventing the first smart activity tracker—BodyMedia, a medical device manufacturer, had in 1999 included accelerometers with other sensors in an armband designed to measure calories burned. And Park and Friedman didn’t get a smart consumer tracker to market first. In 2006, Nike had worked with Apple to launch the Nike+ for runners, a motion-tracking system that required a special shoe and a receiver that plugged into an iPod.

Two people stand on a busy sidewalk, one wearing a dark sweater and jeans with arms crossed, the other in a brown checkered shirt and light-colored pants with hands on hips. Fitbit’s founders James Park [left] and Eric Friedman released their first product in 2009, when this photo was taken. Peter DaSilva/The New York Times/Redux

Park wasn’t aware of any of this when he thought about getting fitness out of the living room, but the two quickly did their research and figured out what they did and didn’t want to do.

“We didn’t want to create something expensive, targeted at athletes,” he says. “Or something that was dumb and not connected to software. And we wanted something that could provide social connection, like photo sharing did.”

That something had to be comfortable to wear all day, be easy to use, upload its data seamlessly so the data could be tracked and shared with friends, and rarely need charging. Not an easy combination of requirements.

“It’s one of those things where the simpler you get, the harder it becomes to design something well,” Park says.

The first Fitbit was designed for women

The first design decision was the biggest one. Where on the body did they expect people to put this wearable? They weren’t going to ask people to buy special shoes, like the Nike+, or wear a thick band on their upper arms, like BodyMedia’s tracker.

They hired NewDealDesign to figure out some of these details.

“In our first two weeks, after multiple discussions with Eric and James, we decided that the project was going to be geared to women,” says Gadi Amit, president and principal designer of NewDealDesign. “That decision was the driver of the form factor.”

“We wanted to start with something familiar to people,” Park says, “and people tended to clip pedometers to their belts.” So a clip-on device made sense. But women generally don’t wear belts.

To do what it needed to do, the clip-on gadget would have to contain a roughly 2.5-by-2.5-centimeter (1-by-1-inch) printed circuit board, Amit recalls. The big breakthrough came when the team decided to separate the electronics and the battery, which in most devices are stacked. “By doing that, and elongating it a bit, we found that women could put it anywhere,” Amit says. “Many would put it in their bras, so we targeted the design to fit a bra in the center front, purchasing dozens of bras for testing.”

The decision to design for women also drove the overall look, to “subdue the user interface,” as Amit puts it. They hid a low-resolution monochrome OLED display behind a continuous plastic cover, with the display lighting up only when you asked it to. This choice helped give the device an impressive battery life.

A black rectangular object displaying a small blue flower and clipped onto light blue fabric The earliest Fitbit devices used an animated flower as a progress indicator. NewDealDesign

They also came up with the idea of a flower as a progress indicator—inspired, Park says, by the Tamagotchi, one of the biggest toy fads of the late 1990s. “So we had a little animated flower that would shrink or grow based on how active you were,” Park explains.

And after much discussion over controls, the group gave the original Fitbit just one button.

Hiring an EE—from Dad—to design Fitbit’s circuitry

Park and Friedman knew enough about electronics to build a crude prototype, “stuffing electronics into a box made of cut-up balsa wood,” Park says. But they also knew that they needed to bring in a real electrical engineer to develop the hardware.

Fortunately, they knew just whom to call. Friedman’s father, Mark, had for years been working to develop a device for use in nursing homes, to remotely monitor the position of bed-bound patients. Mark’s partner in this effort was Randy Casciola, an electronics engineer and currently president of Morewood Design Labs.

Eric called his dad, told him about the gadget he and Park envisioned, and asked if he and Casciola could build a prototype.

“Mark and I thought we’d build a quick-and-dirty prototype, something they could get sensor data from and use for developing software. And then they’d go off to Asia and get it miniaturized there,” Casciola recalls. “But one revision led to another.” Casciola ended up working on circuit designs for Fitbits virtually full time until the sale of the company to Google, announced in 2019 and completed in early 2021.

“We saw some pretty scary manufacturers. Dirty facilities, flash marks on their injection-molded plastics, very low precision.”
—James Park

“We were just two little guys in a little office in Pittsburgh,” Casciola says. “Before Fitbit came along, we had realized that our nursing-home thing wasn’t likely to ever be a product and had started taking on some consulting work. I had no idea Fitbit would become a household name. I just like working on anything, whether I think it’s a good idea or not, or even whether someone is paying me or not.”

The earliest prototypes were pretty large, about 10 by 15 cm, Casciola says. They were big enough to easily hook up to test equipment, yet small enough to strap on to a willing test subject.

After that, Park and Eric Friedman—along with Casciola, two contracted software engineers, and a mechanical design firm—struggled with turning the bulky prototype into a small and sleek device that counted steps, stored data until it could be uploaded and then transmitted it seamlessly, had a simple user interface, and didn’t need daily charging.

“Figuring out the right balance of battery life, size, and capability kept us occupied for about a year,” Park says.

A black Fitbit sits vertically in a square stand with a wire coming out. The screen on the device reads “BATT 6%.” The Fitbit prototype, sitting on its charger, booted up for the first time in December 2008. James Park

After deciding to include a radio transmitter, they made a big move: They turned away from the Bluetooth standard for wireless communications in favor of the ANT protocol, a technology developed by Garmin that used far less power. That meant the Fitbit wouldn’t be able to upload to computers directly. Instead, the team designed their own base station, which could be left plugged into a computer and would grab data anytime the Fitbit wearer passed within range.

Casciola didn’t have expertise in radio-frequency engineering, so he relied on the supplier of the ANT radio chips: Nordic Semiconductor, in Trondheim, Norway.

“They would do a design review of the circuit board layout,” he explains. “Then we would send our hardware to Norway. They would do RF measurements on it and tell me how to tweak the values of the capacitors and inductors in the RF chain, and I would update the schematic. It’s half engineering and half black magic to get this RF stuff working.”

Another standard they didn’t use was the ubiquitous USB charging connection.

“We couldn’t use USB,” Park says. “It just took up too much volume. Somebody actually said to us, ‘Whatever you do, don’t design a custom charging system because it’ll be a pain, it’ll be super expensive.’ But we went ahead and built one. And it was a pain and super expensive, but I think it added a level of magic. You just plopped your device on [the charger]. It looked beautiful, and it worked consistently.”

Most of the electronics they used were off the shelf, including a 16-bit Texas Instruments MSP430 microcontroller, plus 92 kilobytes of flash memory and 4 kilobytes of RAM to hold the operating system, the rest of the code, all the graphics, and at least seven days’ worth of collected data.
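
As a rough back-of-the-envelope check (the record layout below is an assumption for illustration; Fitbit’s actual data format isn’t public), a week of minute-by-minute activity data fits in a fraction of that flash:

```python
# Hypothetical storage budget for 7 days of minute-resolution activity data.
MINUTES_PER_DAY = 24 * 60      # 1,440 minutes
DAYS = 7
BYTES_PER_RECORD = 4           # assumed: 2 bytes of steps + 2 bytes of other data

data_bytes = MINUTES_PER_DAY * DAYS * BYTES_PER_RECORD
print(f"{data_bytes} bytes ({data_bytes / 1024:.1f} KB)")  # 40320 bytes (39.4 KB)
```

Under those assumptions, the week of data consumes about 39 KB, leaving roughly half of the 92 KB of flash for the firmware and graphics.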

The Fitbit was designed to resist sweat, and the devices generally survived showers and quick dips, says Friedman. “But hot tubs were the bane of our existence. People clipped it to their swimsuits and forgot they had it on when they jumped into the hot tub.”

Fitbit’s demo or die moment

Up to this point, the company was surviving on $400,000 invested by Park, Friedman, and a few people who had backed their previous company. But more money would be needed to ramp up manufacturing. And so a critical next step would be a live public demo, which they scheduled for the TechCrunch conference in San Francisco in September 2008.

Live demonstrations of new technologies are always risky, and this one walked right up to the edge of disaster. The plan was to ask an audience member to call out a number, and then Park, wearing the prototype in its balsa-wood box, would walk that number of steps. The count would sync wirelessly to a laptop projecting to a screen on stage. When Friedman hit refresh on the browser, the step count would appear on the screen. What could go wrong?

A lot. Friedman explains: “You think counting steps is easy, but let’s say you do three steps. One, two, three. When you bring your feet together, is that a step or is that the end? It’s much easier to count 1,000 steps than it is to do 10 steps. If I walk 10 steps and am off by one, that’s a glaring error. With 1,000, that variance becomes noise.”

The first semi-assembled Fitbit records its inaugural step count. James Park

After a lot of practice, the two thought they could pull it off. Then came the demo. “While I was walking, the laptop crashed,” Park says. “I wasn’t aware of that. I was just walking happily. Eric had to reboot everything while I was still walking. But the numbers showed up; I don’t think anyone except Eric realized what had happened.”

That day, some 2,000 preorders poured in. And Fitbit closed a $2 million round of venture investment the next month.

Though Park and Friedman had hoped to get Fitbits into users’ hands—or clipped onto their bras—by Christmas of 2008, they missed that deadline by a year.

The algorithms that determine Fitbit’s count

Part of Fitbit’s challenge in getting from prototype to shippable product was software development. They couldn’t expect users to walk as precisely as Park did for the demo. Instead, the device’s algorithms needed to determine what was a step and what was a different kind of motion—say, someone scratching their nose.

“Data collection was difficult,” Park says. “Initially, it was a lot of us wearing prototype devices doing a variety of different activities. Our head of research, Shelten Yuen, would follow, videotaping so we could go back and count the exact number of steps taken. We would wear multiple devices simultaneously, to compare the data against each other.”

Friedman remembers one such outing. “James was tethered to the computer, and he was pretending to walk his dog around the Haight [in San Francisco], narrating this little play that he’s putting on: ‘OK, I’m going to stop. The dog is going to pee on this tree. And now he’s going over there.’ The great thing about San Francisco is that nobody looks strangely at two guys tethered together walking around talking to themselves.”

“Older people tend to have an irregular cadence—to the device, older people look a lot like buses going over potholes.” –James Park

“Pushing baby strollers was an issue,” because the wearer’s arms aren’t swinging, Park says. “So one of our guys put an ET doll in a baby stroller and walked all over the city with it.”

Road noise was another big issue. “Yuen, who was working on the algorithms, was based in Cambridge, Mass.,” Park says. “They have more potholes than we do. When he took the bus, the bus would hit the potholes and [the device would] be bouncing along, registering steps.” They couldn’t just fix the issue by looking for a regular cadence to count steps, he adds, because not everyone has a regular cadence. “Older people tend to have an irregular cadence—to the device, older people look a lot like buses going over potholes.”
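
A minimal sketch of the kind of naive step counter such data was used to improve on might look like the following Python (an illustration only; Fitbit’s production algorithms are proprietary). It counts peaks in the accelerometer magnitude and keeps only those spaced like a walking cadence, which is precisely the filter that potholes and irregular walkers defeat:

```python
import numpy as np
from scipy.signal import find_peaks

def count_steps(accel_mag, fs=50.0):
    """Naive pedometer: count peaks in the accelerometer magnitude (in g)
    that are spaced like a human stride. The 50-Hz sample rate is an
    assumption for illustration, not Fitbit's specification."""
    signal = accel_mag - np.mean(accel_mag)        # remove the gravity offset
    peaks, _ = find_peaks(signal,
                          height=0.1,              # ignore tiny jiggles
                          distance=int(fs / 3))    # at most ~3 steps per second
    steps = 0
    for gap in np.diff(peaks) / fs:                # seconds between peaks
        if 0.33 <= gap <= 2.0:                     # plausible stride interval
            steps += 1
    # Weakness: a bus bouncing over potholes produces peaks at similar
    # intervals, while walkers with irregular cadence get filtered out,
    # so real trackers layer smarter motion classification on top.
    return steps
```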

Fitbit’s founders enter the world of manufacturing

A consumer gadget means mass manufacturing, potentially in huge quantities. They talked to a lot of contract-manufacturing firms, Park recalls. They realized that as a startup with an unclear future market, they wouldn’t be of interest to the top tier of manufacturers. But they couldn’t go with the lowest-budget operations, because they needed a reasonable level of quality.

“We saw some pretty scary manufacturers,” Park said. “Dirty facilities, flash marks on their injection-molded plastics [a sign of a bad seal or other errors], very low precision.” They eventually found a small manufacturer that was “pretty good but still hungry for business.” The manufacturer was headquartered in Singapore, while their surface-mount supplier, which put components directly onto printed circuit boards, was in Batam, Indonesia.

Two rows of women wearing light blue shirts stand at long tables assembling devices. Workers assemble Fitbits by hand in October of 2008. James Park

Working with that manufacturer, Park and Friedman made some tweaks in the design of the circuitry and the shape of the case. They struggled over how to keep water—and sweat—out of the device, settling on ultrasonic welding for the case and adding a spray-on coating for the circuitry after some devices were returned with corrosion on the electronics. That required tweaking the layout to make sure the coating would get between the chips. The coating on each circuit board had to be checked and touched up by hand. When they realized that the coating increased the height of the chips, they had to tweak the layout some more.

In December 2009, just a week before the ship date, Fitbits began rolling off the production line.

“I was in a hotel room in Singapore testing one of the first fully integrated devices,” Park says. “And it wasn’t syncing to my computer. Then I put the device right next to the base station, and it started to sync. Okay, that’s good, but what was the maximum distance it could sync? And that turned out to be literally just a few inches. In every other test we had done, it was fine. It could sync from 15 or 20 feet [5 or 6 meters] away.”

The problem, Park eventually figured out, occurred when the two halves of the Fitbit case were ultrasonically welded together. In previous syncing tests, the cases had been left unsealed. The sealing process pushed the halves closer together, so that the cable for the display touched or nearly touched the antenna printed on the circuit board, which affected the radio signal. Park tried squeezing the halves together on an unsealed unit and reproduced the problem.

Two photos. One photo shows 3 men working in a lab wearing cleanroom suits. One man is seated and handling electronic components, and the others stand observing. The other photo shows a row of six black rectangular devices with green circuit boards hanging out of them Getting the first generation of Fitbits into mass production required some last-minute troubleshooting. Fitbit cofounder James Park [top, standing in center] helps debug a device at the manufacturer shortly before the product’s 2009 launch. Early units from the production line are shown partially assembled [bottom]. James Park

“I thought, if we could just push that cable away from the antenna, we’d be okay,” Park said. “The only thing I could find in my hotel room to do that was toilet paper. So I rolled up some toilet paper really tight and shoved it in between the cable and the antenna. That seemed to work, though I wasn’t really confident.”

Park went to the factory the next day to discuss the problem—and his solution—with the manufacturing team. They refined his fix—replacing the toilet paper with a tiny slice of foam—and that’s how the first generation of Fitbits shipped.

Fitbit’s fast evolution

The company sold about 5,000 of those $99 first-generation units in 2009, and more than 10 times that number in 2010. The rollout wasn’t entirely smooth. Casciola recalls that Fitbit’s logistics center was sending him a surprising number of corroded devices that had been returned by customers. Casciola’s task was to tear them down and diagnose the problem.

“One of the contacts on the device, over time, was growing a green corrosion,” Casciola says. “But the other two contacts were not.” It turned out the problem came from Casciola’s design of the system-reset trigger, which allowed users to reset the device without a reset button or a removable battery. “Inevitably,” Casciola says, “firmware is going to crash. When you can’t take the battery out, you have to have another way of forcing a reset; you don’t want to have someone waiting six days for the battery to run out before restarting.”

The reset that Casciola designed was “a button on the charging station that you could poke with a paper clip. If you did this with the tracker sitting on the charger, it would reset. Of course, we had to have a way for the tracker to see that signal. When I designed the circuit to allow for that, I ended up with a nominal voltage on one pin.” This low voltage was causing the corrosion.

“If you clipped the tracker onto sweaty clothing—remember, sweat has a high salt content—a very tiny current would flow,” says Casciola. “It was just fractions of a microamp, not enough to cause a reset, but enough, over time, to cause greenish corrosion.”

Two men in white cleanroom suits with hoods stand in front of a door. Cofounders Eric Friedman [left] and James Park visit Fitbit’s manufacturer in December of 2008. James Park

On the 2012 generation of the Fitbit, called the Fitbit One, Casciola added a new type of chip, one that hadn’t been available when he was working on the original design. It allowed the single button to trigger a reset when it was held down for several seconds while the device was sitting on the charger. That eliminated the need for the active pin.

The charging interface was the source of another early problem. In the initial design, the trim of the Fitbit’s plastic casing was painted with chrome. “We originally wanted an actual metal trim,” Friedman says, “but that interfered with the radio signal.”

Chrome wasn’t a great choice either. “It caused problems with the charger interface,” Park adds. “We had to do a lot of work to prevent shorting there.”

They dropped the chrome after some tens of thousands of units were shipped—and then got compliments from purchasers about the new, chrome-less look.

Evolution happened quickly, particularly in the way the device transmitted data. In 2012, when Bluetooth LE became widely available as a new low-power communications standard, the base station was replaced by a small Bluetooth communications dongle. And eventually the dongles disappeared altogether.

“We had a huge debate about whether or not to keep shipping that dongle,” Park says. “Its cost was significant, and if you had a recent iPhone, you didn’t need it. But we didn’t want someone buying the device and then returning it because their cellphone couldn’t connect.” The team closely tracked the penetration rate of Bluetooth LE in cellphones; when they felt that number was high enough, they killed off the dongle.

Fitbit’s wrist-ward migration

After several iterations of the original Fitbit design, sometimes called the “clip” for its shape, the fitness tracker moved to the wrist. This wasn’t a matter of simply redesigning the way the device attached to the body; it required rethinking the algorithms.

The impetus came from some users’ desire to better track their sleep. The Fitbit’s algorithms allowed it to identify sleep patterns, a design choice that, Park says, “was pivotal, because it changed the device from being just an activity tracker to an all-day wellness tracker.” But nightclothes didn’t offer obvious spots for attachment. So the Fitbit shipped with a thin fabric wristband intended for use just at night. Users began asking customer support if they could keep the wristband on around the clock. The answer was no; Fitbit’s step-counting algorithms at the time didn’t support that.

“My father, who turned 80 on July 5, is fixated on his step count. From 11 at night until midnight, he’s in the parking garage, going up flights of stairs. And he is in better shape than I ever remember him.” —Eric Friedman

Meanwhile, a cultural phenomenon was underway. In the mid-2000s, yellow Livestrong bracelets, made out of silicone and sold to support cancer research, were suddenly everywhere. Other causes and movements jumped on the trend with their own brightly colored wristbands. By early 2013, Fitbit and its competitors Nike and Jawbone had launched wrist-worn fitness trackers in roughly the same style as those trendy bracelets. Fitbit’s version was called the Flex, once again designed by NewDealDesign.

A no-button user interface for the Fitbit Flex

The Flex’s interface was even simpler than the original Fitbit’s one button and OLED screen: It had no buttons and no screen, just five LEDs arranged in a row and a vibrating motor. To change modes, you tapped on the surface.

“We didn’t want to replace people’s watches,” Park says. The technology wasn’t yet ready to “build a compelling device—one that had a big screen and the compute power to drive really amazing interactions on the wrist that would be worthy of that screen. The technology trends didn’t converge to make that possible until 2014 or 2015.”

A photo shows a hand wearing a light blue Fitbit Flex reaching toward a tablet displaying the Fitbit app. Another photo shows a black Fitbit Flex. The Fitbit Flex [right], the first Fitbit designed to be worn on the wrist, was released in 2013. It had no buttons and no screen. Users controlled it by tapping; five LEDs indicated progress toward a step count selected via an app [left]. iStock

“The amount of stuff the team was able to convey with just the LEDs was amazing,” Friedman recalls. “The status of where you are towards reaching your [step] goal, that’s obvious. But [also] the lights cycling to show that it’s searching for something, the vibrating when you hit your step goal, things like that.”

The tap part of the interface, though, was “possibly something we didn’t get entirely right,” Park concedes. It took much fine-tuning of the algorithms after launch to better distinguish deliberate taps from other motions, like applauding. Even more important, some users couldn’t quite intuit the right way to tap.

“If it works for 98 percent of your users, but you’re growing to millions of users, 2 percent really starts adding up,” Park says. They brought the button back for the next generation of Fitbit devices.

And the rest is history

In 2010, its first full year on the market, the Fitbit sold some 50,000 units. Fitbit sales peaked in 2015, with almost 23 million devices sold that year, according to Statista. Since then, there’s been a bit of a drop-off, as multifunctional smartwatches have come down in price and grown in popularity and Fitbit knockoffs have entered the market. In 2021, Fitbit still boasted more than 31 million active users, according to Market.us Media. And Fitbit may now be riding the trend back to simplicity, as people find themselves wanting to get rid of distractions and move back to simpler devices. I see this happening in my own family: My smartwatch-wearing daughter traded in that wearable for a Fitbit Charge 6 earlier this year.


Fitbit went public in 2015 at a valuation of $4.1 billion. In 2021 Google completed its $2.1 billion purchase of the company and absorbed it into its hardware division. In April of this year, Park and Friedman left Google. Early retirement? Hardly. The two, now age 47, have started a new company that’s currently in stealth mode.

The idea of encouraging people to be active by electronically tracking steps has had staying power.

“My father, who turned 80 on July 5, is fixated on his step count,” Friedman says. “From 11 at night until midnight, he’s in the parking garage, going up flights of stairs. And he is in better shape than I ever remember him.”

What could be a better reward than that?

This article appears in the September 2024 print issue.


Will This Flying Camera Finally Take Off?

By Tekla S. Perry

31 July 2024 at 14:00


Ten years. Two countries. Multiple redesigns. Some US $80 million invested. And, finally, Zero Zero Robotics has a product it says is ready for consumers, not just robotics hobbyists—the HoverAir X1. The company has sold several hundred thousand flying cameras since the HoverAir X1 started shipping last year. It hasn’t gotten the millions of units into consumer hands—or flying above them—that its founders would like to see, but it’s a start.

“It’s been like a 10-year-long Ph.D. project,” says Zero Zero founder and CEO Meng Qiu Wang. “The thesis topic hasn’t changed. In 2014 I looked at my cell phone and thought that if I could throw away the parts I don’t need—like the screen—and add some sensors, I could build a tiny robot.”

I first spoke to Wang in early 2016, when Zero Zero came out of stealth with its version of a flying camera—at $600. Wang had been working on the project for two years. He started the project in Silicon Valley, where he and cofounder Tony Zhang were finishing up Ph.D.s in computer science at Stanford University. Then the two decamped for China, where development costs are far less.

Flying cameras were a hot topic at the time; startup Lily Robotics demonstrated a $500 flying camera in mid-2015 (and was later charged with fraud for faking its demo video), and in March of 2016 drone-maker DJI introduced a drone with autonomous flying and tracking capabilities that turned it into much the same type of flying camera that Wang envisioned, albeit at the high price of $1,400.

Wang aimed to make his flying camera cheaper and easier to use than these competitors by relying on image processing for navigation—no altimeter, no GPS. In this approach, which has changed little since the first design, one camera looks at the ground and algorithms follow the camera’s motion to navigate. Another camera looks out ahead, using facial and body recognition to track a single subject.
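
A rough sketch of that ground-tracking idea, using OpenCV’s dense optical flow (an illustration under stated assumptions, not Zero Zero’s proprietary code):

```python
import cv2
import numpy as np

def ground_motion(prev_gray, curr_gray):
    """Estimate apparent motion over the ground from two consecutive
    grayscale frames of a downward-facing camera. The mean of the dense
    optical-flow field approximates the camera's translation in pixels
    per frame; scaled by altitude and lens geometry, it becomes a
    velocity estimate."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    dx = float(np.mean(flow[..., 0]))   # horizontal shift, pixels per frame
    dy = float(np.mean(flow[..., 1]))   # vertical shift, pixels per frame
    # Over water this breaks down: the surface itself moves, so the flow
    # no longer reflects the drone's own motion.
    return dx, dy
```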

The current version, at $349, does what Wang had envisioned, which is, he told me, “to turn the camera into a cameraman.” But, he points out, the hardware and software, and particularly the user interface, have changed a lot. The size and weight have been cut in half; it’s just 125 grams. This version uses a different and more powerful chipset, and the controls are on board; while you can select modes from a smartphone app, you don’t have to.

I can verify that it is cute (about the size of a paperback book), lightweight, and extremely easy to use. I’ve never managed to fly a standard drone without help or without crashing it, but I had no problem sending the HoverAir up to follow me down the street and then land on my hand.

It isn’t perfect. It can’t fly over water—the movement of the water confuses the algorithms that judge speed through video images of the ground. And it only tracks people; though many would like it to track their pets, Wang says animals behave erratically, diving into bushes or other places the camera can’t follow. Since the autonomous navigation algorithms rely on the person being filmed to avoid obstacles and simply follow that person’s path, such dives tend to cause the drone to crash.

Since we last spoke eight years ago, Wang has been through the highs and lows of the startup rollercoaster, turning to contract engineering for a while to keep his company alive. He’s become philosophical about much of the experience.

Here’s what he had to say.

We last spoke in 2016. Tell me how you’ve changed.

Meng Qiu Wang: When I got out of Stanford in 2014 and started the company with Tony [Zhang], I was eager and hungry and hasty and I thought I was ready. But retrospectively, I wasn’t ready to start a company. I was chasing fame and money, and excitement.

Now I’m 42, I have a daughter—everything seems more meaningful now. I’m not a Buddhist, but I have a lot of Zen in my philosophy now.

I was trying so hard to flip the page to see the next chapter of my life, but now I realize, there is no next chapter, flipping the page itself is life.

You were moving really fast in 2016 and 2017. What happened during that time?

Wang: After coming out of stealth, we ramped up from 60 to 140 people planning to take this product into mass production. We got a crazy amount of media attention—covered by 2,200 media outlets. We went to CES, and it seemed like we collected every trophy there was there.

And then Apple came to us, inviting us to retail at all the Apple stores. This was a big deal; I think we were the first third-party robotic product to do live demos in Apple stores. We produced about 50,000 units, bringing in about $15 million in revenue in six months.

Then a giant company made us a generous offer and we took it. But it didn’t work out. It was certainly a lesson learned for us. I can’t say more about that, but at this point if I walk down the street and I see a box of pizza, I would not try to open it; there really is no free lunch.

a black caged drone with fans and a black box in the middle This early version of the Hover flying camera generated a lot of initial excitement, but never fully took off. Zero Zero Robotics

How did you survive after that deal fell apart?

Wang: We went from 150 to about 50 people and turned to contract engineering. We worked with toy drone companies, with some industrial product companies. We built computer vision systems for larger drones. We did almost four years of contract work.

But you kept working on flying cameras and launched a Kickstarter campaign in 2018. What happened to that product?

Wang: It didn’t go well. The technology wasn’t really there. We filled some orders and refunded ones that we couldn’t fill because we couldn’t get the remote controller to work.

We really didn’t have enough resources to create a new product for a new product category, a flying camera, to educate the market.

So we decided to build a more conventional drone—our V-Coptr, a V-shaped bi-copter with only two propellers—to compete against DJI. We didn’t know how hard it would be. We worked on it for four years. Key engineers left out of total dismay, they lost faith, they lost hope.

We came so close to going bankrupt so many times—at least six times in 10 years I thought I wasn’t going to be able to make payroll for the next month, but each time I got super lucky with something random happening. I never missed paying one dime—not because of my abilities, just because of luck.

We still have a relatively healthy chunk of the team, though. And this summer my first ever software engineer is coming back. The people are the biggest wealth that we’ve collected over the years. The people who are still with us are not here for money or for success. We just realized along the way that we enjoy working with each other on impossible problems.

When we talked in 2016, you envisioned the flying camera as the first in a long line of personal robotics products. Is that still your goal?

Wang: In terms of short-term strategy, we are focusing 100 percent on the flying camera. I think about other things, but I’m not going to say I have an AI hardware company, though we do use AI. After 10 years I’ve given up on talking about that.

Do you still think there’s a big market for a flying camera?

Wang: I think flying cameras have the potential to become the second home robot [the first being the robotic vacuum] that can enter tens of millions of homes.


Insomniacs Rejoice! This Headband Helps You Fall Asleep

By Tekla S. Perry

4 June 2024 at 15:06


Elemind, a 5-year-old startup based in Cambridge, Mass., today unveiled a US $349 wearable for neuromodulation, the company’s first product. According to cofounder and CEO Meredith Perry, the technology tracks the oscillation of brain waves using electroencephalography (EEG) sensors that detect the electrical activity of the brain and then influence those oscillations using bursts of sound delivered via bone conduction.

Elemind’s first application for this wearable aims to suppress alpha waves to help induce sleep. There are other wearables on the market that monitor brain waves and, through biofeedback, encourage users to actively modify their alpha patterns. Elemind’s headband appears to be the first device to use sound to directly influence the brain waves of a passive user.

In a clinical trial, says Perry [no relation to author], 76 percent of subjects fell asleep more quickly. Those who did see a difference averaged 48 percent less time to progress from awake to asleep. The results were similar to those of comparable trials of pharmaceutical sleep aids, Perry indicated.

“For me,” Perry said, “it cuts through my rumination, quiets my thinking. It’s like noise cancellation for the brain.”

I briefly tested Elemind’s headband in May. I found it comfortable, with a thick cushioned band that sits across the forehead connected to a stretchy elastic loop to keep it in place. In the band are multiple EEG electrodes, a processor, a three-axis accelerometer, a rechargeable lithium-polymer battery, and custom electronics that gather the brain’s electrical signals, estimate their phase, and generate pink noise through a bone-conduction speaker. The whole thing weighs about 60 grams—about as much as a small kiwi fruit.

My test conditions were far from optimal for sleep: early afternoon, a fairly bright conference room, a beanbag chair as bed, and a vent blowing. And my test lasted just 4 minutes. I can say that I didn’t find the little bursts of pink noise (white noise without the higher frequencies) unpleasant. And since I often wear an eye mask, feeling fabric on my face wasn’t disturbing. It wasn’t the time or place to try for sound sleep, but I—and the others in the room—noted that after 2 minutes I was yawning like crazy.

How Elemind tweaks brain waves

What was going on in my brain? Briefly, different brain states are associated with different frequencies of waves. Someone who is relaxed with eyes closed but not asleep produces alpha waves at around 10 hertz. As they drift off to sleep, the alpha waves are supplanted by theta waves, at around 5 Hz. Eventually, the delta waves of deep sleep show up at around 1 Hz.

Ryan Neely, Elemind’s vice president of science and research, explains: “As soon as you put the headband on,” he says, “the EEG system starts running. It uses straightforward signal processing with bandpass filtering to isolate the activity in the 8- to 12-Hz frequency range—the alpha band.”

“Then,” Neely continues, “our algorithm looks at the filtered signal to identify the phase of each oscillation and determines when to generate bursts of pink noise.”
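
In offline form, that pipeline is simple to sketch in Python (a simplified illustration, not Elemind’s code: the sample rate and latency here are assumed values, and filtfilt and hilbert are non-causal, whereas the headband must estimate phase causally in real time):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250.0           # assumed EEG sample rate, Hz
EVOKED_DELAY = 0.05  # assumed stimulus-to-response latency, seconds
ALPHA_HZ = 10.0      # nominal alpha frequency

def trigger_times(eeg):
    """Return times (in seconds) at which to fire pink-noise bursts so the
    evoked response lands at the alpha trough (phase pi)."""
    b, a = butter(4, [8.0, 12.0], btype="bandpass", fs=FS)
    alpha = filtfilt(b, a, eeg)                  # isolate the 8-12 Hz band
    phase = np.angle(hilbert(alpha))             # instantaneous phase, -pi..pi
    # Fire early: target the phase that leads the trough by the evoked delay.
    target = np.angle(np.exp(1j * (np.pi - 2 * np.pi * ALPHA_HZ * EVOKED_DELAY)))
    crossings = (phase[:-1] < target) & (phase[1:] >= target)
    return np.flatnonzero(crossings) / FS
```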

two graphs with black and pink lines, blue text above and a small orange arrow To help a user fall asleep more quickly [top], bursts of pink noise are timed to generate a brain response that is out of phase with alpha waves and so suppresses them. To enhance deep sleep [bottom], the pink noise is timed to generate a brain response that is in phase with delta waves. Source: Elemind

These auditory stimuli, he explains, create ripples in the waves coming from the brain. Elemind’s system tries to align these ripples with a particular phase in the wave. Because there is a gap between the stimulus and the evoked response, Elemind tested its system on 21 people and calculated the average delay, taking that into account when determining when to trigger a sound.

To induce sleep, Elemind’s headband targets the trough in the alpha wave, the point at which the brain is most excitable, Neely says.

“You can think of the alpha rhythm as a gate for communication between different areas of the brain,” he says. “By interfering with that communication, that coordination between different brain areas, you can disrupt patterns, like the ruminations that keep you awake.”

With these alpha waves suppressed, Neely says, the slower oscillations, like the theta waves of light sleep, take over.

Elemind doesn’t plan to stop there. The company plans to add an algorithm that addresses delta waves, the low-frequency 0.5- to 2-Hz waves characteristic of deep sleep. Here, Elemind’s technology will attempt to amplify this pattern with the intent of improving sleep quality.

Is this safe? Yes, Neely says, because auditory stimulation is self-limiting. “Your brain waves have a natural space they can occupy,” he explains, “and this stimulation just moves it within that natural space, unlike deep-brain stimulation, which can move the brain activity outside natural parameters.”

Going beyond sleep to sedation, memory, and mental health

Applications may eventually go beyond inducing and enhancing sleep. Researchers at the University of Washington and McGill University have completed a clinical study to determine if Elemind’s technology can be used to increase the pain threshold of subjects undergoing sedation. The results are being prepared for peer review.

Elemind is also working with a team involving researchers at McGill and the Leuven Brain Institute to determine if the technology can enhance memory consolidation in deep sleep and perhaps have some usefulness for people with mild cognitive impairment and other memory disorders.

Neely would love to see more applications investigated in the future.

“Inverse alpha stimulation [enhancing instead of suppressing the signal] could increase arousal,” he says. “That’s something I’d love to look into. And looking into mental-health treatment would be interesting, because phase coupling between the different brain regions appears to be an important factor in depression and anxiety disorders.”

Perry, who previously founded the wireless power startup UBeam, cofounded Elemind with four university professors with expertise in neuroscience, optogenetics, biomedical engineering, and artificial intelligence. The company has $12 million in funding to date and currently has 13 employees.

Preorders at $349 start today for beta units, and Elemind expects to start general sales later this year. The company will offer customers an optional membership at $7 to $13 monthly that will allow cloud storage of sleep data and access to new apps as they are released.


Robert Kahn: The Great Interconnector

By Tekla S. Perry

20 April 2024 at 17:00


In the mid-1960s, Robert Kahn began thinking about how computers with different operating systems could talk to each other across a network. He didn’t think much about what they would say to one another, though. He was a theoretical guy, on leave from the faculty of the Massachusetts Institute of Technology for a stint at the nearby research-and-development company Bolt, Beranek and Newman (BBN). He simply found the problem interesting.

“The advice I was given was that it would be a bad thing to work on. They would say it wasn’t going to lead to anything,” Kahn recalls. “But I was a little headstrong at the time, and I just wanted to work on it.”

Robert E. Kahn


Photo of an older man in a dark suit in front of a blue and green lined background

Current job: Chairman, CEO, and president of the Corporation for National Research Initiatives (CNRI)

Date of birth: 23 December 1938

Birthplace: Brooklyn, New York

Family: Patrice Ann Lyons, his wife

Education: BEE 1960, City College of New York; M.A. 1962 and Ph.D. 1964, Princeton University

First job: Runner for a Wall Street brokerage firm

First electronics job: Bell Telephone Laboratories, New York City

Biggest surprise in career: Leaving—and then staying out of—academics

Patents: Several, including two related to the digital-object architecture and two on remote pointing devices

Heroes: His parents, his wife, Egon Brenner, Irwin Jacobs, Jack Wozencraft

Favorite books: The March of Folly: From Troy to Vietnam (1984) by Barbara W. Tuchman; The Two-Ocean War: A Short History of the United States Navy in the Second World War (1963) by Samuel Eliot Morison

Favorite movies: The Day the Earth Stood Still (1951), Casablanca (1942)

Favorite kind of music: Opera, operatic musicals

Favorite TV shows: Golf, tennis, football, soccer—basically any sports show

Favorite food: Chinese that he cooks himself, as taught to him by Franklin Kuo, codeveloper of ALOHAnet at the University of Hawaii

Favorite restaurants: Le Bernardin, New York City, and L’Auberge Chez Francois, Great Falls, Va.

Leisure activities past and present: Skiing, whitewater canoeing, tennis, golf, cooking

Key organizational memberships: IEEE, Association for Computing Machinery (ACM), the U.S. National Academies of Sciences and Engineering, the Marconi Society

Major awards: IEEE Medal of Honor “for pioneering technical and leadership contributions in packet communication technologies and foundations of the Internet,” the Presidential Medal of Freedom, the National Medal of Technology and Innovation, the Queen Elizabeth Prize for Engineering, the Japan Prize, the Prince of Asturias Award

Kahn ended up “working on it” for the next half century. And he is still involved in networking research today.

It is for this work on packet communication technologies—as part of the project that became the ARPANET and in the foundations of the Internet—that Kahn is being awarded the 2024 IEEE Medal of Honor.

The ARPANET Is Born

Kahn wasn’t the only one thinking about connecting disparate computers in the 1960s. In 1965, Larry Roberts, then at the MIT Lincoln Laboratory, connected one computer in Massachusetts to another in California over a telephone line. Bob Taylor, then at the Advanced Research Projects Agency (ARPA), got interested in connecting computers, in part to save the organization money by getting the expensive computers it funded at universities and research organizations to share their resources over a packet-switched network. This method of communication involves cutting up data files into blocks and reassembling them at their destination. It allows each fragment to take a variety of paths across a network and helps mitigate any loss of data, because individual packets can easily be resent.
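
A toy model (illustrative Python, not the ARPANET’s actual protocol) shows why numbered packets make a lossy, multipath network workable:

```python
import random

def packetize(data: bytes, size: int = 8):
    """Split data into numbered packets so the receiver can reorder them."""
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def deliver(packets, loss=0.2):
    """Toy lossy network: drops some packets, delivers the rest out of order."""
    survivors = [p for p in packets if random.random() > loss]
    random.shuffle(survivors)
    return survivors

message = b"packets may arrive out of order or not at all"
sent = packetize(message)
received = {seq: chunk for seq, chunk in deliver(sent)}
for seq, chunk in sent:                  # resend only the lost fragments
    if seq not in received:
        received[seq] = chunk
reassembled = b"".join(received[seq] for seq in sorted(received))
assert reassembled == message
```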

Taylor’s project—the ARPANET—would be far more than theoretical. It would ultimately produce the world’s first operational packet network linking distributed interactive computers.

Meanwhile, over at BBN, Kahn intended to spend a couple of years in industry so he could return to academia with some real-world experience and ideas for future research.

“I wasn’t hired to do anything in particular,” Kahn says. “They were just accumulating people who they thought could contribute. But I had come from the conceptual side of the world. The people at BBN viewed me as other.”

Kahn didn’t know much about computers at the time—his Ph.D. thesis involved signal processing. But he did know something about communication networks. After earning a bachelor’s degree in electrical engineering from City College of New York in 1960, Kahn had joined Bell Telephone Laboratories, working at its headquarters in Manhattan, where he helped to analyze the overall architecture and performance of the Bell telephone system. That involved conceptualizing what the network needed to do, developing overall plans, and handling the mathematical calculations related to the architecture as implemented, Kahn recalls.

“We would figure out things like: Do we need more lines between Denver and Chicago?” he says.

Kahn stayed at Bell Labs for about nine months; to his surprise, a graduate fellowship came through that he decided to accept. He was off to Princeton University in the autumn of 1961, returning to Bell Labs for the next few summers.

So, when Kahn was at BBN a few years later, he knew enough to realize that you wouldn’t want to use the telephone network as the basis of a computer network: Dial-up connections took 10 or 20 seconds to go through, the bandwidth was low, the error rate was high, and you could connect to only one machine at a time.

Other than generally thinking that it would be nice if computers could talk to one another, Kahn didn’t give much thought to applications.

“If you were engineering the Bell System,” he says, “you weren’t trying to figure out who in San Francisco is going to say what to whom in New York. You were just trying to figure out how to enable conversations.”

A black and white graduation portrait of a man in a cap and gown. Bob Kahn graduated from high school in 1955. Bob Kahn

Kahn wrote a series of reports laying out how he thought a network of computers could be implemented. They landed on the desk of Jerry Elkind, a BBN vice president who later joined Xerox PARC. And Elkind told Kahn about ARPA’s interest in computer networking.

“I didn’t really know what ARPA was, other than I had seen the name,” Kahn says. Elkind told him to send his reports to Larry Roberts, the recently hired program manager for ARPA’s networking project.

“The next thing I know,” Kahn says, “there’s an RFQ [request for quotation] from ARPA for building a four-node net.” Kahn, still the consummate academic, hadn’t thought he’d have to do much beyond putting his thoughts down on paper. “It never dawned on me that I’d actually get involved in building it,” he says.

Kahn handled the technical portion of BBN’s proposal, and ARPA awarded BBN the four-node-network contract in January of 1969. The nodes rolled out later that year: at UCLA in September; the Stanford Research Institute (SRI) in October; the University of California, Santa Barbara, in November; and the University of Utah in December.

Kahn postponed his planned return to MIT and continued to work on expanding this network. In October 1972, the ARPANET was publicly unveiled at the first meeting of the International Conference on Computer Communications, in Washington, D.C.

“I was pretty sure it would work,” Kahn says, “but it was a big event. There were 30 or 40 nodes on the ARPANET at the time. We put 40 different kinds of terminals in the [Washington Hilton] ballroom, and people could walk around and try this terminal, that terminal, which might connect to MIT, and so forth. You could use Doug Engelbart’s NLS [oN-Line System] at SRI and manipulate a document, or you could go onto a BBN computer that demonstrated air-traffic control, showing an airplane leaving one airport, which happened to be on a computer in one place, and landing at another airport, which happened to be on a computer in another place.”

The demos, he recalled, ran 24 hours a day for nearly a week. The reaction, he says, “was ‘Oh my God, this is amazing’ for everybody, even people who worried about how it would affect their businesses.”

Goodbye BBN, Hello DARPA

Kahn officially left BBN the day after the demo concluded to join DARPA (the agency having recently added the word “Defense” to its name). He felt he’d done what he could on networking and was ready for a new challenge.

“They hired me to run a hundred-million-dollar program on automated manufacturing. It was an opportunity of a lifetime, to get on the factory floor, to figure out how to distribute processing, distribute artificial intelligence, use distributed sensors.”

A formal black and white portrait of a man wearing a suit and tie. Bob Kahn served on the MIT faculty from 1964 to 1966. Bob Kahn

Soon after he arrived at DARPA, Congress pulled the plug on funding for the proposed automated-manufacturing effort. Kahn shrugged his shoulders and figured he’d go back to MIT. But Roberts asked Kahn to stay. Kahn did, but rather than work on ARPANET he focused on developing packet radio, packet satellite, and even, he says, packetizing voice, a technology that led to VoIP (Voice over Internet Protocol) today.

Getting those new networks up and running wasn’t always easy. Irwin Jacobs, who had just cofounded Linkabit and later cofounded Qualcomm, worked on the project. He recalls traveling through Europe with Kahn, trying to convince organizations to become part of the network.

“We visited three PTTs [postal, telegraph, and telephone services],” Jacobs said, “in Germany, in France, and in the U.K. The reactions were all the same. They were very friendly, they gave us the morning to explain packet switching and what we were thinking of doing, then they would serve us lunch and throw us out.” But the two of them kept at it.

“We took a little hike one day,” Jacobs says. “There was a steep trail that went up the side of a fjord, water coming down the opposite side. We came across an old man, casting a line into the stream rushing downhill. He said he was fishing for salmon, and we laughed—what were his chances? But as we walked uphill, he yanked on his rod and pulled out a salmon.” The Americans were impressed with his determination.

“You have to have confidence in what you are trying to do,” Jacobs says. “Bob had that. He was able to take rejection and keep persisting.”

Ultimately, a government laboratory in Norway, the Norwegian Defence Research Establishment, and a laboratory at University College London came on board—enough to get the satellite network up and running.

And Then Came the Internet

With the ARPANET, packet-radio, and packet-satellite networks all operational, it was clear to Kahn that the next step would be to connect them. He knew that the ARPANET design all by itself wouldn’t be useful for bringing together these disparate networks.

“Number one,” he says, “the original ARPANET protocols required perfect delivery, and if something didn’t get through and you didn’t get acknowledgment, you kept trying until it got through. That’s not going to work if you’re in a noisy environment, if you’re in a tunnel, if you’re behind a mountain, or if somebody’s jamming you. So I wanted something that didn’t require perfect communication.”

“Number two,” he continues, “you wanted something that didn’t have to wait for everything in a message to get through before the next message could get through.

“And you had no way in the ARPANET protocols for telling a destination what to do with the information when it got there. If a router got a packet and it wasn’t for another node on the ARPANET, it would assume ‘Oh, must be for me.’ It had nowhere else to send it.”


“Vint, as a computer scientist, thought of things in terms of bits and computer programs. As an electrical engineer, I thought about signals and bandwidth and the nondigital side of the world.”—Bob Kahn

He approached Vint Cerf, then an assistant professor at Stanford University, who had been involved with Kahn in testing the ARPANET during its development, and asked him to collaborate.

“Vint, as a computer scientist, thought of things in terms of bits and computer programs. As an electrical engineer, I thought about signals and bandwidth and the nondigital side of the world. We brought together different sets of talents,” Kahn says.

“Bob came out to Stanford to see me in the spring of 1973 and raised the problem of multiple networks,” Cerf recalls. “He thought they should have a set of rules that allowed them to be autonomous but interact with each other. He called it internetworking.”

“He’d already given this serious thought,” Cerf continues. “He wanted SRI to host the operations of the packet-radio network, and he had people in the Norwegian defense-research establishment working on the packet-satellite network. He asked me how we could make it so that a host on any network could communicate with another in a standardized way.”

Cerf was in.

The two met regularly over the next six months to work on “the internetworking problem.” Between them, they made some half a dozen cross-country trips and also met one-on-one whenever they found themselves attending the same conference. In July 1973, they decided it was time to commit their ideas to paper.

“I remember renting a conference room at the Cabana Hyatt in Palo Alto,” Kahn says. The two planned to sequester themselves there in August and write until they were done. Kahn says it took a day; Cerf remembers it as two, or at least a day and a half. In any case, they got it done in short order.

Cerf took the first crack at it. “I sat down with my yellow pad of paper,” he says. “And I couldn’t figure out where to start.”

“I went out to pay for the conference room,” Kahn says. “When I came back Vint was sitting there with the pencil in his hand—and not a single word on the paper.”

Kahn admits that the task wasn’t easy. “If you tried to describe the United States government,” he says, “what would you say first? It’s the buildings, it’s the people, it’s the Constitution. Do you talk about Britain? Do you talk about Indians? Where do you start?”

In 1997, President Bill Clinton [right] presented the National Medal of Technology to Bob Kahn [center] and Vint Cerf [left]. Bob Kahn

Kahn took the pencil from Cerf and started writing. “That’s his style,” Cerf says, “write as much as you can and edit later. I tend to be more organized, to start with an outline.”

“I told him to go away,” Kahn says, “and I wrote the first eight or nine pages. When Vint came back, he looked at what I had done and said, ‘Okay, give me the pencil.’ And he wrote the next 20 or 30 pages. And then we went back and forth.”

Finally, Cerf walked off with the handwritten version to give to his secretary to type. When she finished, he told her to throw that original draft away. “Historians have been mad at me ever since,” Cerf says.

“It might be worth a fortune today,” Kahn muses. The resulting paper, published in the IEEE Transactions on Communications in 1974, laid the foundation of the Internet as we now know it. It introduced the Transmission Control Protocol, later split into two parts, TCP and the Internet Protocol, and now known collectively as TCP/IP.

A New World on an Index Card

A key to making this network of networks work was the Internet Protocol (IP) addressing system. Every new host coming onto the network required a new IP address. These numerical labels uniquely identify computers and are used for routing packets to their locations on the network.

Initially, Kahn assigned the network part of the IP addresses himself, keeping a record of who had been allotted what set of numbers on a single index card he carried in his shirt pocket. When that card began to fill up in the late ’70s, he decided it was time to turn over the task to others. It became the responsibility of Jon Postel, and subsequently that of the Internet Assigned Numbers Authority (IANA) at the University of Southern California. IANA today is part of ICANN, the Internet Corporation for Assigned Names and Numbers.
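
For a concrete sense of what the “network part” on that index card meant, here is a tiny illustration using Python’s standard ipaddress module; the particular block and address are invented for the example.

import ipaddress

net = ipaddress.ip_network("10.0.0.0/8")       # the network prefix, allotted centrally
addr = ipaddress.ip_address("10.0.5.17")       # a host address, assigned locally
print(addr in net)                             # True: this host lives on network 10
print(net.network_address, net.num_addresses)  # 10.0.0.0 16777216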

Bob Kahn and Vint Cerf visited Yellowstone National Park together in the early 2000s. Bob Kahn

Kahn moved up the DARPA ladder, to chief scientist, deputy director, and, in 1979, director of the Information Processing Techniques Office. He stayed in that last role until late 1985. At DARPA, in addition to his networking efforts, he launched the VLSI [very-large-scale integration] Architecture and Design Project and the billion-dollar Strategic Computing Initiative.

In 1985, with political winds shifting and government research budgets about to shrink substantially, Kahn left DARPA to form a nonprofit dedicated to fostering research on new infrastructures, including designing and prototyping networks for computing and communications. He established it as the Corporation for National Research Initiatives (CNRI).

Kahn reached out to industry for funding, making it clear that, as a nonprofit, CNRI intended to make its research results open to all. Bell Atlantic, Bellcore, Digital Equipment Corp., IBM, MCI, NYNEX, Xerox, and others stepped up with commitments that totaled over a million dollars a year for several years. Kahn also secured funding from the U.S. National Science Foundation to build testbeds demonstrating technology and applications for computer networks running at speeds of at least a gigabit. CNRI also obtained U.S. government funding to create a secretariat for the Internet Activities Board, an effort that eventually led to the establishment of the Internet Engineering Task Force, which has helped evolve Internet protocols and standards. CNRI ran the secretariat for about 18 years.

Cerf joined Kahn at CNRI about six months after it started. “We were thinking about applications of the Internet,” Cerf says. “We were interested in digital libraries, as were others.” Kahn and Cerf sought support for such work, and DARPA again came through, funding CNRI to undertake a research effort involving building and linking digital libraries at universities.

They also began working on the concept of “Knowbots,” mobile software programs that could collect and store information to be used to handle distributed tasks on a network.

As part of that digital library project, Kahn collaborated with Robert Wilensky at the University of California, Berkeley, on a paper called “A Framework for Distributed Digital Object Services,” published in the International Journal on Digital Libraries in 2006.

The Digital Object Emerges

Out of this work came the idea that today forms the basis of much of Kahn’s current efforts: digital objects, also known as digital entities. A digital object is a sequence of bits, or a set of such sequences, having a unique identifier. A digital object may incorporate a wide variety of information—documents, movies, software programs, wills, and even cryptocurrency. The concept of a digital object, together with distributed repositories, metadata registries, and a decentralized identifier resolution system, forms the digital-object architecture. From its identifier, a digital object can be located even if it moves to a different place on the net. Kahn’s collaborator on much of this work is his wife, Patrice Lyons, a copyright and communications lawyer.

Initially, CNRI maintained the registry of Digital Object Identifier (DOI) records. Then those came to be kept locally, and CNRI maintained just the registry of prefix records. In 2014, CNRI handed off that responsibility to a newly formed international body, the DONA Foundation in Geneva. Kahn serves as chair of the DONA board. The organization uses multiple distributed administrators to operate prefix registries. One, the International DOI Foundation, has handled close to 100 billion identifiers to date. The DOI system is used by a host of publishers, including IEEE, as well as other organizations to manage their digital assets.
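
The resolution step can be tried from any machine today. The short sketch below asks the public proxy at doi.org where the identifier for the 1974 Cerf-Kahn paper currently points; the endpoint and DOI are real public services, but the exact JSON field names shown should be treated as assumptions of this sketch.

import json
import urllib.request

doi = "10.1109/TCOM.1974.1092259"  # "A Protocol for Packet Network Intercommunication"
with urllib.request.urlopen(f"https://doi.org/api/handles/{doi}") as resp:
    record = json.load(resp)

# A handle record holds typed values; the URL entry gives the object's
# current location, which can change without breaking the identifier.
for value in record.get("values", []):
    if value.get("type") == "URL":
        print(value["data"]["value"])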

A plaque commemorating the ARPANET now stands in front of the Arlington, Va., headquarters of the Defense Advanced Research Projects Agency (DARPA). Bob Kahn

Kahn sees this current effort as a logical extension of the work he did on the ARPANET and then the Internet. “It’s all about how we use the Internet to manage information,” he says.

Kahn, now 85, works more than five days a week and has no intention of slowing down. The Internet, he says, is still in its startup phase. Why would he step back now?

“I once had dinner with [historian and author] David McCullough,” Kahn explains. Referring to the 1974 paper he wrote with Cerf, he says, “I told him that if I were sitting in the audience at a meeting, people wouldn’t say ‘Here’s what the writers of this paper really meant,’ because I would get up and say, ‘Well, we wrote that and…’”

“I asked McCullough, ‘When do you consider the end of the beginning of America?’” After some discussion, McCullough put the date at 4 July 1826, when both John Adams and Thomas Jefferson passed away.

Kahn agreed that their deaths marked the end of the country’s startup phase, because Adams and Jefferson never stopped worrying about the country that they helped create.

“It was such an important thing that they were doing that their lives were completely embedded in it,” Kahn says. “And the same is true for me and the Internet.”

This article appears in the May 2024 print issue as “The Great Interconnector.”

  • ✇IEEE Spectrum
  • What Software Engineers Need to Know About AI Jobs, by Tekla S. Perry

What Software Engineers Need to Know About AI Jobs

16 April 2024 at 16:08


AI hiring has been growing at least slightly in most regions around the world, with Hong Kong leading the pack; however, AI careers are losing ground compared with the overall job market, according to the 2024 AI Index Report. This annual effort by Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) draws from a host of data to understand the state of the AI industry today.

Stanford’s AI Index looks at the performance of AI models, investment, research, and regulations. But tucked within the 385 pages of the 2024 Index are several insights into AI career trends, based on data from LinkedIn and Lightcast, a labor market analytics firm.

Here’s a quick look at that analysis, in four charts:

  • Overall hiring is up—a little
  • But don’t get too excited—as a share of overall labor demand, AI jobs are slipping
  • Python is still the best skill to have
  • Machine learning loses luster

