
Video Friday: Silly Robot Dog Jump



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

The title of this video is “Silly Robot Dog Jump” and that’s probably more than you need to know.

[ Deep Robotics ]

It’ll be great when robots are reliably autonomous, but until they get there, collaborative capabilities are a must.

[ Robust AI ]

I am so INCREDIBLY EXCITED for this.

[ IIT Instituto Italiano di Tecnologia ]

In this 3-minute-long one-take video, the LimX Dynamics CL-1 takes on the challenge of continuously loading heavy objects among shelves in a simulated warehouse, showcasing the advantages of the general-purpose form factor of humanoid robots.

[ LimX Dynamics ]

Birds, bats and many insects can tuck their wings against their bodies when at rest and deploy them to power flight. Whereas birds and bats use well-developed pectoral and wing muscles, how insects control their wing deployment and retraction remains unclear because this varies among insect species. Here we demonstrate that rhinoceros beetles can effortlessly deploy their hindwings without necessitating muscular activity. We validated the hypothesis using a flapping microrobot that passively deployed its wings for stable, controlled flight and retracted them neatly upon landing, demonstrating a simple, yet effective, approach to the design of insect-like flying micromachines.

[ Nature ]

Agility Robotics’ CTO, Pras Velagapudi, talks about data collection, and specifically about the different kinds we collect from our real-world robot deployments and generally what that data is used for.

[ Agility Robotics ]

Robots that try really hard but are bad at things are utterly charming.

[ University of Tokyo JSK Lab ]

The DARPA Triage Challenge unsurprisingly has a bunch of robots in it.

[ DARPA ]

The Cobalt security robot has been around for a while, but I have to say, the design really holds up—it’s a good looking robot.

[ Cobalt AI ]

All robots that enter elevators should be programmed to gently sway back and forth to the elevator music. Even if there’s no elevator music.

[ Somatic ]

ABB Robotics and the Texas Children’s Hospital have developed a groundbreaking lab automation solution using ABB’s YuMi® cobot to transfer fruit flies (Drosophila melanogaster) used in studies aimed at developing new drugs for neurological conditions such as Alzheimer’s, Huntington’s, and Parkinson’s.

[ ABB ]

Extend Robotics is building embodied AI that enables highly flexible automation for real-world physical tasks. The system features an intuitive immersive interface enabling teleoperation, supervision, and training of AI models.

[ Extend Robotics ]

The recorded livestream of RSS 2024 is now online, in case you missed anything.

[ RSS 2024 ]

Video Friday: The Secrets of Shadow Robot’s New Hand



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

At ICRA 2024, in Yokohama, Japan, last May, we sat down with the director of Shadow Robot, Rich Walker, to talk about the journey toward developing its newest model. Designed for reinforcement learning, the hand is extremely rugged, has three fingers that act like thumbs, and has fingertips that are highly sensitive to touch.

[ IEEE Spectrum ]

Food Angel is a food delivery robot to help with the problems of food insecurity and homelessness. Utilizing autonomous wheeled robots for this application may seem to be a good approach, especially with a number of successful commercial robotic delivery services. However, besides technical considerations such as range, payload, operation time, autonomy, etc., there are a number of important aspects that still need to be investigated, such as how the general public and the receiving end may feel about using robots for such applications, or human-robot interaction issues such as how to communicate the intent of the robot to the homeless.

[ RoMeLa ]

The UKRI FLF RoboHike team, from the Robot Perception and Learning lab at UCL Computer Science, together with Forestry England, demonstrates the ANYmal robot helping to preserve the cultural heritage of a historic mine in the Forest of Dean, Gloucestershire, UK.

This clip is from a reboot of the British TV show “Time Team.” If you’re not already a fan of “Time Team,” let me just say that it is one of the greatest retro reality TV shows ever made, where actual archaeologists wander around the United Kingdom and dig stuff up. If they can find anything. Which they often can’t. And also it has Tony Robinson (from “Blackadder”), who runs everywhere for some reason. Go to Time Team Classics on YouTube for 70+ archived episodes.

[ UCL RPL ]

UBTECH’s humanoid robot Walker S Lite worked in Zeekr’s intelligent factory for 21 consecutive days, completing handling tasks at the loading workstation and assisting employees with logistics work.

[ UBTECH ]

Current visual navigation systems often treat the environment as static, lacking the ability to adaptively interact with obstacles. This limitation leads to navigation failure when encountering unavoidable obstructions. In response, we introduce IN-Sight, a novel approach to self-supervised path planning, enabling more effective navigation strategies through interaction with obstacles.

[ ETH Zurich paper / IROS 2024 ]

When working on autonomous cars, sometimes it’s best to start small.

[ University of Pennsylvania ]

MIT MechE researchers introduce an approach called SimPLE (Simulation to Pick Localize and placE), a method of precise kitting, or pick and place, in which a robot learns to pick, regrasp, and place objects using the object’s computer-aided design (CAD) model, and all without any prior experience or encounters with the specific objects.

[ MIT ]

Staff, students (and quadruped robots!) from UCL Computer Science wish the Great Britain athletes the best of luck this summer in the Olympic Games & Paralympics.

[ UCL Robotics Institute ]

Walking in tall grass can be hard for robots, because they can’t see the ground that they’re actually stepping on. Here’s a technique to solve that, published in Robotics and Automation Letters last year.

[ ETH Zurich Robotic Systems Lab ]

There is no such thing as excess batter on a corn dog, and there is also no such thing as a defective donut. And apparently, making Kool-Aid drink pouches is harder than it looks.

[ Oxipital AI ]

Unitree has open-sourced its software to teleoperate humanoids in VR for training-data collection.

[ Unitree / GitHub ]

Nothing more satisfying than seeing point-cloud segments wiggle themselves into place, and CSIRO’s Wildcat SLAM does this better than anyone.

[ IEEE Transactions on Robotics ]

A lecture by Mentee Robotics CEO Lior Wolf, on Mentee’s AI approach.

[ Mentee Robotics ]

Figure 02 Robot Is a Sleeker, Smarter Humanoid



Today, Figure is introducing the newest, slimmest, shiniest, and least creatively named next generation of its humanoid robot: Figure 02. According to the press release, Figure 02 is the result of “a ground-up hardware and software redesign” and is “the highest performing humanoid robot,” which may even be true for some arbitrary value of “performing.” Also notable is that Figure has been actively testing robots with BMW at a manufacturing plant in Spartanburg, S.C., where the new humanoid has been performing “data collection and use case training.”

The rest of the press release is pretty much, “Hey, check out our new robot!” And you’ll get all of the content in the release by watching the videos. What you won’t get from the videos is any additional info about the robot. But we sent along some questions to Figure about these videos, and have a few answers from Michael Rose, director of controls, and Vadim Chernyak, director of hardware.


First, the trailer:

How many parts does Figure 02 have, and is this all of them?

Figure: A couple hundred unique parts and a couple thousand parts total. No, this is not all of them.

Does Figure 02 make little Figure logos with every step?

Figure: If the surface is soft enough, yes.

Swappable legs! Was that hard to do, or easier to do because you only have to make one leg?

Figure: We chose to make swappable legs to help with manufacturing.

Is the battery pack swappable too?

Figure: Our battery is swappable, but it is not a quick swap procedure.

What’s that squishy-looking stuff on the back of Figure 02’s knees and in its elbow joints?

Figure: These are soft stops which limit the range of motion in a controlled way and prevent robot pinch points.

Where’d you hide that thumb motor?

Figure: The thumb is now fully contained in the hand.

Tell me about the “skin” on the neck!

Figure: The skin is a soft fabric which is able to keep a clean seamless look even as the robot moves its head.

And here’s the reveal video:

When Figure 02’s head turns, its body turns too, and its arms move. Is that necessary, or aesthetic?

Figure: Aesthetic.

The upper torso and shoulders seem very narrow compared to other humanoids. Why is that?

Figure: We find it essential to package the robot to be of similar proportions to a human. This allows us to complete our target use cases and fit into our environment more easily.

What can you tell me about Figure 02’s walking gait?

Figure: The robot is using a model predictive controller to determine footstep locations and forces required to maintain balance and follow the desired robot trajectory.
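For the curious, the physics at the core of that answer is sketchable in a few lines. Below is a toy footstep chooser based on the linear inverted pendulum “capture point,” which is the standard starting point for controllers like the one Figure describes; the parameters, function names, and the velocity-offset heuristic are illustrative assumptions on my part, not Figure’s actual controller.

```python
import math

# Toy sketch of capture-point footstep selection for a walking humanoid.
# All numbers here are illustrative assumptions, not Figure's.

G = 9.81           # gravity, m/s^2
COM_HEIGHT = 0.9   # assumed center-of-mass height, m
OMEGA = math.sqrt(G / COM_HEIGHT)  # linear-inverted-pendulum frequency, 1/s

def capture_point(com_pos: float, com_vel: float) -> float:
    """Where the robot must step to bring itself to a complete stop."""
    return com_pos + com_vel / OMEGA

def next_footstep(com_pos: float, com_vel: float, desired_vel: float) -> float:
    """Step short of the capture point by desired_vel / OMEGA, a common
    heuristic that keeps the robot walking at roughly the desired speed
    instead of stopping dead."""
    return capture_point(com_pos, com_vel) - desired_vel / OMEGA

if __name__ == "__main__":
    # CoM drifting a bit fast at 0.8 m/s; place a foot to settle back to 0.5 m/s.
    print(f"step to x = {next_footstep(0.0, 0.8, 0.5):.3f} m")  # ~0.091 m ahead
```

A real model predictive controller optimizes footstep locations and contact forces over a whole horizon of upcoming steps rather than one step at a time, but the capture-point relation above is the piece of physics doing the work.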

How much runtime do you get from 2.25 kilowatt-hours doing the kinds of tasks that we see in the video?

Figure: We are targeting a 5-hour run time for our product.
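Those two answers also pin down the power budget. Here’s the quick back-of-envelope version, with the caveat that it assumes the entire pack capacity is usable, which battery management systems generally don’t allow:

```python
# Back-of-envelope: a 2.25 kWh pack and a 5-hour runtime target imply
# an average draw of 2.25 kWh / 5 h = 450 W, assuming (optimistically)
# that the full pack capacity is usable.
pack_energy_kwh = 2.25
target_runtime_h = 5.0
print(f"implied average draw: {pack_energy_kwh / target_runtime_h * 1000:.0f} W")
```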


Slick, but also a little sinister? (Image: Figure)

This thing looks slick. I’d say that it’s maybe a little too far on the sinister side for a robot intended to work around humans, but the industrial design is badass and the packaging is excellent, with the vast majority of the wiring now integrated within the robot’s skins and flexible materials covering joints that are typically left bare. Figure, if you remember, raised a US $675 million Series B that valued the company at $2.6 billion, and somehow the look of this robot seems appropriate to that.

I do still have some questions about Figure 02, such as where the interesting foot design came from and whether a 16-degree-of-freedom hand is really worth it in the near term. It’s also worth mentioning that Figure seems to have a fair number of Figure 02 robots running around—at least five units at its California headquarters, plus potentially a couple more at the BMW Spartanburg manufacturing facility.

I also want to highlight this boilerplate at the end of the release: “our humanoid is designed to perform human-like tasks within the workforce and in the home.” We are very, very far away from a humanoid robot in the home, but I appreciate that it’s still an explicit goal that Figure is trying to achieve. Because I want one.

Video Friday: UC Berkeley’s Little Humanoid



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

We introduce Berkeley Humanoid, a reliable and low-cost mid-scale humanoid research platform for learning-based control. Our lightweight, in-house-built robot is designed specifically for learning algorithms with low simulation complexity, anthropomorphic motion, and high reliability against falls. Capable of omnidirectional locomotion and withstanding large perturbations with a compact setup, our system aims for scalable, sim-to-real deployment of learning-based humanoid systems.

[ Berkeley Humanoid ]

This article presents Ray, a new type of audio-animatronic robot head. All the mechanical structure of the robot is built in one step by 3-D printing... This simple, lightweight structure and the separate tendon-based actuation system underneath allow for smooth, fast motions of the robot. We also develop an audio-driven motion generation module that automatically synthesizes natural and rhythmic motions of the head and mouth based on the given audio.

[ Paper ]

CSAIL researchers introduce a novel approach allowing robots to be trained in simulations of scanned home environments, paving the way for customized household automation accessible to anyone.

[ MIT News ]

Okay, sign me up for this.

[ Deep Robotics ]

NEURA Robotics is among the first to join the early-access NVIDIA Humanoid Robot Developer Program.

This could be great, but there’s an awful lot of jump cuts in that video.

[ Neura ] via [ NVIDIA ]

I like that Unitree’s tagline in the video description here is “Let’s have fun together.”

Is that “please don’t do dumb stuff with our robots” at the end of the video new...?

[ Unitree ]

NVIDIA CEO Jensen Huang presented a major breakthrough on Project GR00T with WIRED’s Lauren Goode at SIGGRAPH 2024. In a two-minute demonstration video, NVIDIA explained a systematic approach they discovered to scale up robot data, addressing one of the most challenging issues in robotics.

[ Nvidia ]

In this research, we investigated the innovative use of a manipulator as a tail in quadruped robots to augment their physical capabilities. Previous studies have primarily focused on enhancing various abilities by attaching robotic tails that function solely as tails on quadruped robots. While these tails improve the performance of the robots, they come with several disadvantages, such as increased overall weight and higher costs. To mitigate these limitations, we propose the use of a 6-DoF manipulator as a tail, allowing it to serve both as a tail and as a manipulator.

[ Paper ]

In this end-to-end demo, we showcase how MenteeBot transforms the shopping experience for individuals, particularly those using wheelchairs. Through discussions with a global retailer, MenteeBot has been designed to act as the ultimate shopping companion, offering a seamless, natural experience.

[ Menteebot ]

Nature Fresh Farms, based in Leamington, Ontario, is one of North America’s largest greenhouse farms growing high-quality organics, berries, peppers, tomatoes, and cucumbers. In 2022, Nature Fresh partnered with Four Growers, a FANUC Authorized System Integrator, to develop a robotic system equipped with AI to harvest tomatoes in the greenhouse environment.

[ FANUC ]

Contrary to what you may have been led to believe by several previous Video Fridays, WVUIRL’s open source rover is quite functional, most of the time.

[ WVUIRL ]

Honeybee Robotics, a Blue Origin company, is developing Lunar Utility Navigation with Advanced Remote Sensing and Autonomous Beaming for Energy Redistribution, also known as LUNARSABER. In July 2024, Honeybee Robotics captured LUNARSABER’s capabilities during a demonstration of a scaled prototype.

[ Honeybee Robotics ]

Bunker Mini is a compact tracked mobile robot specifically designed to tackle demanding off-road terrains.

[ AgileX ]

In this video we present results of our lab from the latest field deployments conducted in the scope of the Digiforest EU project, in Stein am Rhein, Switzerland. Digiforest brings together various partners working on aerial and legged robots, autonomous harvesters, and forestry decision-makers. The goal of the project is to enable autonomous robot navigation, exploration, and mapping, both below and above the canopy, to create a data pipeline that can support and enhance foresters’ decision-making systems.

[ ARL ]

A Robot Dentist Might Be a Good Idea, Actually



I’ll be honest: when I first got this pitch for an autonomous robot dentist, I was like: “Okay, I’m going to talk to these folks and then write an article, because there’s no possible way for this thing to be anything but horrific.” Then they sent me some video that was, in fact, horrific, in the way that only watching a high-speed drill remove most of a tooth can be.

But fundamentally this has very little to do with robotics, because getting your teeth drilled just sucks no matter what. So the real question we should be asking is this: How can we make a dental procedure as quick and safe as possible, to minimize that inherent horrific-ness? And the answer, surprisingly, may be this robot from a startup called Perceptive.

Perceptive is today announcing two new technologies that I very much hope will make future dental experiences better for everyone. While it’s easy to focus on the robot here (because, well, it’s a robot), the reason the robot can do what it does (which we’ll get to in a minute) is because of a new imaging system. The handheld imager, which is designed to operate inside of your mouth, uses optical coherence tomography (OCT) to generate a 3D image of the inside of your teeth, and even all the way down below the gum line and into the bone. This is vastly better than the 2D or 3D x-rays that dentists typically use, both in resolution and positional accuracy.

Perceptive’s handheld optical coherence tomography imager scans for tooth decay. (Image: Perceptive)

X-rays, it turns out, are actually really bad at detecting cavities; Perceptive CEO Chris Ciriello tells us that their accuracy in figuring out the location and extent of tooth decay is on the order of 30 percent. In practice, this isn’t as much of a problem as it seems like it should be, because the dentist will just start drilling into your tooth and keep going until they find everything. But obviously this won’t work for a robot, where you need all of the data beforehand. That’s where the OCT comes in. You can think of OCT as similar to an ultrasound, in that it uses reflected energy to build up an image, but OCT uses light instead of sound for much higher resolution.

Perceptive’s imager can create detailed 3D maps of the insides of teeth. (Image: Perceptive)

The reason OCT has not been used for teeth before is because with conventional OCT, the exposure time required to get a detailed image is several seconds, and if you move during the exposure, the image will blur. Perceptive is instead using a structure from motion approach (which will be familiar to many robotics folks), where they’re relying on a much shorter exposure time resulting in far fewer data points, but then moving the scanner and collecting more data to gradually build up a complete 3D image. According to Ciriello, this approach can localize pathology within about 20 micrometers with over 90 percent accuracy, and it’s easy for a dentist to do since they just have to move the tool around your tooth in different orientations until the scan completes.
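If you want a feel for how lots of quick, sparse exposures add up to one detailed model, here’s a minimal sketch of the accumulation step. The structure (per-exposure point batches, a scanner pose estimated by structure from motion for each one, everything transformed into a common tooth-fixed frame) is my illustration of the general technique, not Perceptive’s actual pipeline.

```python
import numpy as np

# Minimal sketch: each short OCT exposure yields a sparse (N, 3) batch of
# points in the scanner's frame plus a pose (R, t) estimated by structure
# from motion; transforming every batch into one tooth-fixed frame lets
# the model densify as the dentist waves the scanner around.

def to_tooth_frame(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Rotate and translate an (N, 3) batch from scanner frame to tooth frame."""
    return points @ R.T + t

def accumulate_scan(exposures) -> np.ndarray:
    """exposures: iterable of (points, R, t), one triple per short exposure."""
    return np.vstack([to_tooth_frame(p, R, t) for p, R, t in exposures])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch = rng.normal(size=(50, 3))  # fake sparse surface points
    I = np.eye(3)
    # Two exposures of the same patch from slightly shifted scanner poses.
    model = accumulate_scan([(patch, I, np.zeros(3)),
                             (patch, I, np.array([0.001, 0.0, 0.0]))])
    print(model.shape)  # (100, 3): the cloud grows with every exposure
```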

Again, this is not just about collecting data so that a robot can get to work on your tooth. It’s about better imaging technology that helps your dentist identify and treat issues you might be having. “We think this is a fundamental step change,” Ciriello says. “We’re giving dentists the tools to find problems better.”

The robot is mechanically coupled to your mouth for movement compensation. (Image: Perceptive)

Ciriello was a practicing dentist in a small mountain town in British Columbia, Canada. People in such communities can have a difficult time getting access to care. “There aren’t too many dentists who want to work in rural communities,” he says. “Sometimes it can take months to get treatment, and if you’re in pain, that’s really not good. I realized that what I had to do was build a piece of technology that could increase the productivity of dentists.”

Perceptive’s robot is designed to take a dental procedure that typically requires several hours and multiple visits, and complete it in minutes in a single visit. The entry point for the robot is crown installation, where the top part of a tooth is replaced with an artificial cap (the crown). This is an incredibly common procedure, and it usually happens in two phases. First, the dentist will remove the top of the tooth with a drill. Next, they take a mold of the tooth so that a crown can be custom fit to it. Then they put a temporary crown on and send you home while they mail the mold off to get your crown made. A couple weeks later, the permanent crown arrives, you go back to the dentist, and they remove the temporary one and cement the permanent one on.

With Perceptive’s system, it instead goes like this: on a previous visit where the dentist has identified that you need a crown in the first place, you’d have gotten a scan of your tooth with the OCT imager. Based on that data, the robot will have planned a drilling path, and the crown can be made before you even arrive for the drilling, which is only possible because the precise geometry is known in advance. You arrive for the procedure, the robot does the actual drilling in maybe five minutes or so, and the perfectly fitting permanent crown is cemented into place and you’re done.

The robot is still in the prototype phase but could be available within a few years. (Image: Perceptive)

Obviously, safety is a huge concern here, because you’ve got a robot arm with a high-speed drill literally working inside of your skull. Perceptive is well aware of this.

The most important thing to understand about the Perceptive robot is that it’s physically attached to you as it works. You put something called a bite block in your mouth and bite down on it, which both keeps your mouth open and keeps your jaw from getting tired. The robot’s end effector is physically attached to that block through a series of actuated linkages, such that any motions of your head are instantaneously replicated by the end of the drill, even if the drill is moving. Essentially, your skull is serving as the robot’s base, and your tooth and the drill are in the same reference frame. Purely mechanical coupling means there’s no vision system or encoders or software required: it’s a direct physical connection so that motion compensation is instantaneous. As a patient, you’re free to relax and move your head somewhat during the procedure, because it makes no difference to the robot.

Human dentists do have some strategies for not stabbing you with a drill if you move during a procedure, like putting their fingers on your teeth and then supporting the drill on them. But this robot should be safer and more accurate than that method, because of the rigid connection leading to only a few tens of micrometers of error, even on a moving patient. It’ll move a little bit slower than a dentist would, but because it’s only drilling exactly where it needs to, it can complete the procedure faster overall, says Ciriello.

There’s also a physical counterbalance system within the arm, a nice touch that makes the arm effectively weightless. (It’s somewhat similar to the PR2 arm, for you OG robotics folks.) And the final safety measure is the dentist-in-the-loop via a foot pedal that must remain pressed or the robot will stop moving and turn off the drill.

Ciriello claims that not only is the robot able to work faster, it also will produce better results. Most restorations like fillings or crowns last about five years, because the dentist either removed too much material from the tooth and weakened it, or removed too little material and didn’t completely solve the underlying problem. Perceptive’s robot is able to be far more exact. Ciriello says that the robot can cut geometry that’s “not humanly possible,” fitting restorations on to teeth with the precision of custom-machined parts, which is pretty much exactly what they are.

Perceptive has successfully used its robot on real human patients, as shown in this sped-up footage. In reality the robot moves slightly slower than a human dentist. (Image: Perceptive)

While it’s easy to focus on the technical advantages of Perceptive’s system, dentist Ed Zuckerberg (who’s an investor in Perceptive) points out that it’s not just about speed or accuracy, it’s also about making patients feel better. “Patients think about the precision of the robot, versus the human nature of their dentist,” Zuckerberg says. It gives them confidence to see that their dentist is using technology in their work, especially in ways that can address common phobias. “If it can enhance the patient experience or make the experience more comfortable for phobic patients, that automatically checks the box for me.”

There is currently one other dental robot on the market. Called Yomi, it offers assistive autonomy for one very specific procedure for dental implants. Yomi is not autonomous, but instead provides guidance for a dentist to make sure that they drill to the correct depth and angle.

While Perceptive has successfully tested its first-generation system on humans, it’s not yet ready for commercialization. The next step will likely be what’s called a pivotal clinical trial with the FDA, and if that goes well, Ciriello estimates that the robot could be available to the public in “several years.” Perceptive has raised US $30 million in funding so far, and here’s hoping that’s enough to get them across the finish line.

Video Friday: Robot Baby With a Jet Pack



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

If the Italian Institute of Technology’s iRonCub3 looks this cool while learning to fly, just imagine how cool it will look when it actually takes off!

Hovering is in the works, but this is a really hard problem, which you can read more about in Daniele Pucci’s post on LinkedIn.

[ LinkedIn ]

Stanford Engineering and the Toyota Research Institute achieve the world’s first autonomous tandem drift. Leveraging the latest AI technology, Stanford Engineering and TRI are working to make driving safer for all. By automating a driving style used in motorsports called drifting—in which a driver deliberately spins the rear wheels to break traction—the teams have unlocked new possibilities for future safety systems.

[ TRI ]

Researchers at the Istituto Italiano di Tecnologia (Italian Institute of Technology) have demonstrated that under specific conditions, humans can treat robots as coauthors of the results of their actions. The condition that enables this phenomenon is a robot that behaves in a social, humanlike manner. Engaging in eye contact and participating in a common emotional experience, such as watching a movie, are key.

[ Science Robotics ]

If Aibo is not quite catlike enough for you, here you go.

[ Maicat ] via [ RobotStart ]

I’ve never been more excited for a sim-to-real gap to be bridged.

[ USC Viterbi ]

I’m sorry, but this looks exactly like a quadrotor sitting on a test stand.

The 12-pound Quad-Biplane combines four rotors and two wings without any control surfaces. The aircraft takes off like a conventional quadcopter and transitions to a more-efficient horizontal cruise flight, similar to that of a biplane. This combines the simplicity of a quadrotor design, providing vertical flight capability, with the cruise efficiency of a fixed-wing aircraft. The rotors are responsible for aircraft control both in vertical and forward cruise flight regimes.

[ AVFL ]

Tensegrity robots are so weird, and I so want them to be useful.

[ Suzumori Endo Lab ]

Top-performing robots need all the help they can get.

[ Team B-Human ]

And now: a beetle nearly hit by an autonomous robot.

[ WVUIRL ]

Humans possess a remarkable ability to react to unpredictable perturbations through immediate mechanical responses, which harness the visco-elastic properties of muscles to maintain balance. Inspired by this behavior, we propose a novel design of a robotic leg utilizing fiber-jammed structures as passive compliant mechanisms to achieve variable joint stiffness and damping.

[ Paper ]

I don’t know what this piece of furniture is, but your cats will love it.

[ ABB ]

This video shows a dexterous avatar humanoid robot with VR teleoperation, hand tracking, and speech recognition to achieve highly dexterous mobile manipulation. Extend Robotics is developing a dexterous remote-operation interface to enable data collection for embodied AI and humanoid robots.

[ Extend Robotics ]

I never really thought about this, but wind turbine blades are hollow inside and need to be inspected sometimes, which is really one of those jobs where you’d much rather have a robot do it.

[ Flyability ]

Here’s a full, uncut drone-delivery mission, including a package pickup from our AutoLoader—a simple, nonpowered mechanical device that allows retail partners to utilize drone delivery with existing curbside-pickup workflows.

[ Wing ]

Daniel Simu and his acrobatic robot competed in “America’s Got Talent,” and even though his robot did a very robot thing by breaking itself immediately beforehand, the performance went really well.

[ Acrobot ]

A tour of the Creative Robotics Mini Exhibition at the Creative Computing Institute, University of the Arts London.

[ UAL ]

Thanks, Hooman!

Zoox CEO Aicha Evans and cofounder and chief technology officer Jesse Levinson hosted a LinkedIn Live last week to reflect on the past decade of building Zoox and their predictions for the next 10 years of the autonomous-vehicle industry.

[ Zoox ]

iRobot’s Autowash Dock Is (Almost) Automated Floor Care



The dream of robotic floor care has always been for it to be hands-off and mind-off. That is, for a robot to live in your house that will keep your floors clean without you having to really do anything or even think about it. When it comes to robot vacuuming, that’s been more or less solved thanks to self-emptying robots that transfer debris into docking stations, which iRobot pioneered with the Roomba i7+ in 2018. By 2022, iRobot’s Combo j7+ added an intelligent mopping pad to the mix, which definitely made for cleaner floors but was also a step backwards in the sense that you had to remember to toss the pad into your washing machine and fill the robot’s clean water reservoir every time. The Combo j9+ stuffed a clean water reservoir into the dock itself, which could top off the robot with water by itself for a month.

With the new Roomba Combo 10 Max, announced today, iRobot has cut out (some of) that annoying process thanks to a massive new docking station that self-empties vacuum debris, empties dirty mop water, refills clean mop water, and then washes and dries the mopping pad, completely autonomously.


iRobot

The Roomba part of this is a mildly upgraded j7+, and most of what’s new on the hardware side here is in the “multifunction AutoWash Dock.” This new dock is a beast: It empties the robot of all of the dirt and debris picked up by the vacuum, refills the Roomba’s clean water tank from a reservoir, and then starts up a wet scrubby system down under the bottom of the dock. The Roomba deploys its dirty mopping pad onto that system, and then drives back and forth while the scrubby system cleans the pad. All the dirty water from this process gets sucked back up into a dedicated reservoir inside the dock, and the pad gets blow-dried while the scrubby system runs a self-cleaning cycle.

The dock removes debris from the vacuum, refills it with clean water, and then uses water to wash the mopping pad. (Image: iRobot)

This means that as a user, you’ve only got to worry about three things: dumping out the dirty water tank every week (if you use the robot for mopping most days), filling the clean water tank every week, and then changing out the debris every two months. That is not a lot of hands-on time for having consistently clean floors.

The other thing to keep in mind about all of these robots is that they do need relatively frequent human care if you want them to be happy and successful. That means flipping them over and getting into their guts to clean out the bearings and all that stuff. iRobot makes this very easy to do, and it’s a necessary part of robot ownership, so the dream of having a robot that you can actually forget completely is probably not achievable.

The consequence of all this convenience is a real chonker of a dock. The dock is basically furniture, and to the company’s credit, iRobot designed it so that the top surface is usable as a shelf, with access to the guts of the dock from the front, not the top. This is fine, but it’s also kind of crazy just how much these docks have expanded, especially once you factor in the front ramp that the robot drives up, which sticks out even farther.

The Roomba will detect carpet and lift its mopping pad up to prevent drips. (Image: iRobot)

We asked iRobot director of project management Warren Fernandez about whether docks are just going to keep on getting bigger forever until we’re all just living in giant robot docks, to which he said: “Are you going to continue to see some large capable multifunction docks out there in the market? Yeah, I absolutely think you will—but when does big become too big?” Fernandez says that there are likely opportunities to reduce dock size going forward through packaging efficiencies or dual-purpose components, but that there’s another option, too: Distributed docks. “If a robot has dry capabilities and wet capabilities, do those have to coexist inside the same chassis? What if they were separate?” says Fernandez.

We should mention that iRobot is not the first in the robotic floor care space to have a self-cleaning mop, and it’s also not the first to think about distributed docks, although as Fernandez explains, this is a more common approach in Asia, where you can also take advantage of home plumbing integration. “It’s a major trend in China, and starting to pop up a little bit in Europe, but not really in North America yet. How amazing could it be if you had a dock that, in a very easy manner, was able to tap right into plumbing lines for water supply and sewage disposal?”

According to Fernandez, this tends to be much easier to do in China, both because the labor cost for plumbing work is far lower than in the United States and Europe, and also because it’s fairly common for apartments in China to have accessible floor drains. “We don’t really yet see it in a major way at a global level,” Fernandez tells us. “But that doesn’t mean it’s not coming.”

The robot autonomously switches mopping mode on and off for different floor surfaces. (Image: iRobot)

We should also mention the Roomba Combo 10 Max, which includes some software updates:

  • The front-facing camera and specialized bin sensors can identify dirtier areas eight times as effectively as before.
  • The Roomba can identify specific rooms and prioritize the order they’re cleaned in, depending on how dirty they get.
  • A new cleaning behavior called “Smart Scrub” adds a back-and-forth scrubbing motion for floors that need extra oomph.

And here’s what I feel like the new software should do, but doesn’t:

  • Use the front-facing camera and bin sensors to identify dirtier areas and then autonomously develop a schedule to more frequently clean those areas.
  • Activate Smart Scrub when the camera and bin sensors recognize an especially dirty floor.

I say “should do” because the robot appears to be collecting the data that it needs to do these things but it doesn’t do them yet. New features (especially new features that involve autonomy) take time to develop and deploy, but imagine a robot that makes much more nuanced decisions about where and when to clean based on very detailed real-time data and environmental understanding that iRobot has already implemented.
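To make the first of those wishes concrete, here’s a minimal sketch of what dirt-driven scheduling could look like, assuming the robot logs which room each dirt detection came from. The room names, the weekly horizon, and the scaling rule are all hypothetical; nothing like this is exposed by iRobot’s actual app.

```python
from collections import Counter

def build_schedule(dirt_events: list[str], days: int = 7) -> dict[str, int]:
    """Map each room to cleanings per week, proportional to observed dirt.
    dirt_events: one room name per dirt detection the robot logged."""
    counts = Counter(dirt_events)
    if not counts:
        return {}
    busiest = max(counts.values())
    # The dirtiest room gets cleaned daily; everything else scales down,
    # with a floor of one cleaning per week.
    return {room: max(1, round(days * c / busiest)) for room, c in counts.items()}

if __name__ == "__main__":
    events = ["kitchen"] * 9 + ["hallway"] * 3 + ["bedroom"]
    print(build_schedule(events))
    # -> {'kitchen': 7, 'hallway': 2, 'bedroom': 1}
```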

I also appreciate that even as iRobot is emphasizing autonomy and leveraging data to start making more decisions for the user, the company is also making sure that the user has as much control as possible through the app. For example, you can set the robot to mop your floor without vacuuming first, even though if you do that, all you’re going to end up with is a much dirtier mop. Doesn’t make a heck of a lot of sense, but if that’s what you want, iRobot has empowered you to do it.

The dock opens from the front for access to the clean- and dirty-water storage and the dirt bag. (Image: iRobot)

The Roomba Combo 10 Max will be launching in August for US $1,400. That’s expensive, but it’s also how iRobot does things: A new Roomba with new tech always gets flagship status and a premium price. Sooner or later the price will come down enough that the rest of us will be able to afford one, too.

Video Friday: Robot Crash-Perches, Hugs Tree



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

Perching with winged Unmanned Aerial Vehicles has often been solved by means of complex control or intricate appendages. Here, we present a method that relies on passive wing morphing for crash-landing on trees and other types of vertical poles. Inspired by the adaptability of animals’ limbs in gripping and holding onto trees, we design dual-purpose wings that enable both aerial gliding and perching on poles.

[ Nature Communications Engineering ]

Pretty impressive to have low enough latency in controlling your robot’s hardware that it can play ping pong, although it makes it impossible to tell whether the robot or the human is the one that’s actually bad at the game.

[ IHMC ]

How to be a good robot when boarding an elevator.

[ NAVER ]

Have you ever wondered how insects are able to go so far beyond their home and still find their way? The answer to this question is not only relevant to biology but also to making the AI for tiny, autonomous robots. We felt inspired by biological findings on how ants visually recognize their environment and combine it with counting their steps in order to get safely back home.

[ Science Robotics ]
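The step-counting half of that strategy is classic path integration, and it’s simple enough to sketch: accumulate an outbound vector from step lengths and headings, then reverse it to point home. This toy version (with made-up step data) leaves out the visual-familiarity half of the actual work.

```python
import math

def home_vector(steps) -> tuple[float, float]:
    """steps: iterable of (step_length_m, heading_rad) for the outbound trip.
    Returns (distance_m, heading_rad) pointing back to the nest."""
    x = y = 0.0
    for length, heading in steps:
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    # The way home is just the negation of the accumulated outbound vector.
    return math.hypot(x, y), math.atan2(-y, -x)

if __name__ == "__main__":
    # 100 one-centimeter steps east, then 100 north.
    outbound = [(0.01, 0.0)] * 100 + [(0.01, math.pi / 2)] * 100
    d, h = home_vector(outbound)
    print(f"home: {d:.2f} m at {math.degrees(h):.0f} degrees")  # ~1.41 m at -135
```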

Team RoMeLa practices with its ARTEMIS humanoid robots, featuring Tsinghua Hephaestus (Booster Alpha). A fully autonomous humanoid robot soccer match, with the official goal of beating the human World Cup champions by the year 2050.

[ RoMeLa ]

Triangle is the most stable shape, right?

[ WVU IRL ]

We propose RialTo, a new system for robustifying real-world imitation learning policies via reinforcement learning in “digital twin” simulation environments constructed on the fly from small amounts of real-world data.

[ MIT CSAIL ]

There is absolutely no reason to watch this entire video, but Moley Robotics is still working on that robotic kitchen of theirs.

I will once again point out that the hardest part of cooking (for me, anyway) is the prep and the cleanup, and this robot still needs you to do all that.

[ Moley ]

B-Human has so far won 10 titles at the RoboCup SPL tournament. Can we make it 11 this year? Our RoboCup starts off with a banger game against HTWK Robots from Leipzig!

[ Team B-Human ]

AMBIDEX is a dual-armed robot with an innovative mechanism developed for safe coexistence with humans. Based on an innovative cable structure, it is designed to be both strong and stable.

[ NAVER ]

As NASA’s Perseverance rover prepares to ascend to the rim of Jezero Crater, its team is investigating a rock unlike any that they’ve seen so far on Mars. Deputy project scientist Katie Stack Morgan explains why this rock, found in an ancient channel that funneled water into the crater, could be among the oldest that Perseverance has investigated—or the youngest.

[ NASA ]

We present a novel approach for enhancing human-robot collaboration using physical interactions for real-time error correction of large language model (LLM) parameterized commands.

[ Figueroa Robotics Lab ]

Husky Observer was recently used to autonomously inspect solar panels at a large solar panel farm. As part of its mission, the robot navigated rows of solar panels, stopping to inspect areas with its integrated thermal camera. Images were taken by the robot and enhanced to detect potential “hot spots” in the panels.

[ Clearpath Robotics ]

Most of the time, robotic workcells contain just one robot, so it’s cool to see a pair of them collaborating on tasks.

[ Leverage Robotics ]

Thanks, Roman!

Meet Hydrus, the autonomous underwater drone revolutionising underwater data collection by eliminating the barriers to its entry. Hydrus ensures that even users with limited resources can execute precise and regular subsea missions to meet their data requirements.

[ Advanced Navigation ]

Those adorable Disney robots have finally made their way into a paper.

[ RSS 2024 ]

Robot Dog Cleans Up Beaches With Foot-Mounted Vacuums



Cigarette butts are the second most common undisposed-of litter on Earth—of the six trillion-ish cigarettes inhaled every year, it’s estimated that over 4 trillion of the butts are just tossed onto the ground, each one leaching over 700 different toxic chemicals into the environment. Let’s not focus on the fact that all those toxic chemicals are also going into people’s lungs, and instead talk about the ecosystem damage that they can do and also just the general grossness of having bits of sucked-on trash everywhere. Ew.

Preventing those cigarette butts from winding up on the ground in the first place would be the best option, but it would require a pretty big shift in human behavior. Operating under the assumption that humans changing their behavior is a nonstarter, roboticists from the Dynamic Legged Systems unit at the Italian Institute of Technology (IIT), in Genoa, have instead designed a novel platform for cigarette-butt cleanup in the form of a quadrupedal robot with vacuums attached to its feet.

IIT

There are, of course, far more efficient ways of at least partially automating the cleanup of litter with machines. The challenge is that most of that automation relies on mobility systems with wheels, which won’t work on the many beautiful beaches (and many beautiful flights of stairs) of Genoa. In places like these, it still falls to humans to do the hard work, which is less than ideal.

This robot, developed in Claudio Semini’s lab at IIT, is called VERO (Vacuum-cleaner Equipped RObot). It’s based around an AlienGo from Unitree, with a commercial vacuum mounted on its back. Hoses go from the vacuum down the leg to each foot, with a custom 3D-printed nozzle that puts as much suction near the ground as possible without tripping the robot up. While the vacuum is novel, the real contribution here is how the robot autonomously locates things on the ground and then plans how to interact with those things using its feet.

First, an operator designates an area for VERO to clean, after which the robot operates by itself. After calculating an exploration path to explore the entire area, the robot uses its onboard cameras and a neural network to detect cigarette butts. This is trickier than it sounds, because there may be a lot of cigarette butts on the ground, and they all probably look pretty much the same, so the system has to filter out all of the potential duplicates. The next step is to plan its next steps: VERO has to put the vacuum side of one of its feet right next to each cigarette butt while calculating a safe, stable pose for the rest of its body. Since this whole process can take place on sand or stairs or other uneven surfaces, VERO has to prioritize not falling over before it decides how to do the collection. The final collecting maneuver is fine-tuned using an extra Intel RealSense depth camera mounted on the robot’s chin.
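That duplicate-filtering step deserves a sketch, because it’s the unglamorous part that keeps the robot from vacuuming the same spot ten times: detections from different frames that land within a few centimeters of each other on the ground get merged into a single target. The greedy radius-merge below is an illustrative stand-in on my part, not the method from the paper.

```python
import math

MERGE_RADIUS_M = 0.05  # detections closer than this are assumed to be one butt

def deduplicate(detections):
    """detections: list of (x, y) ground positions from many camera frames.
    Returns one merged (x, y) per distinct target."""
    targets = []  # each entry: (x, y, number_of_merged_detections)
    for x, y in detections:
        for i, (tx, ty, n) in enumerate(targets):
            if math.hypot(x - tx, y - ty) < MERGE_RADIUS_M:
                # Running average keeps the merged estimate centered.
                targets[i] = ((tx * n + x) / (n + 1), (ty * n + y) / (n + 1), n + 1)
                break
        else:
            targets.append((x, y, 1))
    return [(x, y) for x, y, _ in targets]

if __name__ == "__main__":
    dets = [(1.00, 2.00), (1.02, 2.01), (3.50, 0.40)]  # two views of one butt, plus another
    print(deduplicate(dets))  # -> [(1.01, 2.005), (3.5, 0.4)]
```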

VERO has been tested successfully in six different scenarios that challenge both its locomotion and detection capabilities. (Image: IIT)

Initial testing with the robot in a variety of different environments showed that it could successfully collect just under 90 percent of cigarette butts, which I bet is better than I could do, and I’m also much more likely to get fed up with the whole process. The robot is not very quick at the task, but unlike me it will never get fed up as long as it’s got energy in its battery, so speed is somewhat less important.

As far as the authors of this paper are aware (and I assume they’ve done their research), this is “the first time that the legs of a legged robot are concurrently utilized for locomotion and for a different task.” This is distinct from other robots that can (for example) open doors with their feet, because those robots stop using the feet as feet for a while and instead use them as manipulators.

So, this is about a lot more than cigarette butts, and the researchers suggest a variety of other potential use cases, including spraying weeds in crop fields, inspecting cracks in infrastructure, and placing nails and rivets during construction.

Some use cases include potentially doing multiple things at the same time, like planting different kinds of seeds, using different surface sensors, or driving both nails and rivets. And since quadrupeds have four feet, they could potentially host four completely different tools, and the software that the researchers developed for VERO can be slightly modified to put whatever foot you want on whatever spot you need.

VERO: A Vacuum‐Cleaner‐Equipped Quadruped Robot for Efficient Litter Removal, by Lorenzo Amatucci, Giulio Turrisi, Angelo Bratta, Victor Barasuol, and Claudio Semini from IIT, was published in the Journal of Field Robotics.

Video Friday: Morphy Drone



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

We present Morphy, a novel compliant and morphologically aware flying robot that integrates sensorized flexible joints in its arms, thus enabling resilient collisions at high speeds and the ability to squeeze through openings narrower than its nominal dimensions.

Morphy represents a new class of soft flying robots that can facilitate unprecedented resilience through innovations both in the “body” and “brain.” The novel soft body can, in turn, enable new avenues for autonomy. Collisions that previously had to be avoided have now become acceptable risks, while areas that are untraversable for a certain robot size can now be negotiated through self-squeezing. These novel bodily interactions with the environment can give rise to new types of embodied intelligence.

[ ARL ]

Thanks, Kostas!

Segments of daily training for robots driven by reinforcement learning. Multiple tests are done in advance so the robots can provide friendly service to humans. The training includes some extreme tests. Please do not imitate!

[ Unitree ]

Sphero is not only still around, it’s making new STEM robots!

[ Sphero ]

Googly eyes mitigate all robot failures.

[ WVUIRL ]

Here I am, without the ability or equipment (or desire) required to iron anything that I own, and Flexiv’s got robots out there ironing fancy leather car seats.

[ Flexiv ]

Thanks, Noah!

We unveiled a significant leap forward in perception technology for our humanoid robot GR-1. The newly adapted pure-vision solution integrates bird’s-eye view, transformer models, and an occupancy network for precise and efficient environmental perception.

[ Fourier ]

Thanks, Serin!

LimX Dynamics’ humanoid robot CL-1 was launched in December 2023, climbing stairs based on real-time terrain perception at two steps per stair. Four months later, in April 2024, a second demo video showcased CL-1 in the same scenario, now advanced enough to climb the same stairs at one step per stair.

[ LimX Dynamics ]

Thanks, Ou Yan!

New research from the University of Massachusetts Amherst shows that programming robots to create their own teams and voluntarily wait for their teammates results in faster task completion, with the potential to improve manufacturing, agriculture, and warehouse automation.

[ HCRL ] via [ UMass Amherst ]

Thanks, Julia!

LASDRA (Large-size Aerial Skeleton with Distributed Rotor Actuation), presented at ICRA 2018, is a scalable and modular aerial robot. It can assume a very slender, long, and dexterous form factor and is very lightweight.

[ SNU INRoL ]

We propose augmenting initially passive structures built from simple repeated cells, with novel active units to enable dynamic, shape-changing, and robotic applications. Inspired by metamaterials that can employ mechanisms, we build a framework that allows users to configure cells of this passive structure to allow it to perform complex tasks.

[ CMU ]

Testing autonomous exploration at the Exyn Office using Spot from Boston Dynamics. In this demo, Spot autonomously explores our flight space while on the hunt for one of our engineers.

[ Exyn ]

Meet Heavy Picker, the strongest robot in bulky-waste sorting and an absolute pro at lifting and sorting waste. With skills that would make a concert pianist jealous and a work ethic that never needs coffee breaks, Heavy Picker was on the lookout for new challenges.

[ Zen Robotics ]

AI is the biggest and most consequential business, financial, legal, technological, and cultural story of our time. In this panel, you will hear from the underrepresented community of women scientists who have been leading the AI revolution—from the beginning to now.

[ Stanford HAI ]

Here’s the Most Buglike Robot Bug Yet



Insects have long been an inspiration for robots. The insect world is full of things that are tiny, fully autonomous, highly mobile, energy efficient, multimodal, self-repairing, and I could go on and on but you get the idea—insects are both an inspiration and a source of frustration to roboticists because it’s so hard to get robots to have anywhere close to insect capability.

We’re definitely making progress, though. In a paper published last month in IEEE Robotics and Automation Letters, roboticists from Shanghai Jiao Tong University demonstrated the most buglike robotic bug I think I’ve ever seen.


A Multi-Modal Tailless Flapping-Wing Robot [ YouTube ]

Okay so it may not look the most buglike, but it can do many very buggy bug things, including crawling, taking off horizontally, flying around (with six degrees of freedom control), hovering, landing, and self-righting if necessary. JT-fly weighs about 35 grams and has a wingspan of 33 centimeters, using four wings at once to fly at up to 5 meters per second and six legs to scurry at 0.3 m/s. Its 380 milliampere-hour battery powers it for an actually somewhat useful 8-ish minutes of flying and about 60 minutes of crawling.
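Those endurance numbers let you back out a rough power budget, too. The 380 mAh capacity is from the paper; the single-cell 3.7-volt nominal voltage below is my assumption (typical for a robot at this scale), not a stated spec.

```python
# Rough power budget from JT-fly's published endurance figures.
capacity_ah = 0.380   # 380 mAh battery (from the paper)
nominal_v = 3.7       # assumed single-cell LiPo nominal voltage
energy_wh = capacity_ah * nominal_v  # ~1.4 Wh on board

flight_min, crawl_min = 8, 60
print(f"flying draw:   ~{energy_wh / (flight_min / 60):.1f} W")  # ~10.5 W
print(f"crawling draw: ~{energy_wh / (crawl_min / 60):.1f} W")   # ~1.4 W
```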

While that amount of endurance may not sound like a lot, robots like these aren’t necessarily intended to be moving continuously. Rather, they move a little bit, find a nice safe perch, and then do some sensing or whatever until you ask them to move to a new spot. Ideally, most of that movement would be crawling, but having the option to fly makes JT-fly exponentially more useful.

Or, potentially more useful, because obviously this is still very much a research project. It does seem like there’s a bunch more optimization that could be done here. For example, JT-fly uses completely separate systems for flying and crawling, with two motors powering the legs and two additional motors powering the wings—plus two wing servos for control. There’s currently a limited amount of onboard autonomy, with an inertial measurement unit, barometer, and wireless communication, but otherwise not much in the way of useful payload.

Insects are both an inspiration and a source of frustration to roboticists because it’s so hard to get robots to have anywhere close to insect capability.

It won’t surprise you to learn that the researchers have disaster-relief applications in mind for this robot, suggesting that “after natural disasters such as earthquakes and mudslides, roads and buildings will be severely damaged, and in these scenarios, JT-fly can rely on its flight ability to quickly deploy into the mission area.” One day, robots like these will actually be deployed for disaster relief, and although that day is not today, we’re just a little bit closer than we were before.

“A Multi-Modal Tailless Flapping-Wing Robot Capable of Flying, Crawling, Self-Righting and Horizontal Takeoff,” by Chaofeng Wu, Yiming Xiao, Jiaxin Zhao, Jiawang Mou, Feng Cui, and Wu Liu from Shanghai Jiao Tong University, is published in the May issue of IEEE Robotics and Automation Letters.

Video Friday: Drone vs. Flying Canoe



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

There’s a Canadian legend about a flying canoe, because of course there is. The legend involves drunkenness, a party with some ladies, swearing, and a pact with the devil, because of course it does. Fortunately for the drone in this video, it needs none of that to successfully land on this (nearly) flying canoe, just some high-friction shock absorbing legs and judicious application of reverse thrust.

[ Createk ]

Thanks, Alexis!

This paper summarizes an autonomous driving project by musculoskeletal humanoids. The musculoskeletal humanoid, which mimics the human body in detail, has redundant sensors and a flexible body structure. We reconsider the developed hardware and software of the musculoskeletal humanoid Musashi in the context of autonomous driving. The respective components of autonomous driving are conducted using the benefits of the hardware and software. Finally, Musashi succeeded in the pedal and steering wheel operations with recognition.

[ Paper ] via [ JSK Lab ]

Thanks, Kento!

Robust AI has been kinda quiet for the last little while, but their Carter robot continues to improve.

[ Robust AI ]

One of the key arguments for building robots with form factors similar to human beings is that we can leverage massive amounts of human data for training. In this paper, we introduce a full-stack system for humanoids to learn motion and autonomous skills from human data. We demonstrate the system on our customized 33-degree-of-freedom, 180-centimeter humanoid, autonomously completing tasks such as wearing a shoe to stand up and walk, unloading objects from warehouse racks, folding a sweatshirt, rearranging objects, typing, and greeting another robot, with 60 to 100 percent success rates using up to 40 demonstrations.

[ HumanPlus ]

We present OmniH2O (Omni Human-to-Humanoid), a learning-based system for whole-body humanoid teleoperation and autonomy. Using kinematic pose as a universal control interface, OmniH2O enables various ways for a human to control a full-sized humanoid with dexterous hands, including real-time teleoperation through a VR headset, verbal instructions, and an RGB camera. OmniH2O also enables full autonomy by learning from teleoperated demonstrations or by integrating with frontier models such as GPT-4.

[ OmniH2O ]

A collaboration between Boxbot, Agility Robotics, and Robust.AI at Playground Global. Make sure to watch until the end to hear the roboticists in the background react, in a very roboticist way, when the demo works.

::clap clap clap:: yaaaaayyyyy....

[ Robust AI ]

The use of drones and robotic devices threatens civilian and military actors in conflict areas. We started trials with robots to see how we can adapt our HEAT (Hostile Environment Awareness Training) courses to this new reality.

[ CSD ]

Thanks, Ebe!

How do you make humanoids do versatile parkour jumping, clapping dances, cliff traversal, and box pick-and-move with a unified RL framework? We introduce WoCoCo: Whole-body humanoid Control with sequential Contacts.

[ WoCoCo ]

A selection of excellent demos from the Learning Systems and Robotics Lab at TUM and the University of Toronto.

[ Learning Systems and Robotics Lab ]

Harvest Automation, one of the OG autonomous mobile robot companies, hasn’t updated their website since like 2016, but some videos just showed up on YouTube this week.

[ Harvest Automation ]

Northrop Grumman has been pioneering capabilities in the undersea domain for more than 50 years. Now, we are creating a new class of uncrewed underwater vehicles (UUV) with Manta Ray. Taking its name from the massive “winged” fish, Manta Ray will operate long-duration, long-range missions in ocean environments where humans can’t go.

[ Northrop Grumman ]

Akara Robotics’ autonomous robotic UV disinfection demo.

[ Akara Robotics ]

Scientists have computationally predicted hundreds of thousands of novel materials that could be promising for new technologies—but testing to see whether any of those materials can be made in reality is a slow process. Enter A-Lab, which uses robots guided by artificial intelligence to speed up the process.

[ A-Lab ]

We wrote about this research from CMU a while back, but here’s a quite nice video.

[ CMU RI ]

Aw yiss pick and place robots.

[ Fanuc ]

Axel Moore describes his lab’s work in orthopedic biomechanics to relieve joint pain with robotic assistance.

[ CMU ]

The field of humanoid robots has grown in recent years, with several companies and research laboratories developing new humanoid systems. However, the number of running robots has not noticeably risen, despite the need for fast locomotion to quickly serve given tasks, which requires traversing complex terrain by running and jumping over obstacles. To provide an overview of the design of humanoid robots with bioinspired mechanisms, this paper introduces the fundamental functions of the human running gait.

[ Paper ]

Video Friday: 1X Robots Tidy Up



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

In this video, you see the start of 1X’s development of an advanced AI system that chains simple tasks into complex actions using voice commands, allowing seamless multi-robot control and remote operation. By starting with single-task models, we ensure smooth transitions to more powerful unified models, ultimately aiming to automate high-level actions using AI.

This video does not contain teleoperation, computer graphics, cuts, video speedups, or scripted trajectory playback. It’s all controlled via neural networks.

[ 1X ]

As the old adage goes, one cannot claim to be a true man without a visit to the Great Wall of China. XBot-L, a full-sized humanoid robot developed by Robot Era, recently acquitted itself well in a walk along sections of the Great Wall.

[ Robot Era ]

The paper presents a novel rotary-wing platform that is capable of folding and expanding its wings during flight. Our source of inspiration came from birds’ ability to fold their wings to navigate through small spaces and dive. The design of the rotorcraft is based on the monocopter platform, which is inspired by the flight of samara seeds.

[ AirLab ]

We present a variable stiffness robotic skin (VSRS), a concept that integrates stiffness-changing capabilities, sensing, and actuation into a single, thin modular robot design. Reconfiguring, reconnecting, and reshaping VSRSs allows them to achieve new functions both on and in the absence of a host body.

[ Yale Faboratory ]

Heimdall is a new rover design for the 2024 University Rover Challenge (URC). This video shows highlights of Heimdall’s trip during the four missions at URC 2024.

Heimdall features a split body design with whegs (wheel legs), and a drill for sub-surface sample collection. It also has the ability to manipulate a variety of objects, collect surface samples, and perform onboard spectrometry and chemical tests.

[ WVU ]

I think this may be the first time I’ve seen an autonomous robot using a train? This one is delivering lunch boxes!

[ JSME ]

The AI system identifies and separates red apples from green apples, after which a robotic arm picks up the identified red apples with a qb SoftHand Industry and gently places them in a basket.

My favorite part is the magnetic apple stem system.

[ QB Robotics ]

DexNex (v0, June 2024) is an anthropomorphic teleoperation testbed for dexterous manipulation at the Center for Robotics and Biosystems at Northwestern University. DexNex recreates human upper-limb functionality through a near 1-to-1 mapping between Operator movements and Avatar actions.

Motions of the Operator’s arms, hands, fingers, and head are fed forward to the Avatar, while fingertip pressures, finger forces, and camera images are fed back to the Operator. DexNex aims to minimize the latency of each subsystem to provide a seamless, immersive, and responsive user experience. Future research includes gaining a better understanding of the criticality of haptic and vision feedback for different manipulation tasks; providing arm-level grounded force feedback; and using machine learning to transfer dexterous skills from the human to the robot.

[ Northwestern ]

Sometimes the best path isn’t the smoothest or straightest surface; it’s the path that’s actually meant to be a path.

[ RaiLab ]

Fulfilling a school requirement by working in a Romanian locomotive factory one week each month, Daniela Rus learned to operate “machines that help us make things.” Appreciation for the practical side of math and science stuck with Daniela, who is now Director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

[ MIT ]

For AI to achieve its full potential, non-experts need to be let into the development process, says Rumman Chowdhury, CEO and cofounder of Humane Intelligence. She tells the story of farmers fighting for the right to repair their own AI-powered tractors (which some manufacturers actually made illegal), proposing everyone should have the ability to report issues, patch updates or even retrain AI technologies for their specific uses.

[ TED ]

Video Friday: Multitasking



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

Do you have trouble multitasking? Cyborgize yourself through muscle stimulation to automate repetitive physical tasks while you focus on something else.

[ SplitBody ]

By combining a 5,000 frame-per-second (FPS) event camera with a 20-FPS RGB camera, roboticists from the University of Zurich have developed a much more effective vision system that keeps autonomous cars from crashing into stuff, as described in the current issue of Nature.
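The paper’s actual pipeline is more sophisticated, but the basic fusion idea can be sketched simply: bin the asynchronous events that arrive between two consecutive RGB frames into an image-like array that a detector can consume alongside the frame. Everything below is illustrative, not the authors’ code:

    import numpy as np

    # Illustrative event/RGB fusion (not the paper's method): bin the
    # events that arrived between two consecutive RGB frames into a
    # two-channel count image (positive and negative polarity).
    def accumulate_events(events, height, width):
        counts = np.zeros((height, width, 2), dtype=np.float32)
        for x, y, t, polarity in events:  # event = (x, y, timestamp, polarity)
            counts[y, x, 0 if polarity > 0 else 1] += 1.0
        return counts

    # A detector would then take the RGB frame plus the event image:
    # detection = detect(rgb_frame, accumulate_events(events, 480, 640))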

[ Nature ]

Mitsubishi Electric has been awarded the GUINNESS WORLD RECORDS title for the fastest robot to solve a puzzle cube. The robot’s time of 0.305 second beat the previous record of 0.38 second, for which it received a GUINNESS WORLD RECORDS certificate on 21 May 2024.

[ Mitsubishi ]

Sony’s AIBO is celebrating its 25th anniversary, which seems like a long time, and it is. But back then, the original AIBO could check your email for you. Email! In 1999!

I miss Hotmail.

[ AIBO ]

SchniPoSa: schnitzel with french fries and a salad.

[ Dino Robotics ]

Cloth-folding is still a really hard problem for robots, but progress was made at ICRA!

[ ICRA Cloth Competition ]

Thanks, Francis!

MIT CSAIL researchers enhance robotic precision with sophisticated tactile sensors in the palm and agile fingers, setting the stage for improvements in human-robot interaction and prosthetic technology.

[ MIT ]

We present a novel adversarial attack method designed to identify failure cases in any type of locomotion controller, including state-of-the-art reinforcement-learning-based controllers. Our approach reveals the vulnerabilities of black-box neural network controllers, providing valuable insights that can be leveraged to enhance robustness through retraining.

[ Fan Shi ]

In this work, we investigate a novel integrated flexible OLED display technology used as a robotic skin-interface to improve robot-to-human communication in a real industrial setting at Volkswagen, in a collaborative human-robot interaction task in motor assembly. The interface was implemented in a workcell and validated qualitatively with a small group of operators (n=9) and quantitatively with a large group (n=42). The validation results showed that using flexible OLED technology could improve the operators’ attitude toward the robot; increase their intention to use the robot; enhance their perceived enjoyment, social influence, and trust; and reduce their anxiety.

[ Paper ]

Thanks, Bram!

We introduce InflatableBots, shape-changing inflatable robots for large-scale encountered-type haptics in VR. Unlike traditional inflatable shape displays, which are immobile and limited in interaction areas, our approach combines mobile robots with fan-based inflatable structures. This enables safe, scalable, and deployable haptic interactions on a large scale.

[ InflatableBots ]

We present a bioinspired passive dynamic foot in which the claws are actuated solely by the impact energy. Our gripper simultaneously resolves the issue of smooth absorption of the impact energy and fast closure of the claws by linking the motion of an ankle linkage and the claws through soft tendons.

[ Paper ]

In this video, a 3-UPU exoskeleton robot for a wrist joint is designed and controlled to perform wrist extension, flexion, radial-deviation, and ulnar-deviation motions in stroke-affected patients. This is the first time a 3-UPU robot has been used effectively for any kind of task.

“UPU” stands for “universal-prismatic-universal” and refers to the actuators—the prismatic joints between two universal joints.

[ BAS ]

Thanks, Tony!

BRUCE got Spot-ted at ICRA 2024.

[ Westwood Robotics ]

Parachutes: maybe not as good of an idea for drones as you might think.

[ Wing ]

In this paper, we propose a system for the artist-directed authoring of stylized bipedal walking gaits, tailored for execution on robotic characters. To demonstrate the utility of our approach, we animate gaits for a custom, free-walking robotic character, and show, with two additional in-simulation examples, how our procedural animation technique generalizes to bipeds with different degrees of freedom, proportions, and mass distributions.

[ Disney Research ]

The European drone project Labyrinth aims to keep new and conventional air traffic separate, especially in busy airspaces such as those expected in urban areas. The project provides a new drone-traffic service and illustrates its potential to improve the safety and efficiency of civil land, air, and sea transport, as well as emergency and rescue operations.

[ DLR ]

This Carnegie Mellon University Robotics Institute seminar, by Kim Baraka at Vrije Universiteit Amsterdam, is on the topic “Why We Should Build Robot Apprentices and Why We Shouldn’t Do It Alone.”

For robots to be able to truly integrate into human-populated, dynamic, and unpredictable environments, they will have to have strong adaptive capabilities. In this talk, I argue that these adaptive capabilities should leverage interaction with end users, who know how (they want) a robot to act in that environment. I will present an overview of my past and ongoing work on the topic of human-interactive robot learning, a growing interdisciplinary subfield that embraces rich, bidirectional interaction to shape robot learning. I will discuss contributions on the algorithmic, interface, and interaction design fronts, showcasing several collaborations with animal behaviorists/trainers, dancers, puppeteers, and medical practitioners.

[ CMU RI ]

Video Friday: Robot Bees



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

Festo has robot bees!

It’s a very clever design, but the size makes me terrified of whatever the bees are that Festo seems to be familiar with.

[ Festo ]

Boing, boing, boing!

[ USC ]

Why the heck would you take the trouble to program a robot to make sweet potato chips and then not scarf them down yourself?

[ Dino Robotics ]

Mobile robots can transport payloads far greater than their mass through vehicle traction. However, off-road terrain features substantial variation in height, grade, and friction, which can cause traction to degrade or fail catastrophically. This paper presents a system that uses a vehicle-mounted, multipurpose manipulator to physically adapt the robot with anchors suited to a particular terrain, enabling autonomous payload transport.

[ DART Lab ]

Turns out that working on a collaborative task with a robot can make humans less efficient, because we tend to overestimate the robot’s capabilities.

[ CHI 2024 ]

Wing posts a video with the title “What Do Wing’s Drones Sound Like,” but it includes only a brief snippet of drone audio (none of it free of background room noise), never really revealing to curious viewers and listeners exactly what Wing’s drones sound like.

Because, look, a couple seconds of muted audio underneath a voiceover is in fact not really answering the question.

[ Wing ]

This first instance of ROB 450, in Winter 2024, challenged students to synthesize the knowledge acquired in their undergraduate robotics courses at the University of Michigan, applying a systematic and iterative design and analysis process to a real, open-ended robotics problem.

[ Michigan Robotics ]

This Microsoft Future Leaders in Robotics and AI Seminar is from Catie Cuan at Stanford, on “Choreorobotics: Teaching Robots How to Dance With Humans.”

As robots transition from industrial and research settings into everyday environments, robots must be able to (1) learn from humans while benefiting from the full range of the humans’ knowledge and (2) learn to interact with humans in safe, intuitive, and social ways. I will present a series of compelling robot behaviors, where human perception and interaction are foregrounded in a variety of tasks.

[ UMD ]

The New Shadow Hand Can Take a Beating



For years, Shadow Robot Company’s Shadow Hand has arguably been the gold standard for robotic manipulation. Beautiful and expensive, it is able to mimic the form factor and functionality of human hands, which has made it ideal for complex tasks. I’ve personally experienced how amazing it is to use Shadow Hands in a teleoperation context, and it’s hard to imagine anything better.

The problem with the original Shadow Hand was (and still is) fragility. In a research environment, this has been fine, except that research is changing: Roboticists no longer carefully program manipulation tasks by, uh, hand. Now it’s all about machine learning, in which you need robotic hands to massively fail over and over again until they build up enough data to understand how to succeed.

“We’ve aimed for robustness and performance over anthropomorphism and human size and shape.” —Rich Walker, Shadow Robot Company

Doing this with a Shadow Hand was just not realistic, which Google DeepMind understood five years ago when it asked Shadow Robot to build it a new hand with hardware that could handle the kind of training environments that now typify manipulation research. So Shadow Robot spent the last half-decade-ish working on a new, three-fingered Shadow Hand, which the company unveiled today. The company is calling it, appropriately enough, “the new Shadow Hand.”


As you can see, this thing is an absolute beast. Shadow Robot says that the new hand is “robust against a significant amount of misuse, including aggressive force demands, abrasion and impacts.” Part of the point, though, is that what robot-hand designers might call “misuse,” robot-manipulation researchers might very well call “progress,” and the hand is designed to stand up to manipulation research that pushes the envelope of what robotic hardware and software are physically capable of.

Shadow Robot understands that despite its best engineering efforts, this new hand will still occasionally break (because it’s a robot and that’s what robots do), so the company designed it to be modular and easy to repair. Each finger is its own self-contained unit that can be easily swapped out, with five Maxon motors in the base of the finger driving the four finger joints through cables in a design that eliminates backlash. The cables themselves will need replacement from time to time, but it’s much easier to do this on the new Shadow Hand than it was on the original. Shadow Robot says that you can swap out an entire new hand’s worth of cables in the same time it would take you to replace a single cable on the old hand.


The new Shadow Hand itself is somewhat larger than a typical human hand, and heavier too: Each modular finger unit weighs 1.2 kilograms, and the entire three-fingered hand is just over 4 kilograms. The fingers have humanlike kinematics, each joint can move at up to 180 degrees per second, and each fingertip can exert at least 8 newtons of force. Both force control and position control are available, and the entire hand runs Robot Operating System (ROS), the Open Source Robotics Foundation’s collection of open-source software libraries and tools.
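Since the hand runs ROS, commanding it should look like commanding any other ROS manipulator. Here’s a minimal sketch of a position command to a single finger joint; the topic name is a placeholder of ours, not Shadow Robot’s documented interface:

    import rospy
    from std_msgs.msg import Float64

    # Minimal sketch: position-command one finger joint over ROS.
    # The topic below is hypothetical; consult the vendor's driver
    # documentation for the real controller interfaces.
    rospy.init_node("new_hand_demo")
    pub = rospy.Publisher("/hand/finger_1/joint_2/command", Float64, queue_size=1)
    rospy.sleep(1.0)           # give the publisher time to connect
    pub.publish(Float64(0.5))  # target joint angle, in radians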

One of the coolest new features of this hand is the tactile sensing. Shadow Robot has decided to take the optical route with fingertip sensors, GelSight-style. Each fingertip is covered in soft, squishy gel with thousands of embedded particles. Cameras in the fingers behind the gel track each of those particles, and when the fingertip touches something, the particles move. Based on that movement, the fingertips can very accurately detect the magnitude and direction of even very small forces. And there are even more sensors on the insides of the fingers too, with embedded Hall effect sensors to help provide feedback during grasping and manipulation tasks.
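As a toy illustration of how particle tracking becomes force sensing (with a made-up calibration constant; the real sensor model is certainly richer than this), the mean displacement of the tracked particles gives a shear direction and magnitude:

    import numpy as np

    # Toy GelSight-style shear estimate (illustrative only): the mean
    # displacement of tracked gel particles gives a force direction,
    # and an assumed stiffness constant turns pixels into newtons.
    def estimate_shear(rest_pts, curr_pts, newtons_per_px=0.01):
        disp = np.asarray(curr_pts, float) - np.asarray(rest_pts, float)
        mean_disp = disp.mean(axis=0)  # average (dx, dy) in pixels
        magnitude = newtons_per_px * np.linalg.norm(mean_disp)
        direction = mean_disp / (np.linalg.norm(mean_disp) + 1e-9)
        return direction, magnitude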


The most striking difference here is how completely different a robotic-manipulation philosophy this new hand represents for Shadow Robot. “We’ve aimed for robustness and performance over anthropomorphism and human size and shape,” says Rich Walker, director of Shadow Robot Company. “There’s a very definite design choice there to get something that really behaves much more like an optimized manipulator rather than a humanlike hand.”

Walker explains that Shadow Robot sees two different approaches to manipulation within the robotics community right now: There’s imitation learning, where a human does a task and then a robot tries to do the task the same way, and then there’s reinforcement learning, where a robot tries to figure out how to do the task by itself. “Obviously, this hand was built from the ground up to make reinforcement learning easy.”

The hand was also built from the ground up to be rugged and repairable, which had a significant effect on the form factor. To make the fingers modular, they have to be chunky, and trying to cram five of them onto one hand was just not practical. But because of this modularity, Shadow Robot could make you a five-fingered hand if you really wanted one. Or a two-fingered hand. Or (and this is the company’s suggestion, not mine) “a giant spider.” Really, though, it’s probably not useful to get stuck on the form factor. Instead, focus more on what the hand can do. In fact, Shadow Robot tells me that the best way to think about the hand in the context of agility is as having three thumbs, not three fingers, but Walker says that “if we describe it as that, people get confused.”

There’s still definitely a place for the original anthropomorphic Shadow Hand, and Shadow Robot has no plans to discontinue it. “It’s clear that for some people anthropomorphism is a deal breaker, they have to have it,” Walker says. “But for a lot of people, the idea that they could have something which is really robust and dexterous and can gather lots of data, that’s exciting enough to be worth saying okay, what can we do with this? We’re very interested to find out what happens.”

The new Shadow Hand is available now, starting at about US $74,000, depending on configuration.

Video Friday: Loco-Manipulation



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

In this work, we present LocoMan, a dexterous quadrupedal robot with a novel morphology to perform versatile manipulation in diverse constrained environments. By equipping a Unitree Go1 robot with two low-cost and lightweight modular 3-DoF loco-manipulators on its front calves, LocoMan leverages the combined mobility and functionality of the legs and grippers for complex manipulation tasks that require precise 6D positioning of the end effector in a wide workspace.

[ CMU ]

Thanks, Changyi!

Object manipulation has been extensively studied in the context of fixed-base and mobile manipulators. However, the overactuated locomotion modality employed by snake robots allows for a unique blend of object manipulation through locomotion, referred to as loco-manipulation. In this paper, we present an optimization approach to solving the loco-manipulation problem based on nonimpulsive implicit-contact path planning for our snake robot COBRA.

[ Silicon Synapse Lab ]

Okay, but where that costume has eyes is not where Spot has eyes, so the Spot in the costume can’t see, right? And now I’m skeptical of the authenticity of the mutual snoot-boop.

[ Boston Dynamics ]

Here’s some video of Field AI’s robots operating in relatively complex and unstructured environments without prior maps. Make sure to read our article from this week for details!

[ Field AI ]

Is it just me, or is it kind of wild that researchers are now publishing papers comparing their humanoid controller to the “manufacturer’s” humanoid controller? It’s like humanoids are a commodity now or something.

[ OSU ]

I, too, am packing armor for ICRA.

[ Pollen Robotics ]

Honey Badger 4.0 is our latest robotic platform, created specifically for traversing hostile environments and difficult terrains. Equipped with multiple cameras and sensors, it will make sure no defect is missed during inspection.

[ MAB Robotics ]

Thanks, Jakub!

Have an automation task that calls for the precision and torque of an industrial robot arm…but need something more rugged, or in a nonconventional form factor? Meet the HEBI Robotics H-Series Actuator! With 9x the torque of our X-Series and seamless compatibility with the HEBI ecosystem for robot development, the H-Series opens a new world of possibilities for robots.

[ HEBI ]

Thanks, Dave!

This is how all spills happen at my house too: super passive-aggressively.

[ 1X ]

EPFL’s team, led by Ph.D. student Milad Shafiee along with coauthors Guillaume Bellegarda and BioRobotics Lab head Auke Ijspeert, has trained a four-legged robot using deep reinforcement learning to navigate challenging terrain, achieving a milestone in both robotics and biology.

[ EPFL ]

At Agility, we make robots that are made for work. Our robot Digit works alongside us in spaces designed for people. Digit handles the tedious and repetitive tasks meant for a machine, allowing companies and their people to focus on the work that requires the human element.

[ Agility ]

With a wealth of incredible figures and outstanding facts, here’s Jan Jonsson, ABB Robotics veteran, sharing his knowledge and passion for some of our robots and controllers from the past.

[ ABB ]

I have it on good authority that getting robots to mow a lawn (like, any lawn) is much harder than it looks, but Electric Sheep has built a business around it.

[ Electric Sheep ]

The AI Index, currently in its seventh year, tracks, collates, distills, and visualizes data relating to artificial intelligence. The Index provides unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, journalists, executives, and the general public to develop a deeper understanding of the complex field of AI. Led by a steering committee of influential AI thought leaders, the Index is the world’s most comprehensive report on trends in AI. In this seminar, HAI Research Manager Nestor Maslej offers highlights from the 2024 report, explaining trends related to research and development, technical performance, technical AI ethics, the economy, education, policy and governance, diversity, and public opinion.

[ Stanford HAI ]

This week’s CMU Robotics Institute seminar, from Dieter Fox at Nvidia and the University of Washington, is “Where’s RobotGPT?”

In this talk, I will discuss approaches to generating large datasets for training robot-manipulation capabilities, with a focus on the role simulation can play in this context. I will show some of our prior work, where we demonstrated robust sim-to-real transfer of manipulation skills trained in simulation, and then present a path toward generating large-scale demonstration sets that could help train robust, open-world robot-manipulation models.

[ CMU ]

How Field AI Is Conquering Unstructured Autonomy



One of the biggest challenges for robotics right now is practical autonomous operation in unstructured environments. That is, doing useful stuff in places your robot hasn’t been before and where things may not be as familiar as your robot might like. Robots thrive on predictability, which has put some irksome restrictions on where and how they can be successfully deployed.

But over the past few years, this has started to change, thanks in large part to a couple of pivotal robotics challenges put on by DARPA. The DARPA Subterranean Challenge ran from 2018 to 2021, putting mobile robots through a series of unstructured underground environments. And the currently ongoing DARPA RACER program tasks autonomous vehicles with navigating long distances off-road. Some extremely impressive technology has been developed through these programs, but there’s always a gap between this cutting-edge research and any real-world applications.

Now, a bunch of the folks involved in these challenges, including experienced roboticists from NASA, DARPA, Google DeepMind, Amazon, and Cruise (to name just a few places) are applying everything that they’ve learned to enable real-world practical autonomy for mobile robots at a startup called Field AI.


Field AI was cofounded by Ali Agha, who previously was a group leader for NASA JPL’s Aerial Mobility Group as well as JPL’s Perception Systems Group. While at JPL, Agha led Team CoSTAR, which won the DARPA Subterranean Challenge Urban Circuit. Agha has also been the principal investigator for DARPA RACER, first with JPL, and now continuing with Field AI. “Field AI is not just a startup,” Agha tells us. “It’s a culmination of decades of experience in AI and its deployment in the field.”

Unstructured environments are where things are constantly changing, which can play havoc with robots that rely on static maps.

The “field” part in Field AI is what makes Agha’s startup unique. Robots running Field AI’s software are able to handle unstructured, unmapped environments without reliance on prior models, GPS, or human intervention. Obviously, this kind of capability was (and is) of interest to NASA and JPL, which send robots to places where there are no maps, GPS doesn’t exist, and direct human intervention is impossible.

But DARPA SubT demonstrated that similar environments can be found on Earth, too. For instance, mines, natural caves, and the urban underground are all extremely challenging for robots (and even for humans) to navigate. And those are just the most extreme examples: robots that need to operate inside buildings or out in the wilderness have similar challenges understanding where they are, where they’re going, and how to navigate the environment around them.

An autonomous vehicle drives across kilometers of desert with no prior map, no GPS, and no road. Field AI

Despite the difficulty that robots have operating in the field, this is an enormous opportunity that Field AI hopes to address. Robots have already proven their worth in inspection contexts, typically where you either need to make sure that nothing is going wrong across a large industrial site, or for tracking construction progress inside a partially completed building. There’s a lot of value here because the consequences of something getting messed up are expensive or dangerous or both, but the tasks are repetitive and sometimes risky and generally don’t require all that much human insight or creativity.

Uncharted Territory as Home Base

Where Field AI differs from other robotics companies offering these services, as Agha explains, is that his company wants to do these tasks without first having a map that tells the robot where to go. In other words, there’s no lengthy setup process, and no human supervision, and the robot can adapt to changing and new environments. Really, this is what full autonomy is all about: going anywhere, anytime, without human interaction. “Our customers don’t need to train anything,” Agha says, laying out the company’s vision. “They don’t need to have precise maps. They press a single button, and the robot just discovers every corner of the environment.” This capability is where the DARPA SubT heritage comes in. During the competition, DARPA basically said, “here’s the door into the course. We’re not going to tell you anything about what’s back there or even how big it is. Just go explore the whole thing and bring us back the info we’ve asked for.” Agha’s Team CoSTAR did exactly that during the competition, and Field AI is commercializing this capability.

“With our robots, our aim is for you to just deploy it, with no training time needed. And then we can just leave the robots.” —Ali Agha, Field AI

The other tricky thing about these unstructured environments, especially construction environments, is that things are constantly changing, which can play havoc with robots that rely on static maps. “We’re one of the few, if not the only company that can leave robots for days on continuously changing construction sites with minimal supervision,” Agha tells us. “These sites are very complex—every day there are new items, new challenges, and unexpected events. Construction materials on the ground, scaffolds, forklifts, and heavy machinery moving all over the place, nothing you can predict.”


Field AI’s approach to this problem is to emphasize environmental understanding over mapping. Agha says that essentially, Field AI is working towards creating “field foundation models” (FFMs) of the physical world, using sensor data as an input. You can think of FFMs as being similar to the foundation models of language, music, and art that other AI companies have created over the past several years, where ingesting a large amount of data from the Internet enables some level of functionality in a domain without requiring specific training for each new situation. Consequently, Field AI’s robots can understand how to move in the world, rather than just where to move. “We look at AI quite differently from what’s mainstream,” Agha explains. “We do very heavy probabilistic modeling.” Much more technical detail would get into Field AI’s IP, says Agha, but the point is that real-time world modeling becomes a by-product of Field AI’s robots operating in the world rather than a prerequisite for that operation. This makes the robots fast, efficient, and resilient.

Developing field-foundation models that robots can use to reliably go almost anywhere requires a lot of real-world data, which Field AI has been collecting at industrial and construction sites around the world for the past year. To be clear, they’re collecting the data as part of their commercial operations—these are paying customers that Field AI already has. “In these job sites, it can traditionally take weeks to go around a site and map where every single target of interest that you need to inspect is,” explains Agha. “But with our robots, our aim is for you to just deploy it, with no training time needed. And then we can just leave the robots. This level of autonomy really unlocks a lot of use cases that our customers weren’t even considering, because they thought it was years away.” And the use cases aren’t just about construction or inspection or other areas where we’re already seeing autonomous robotic systems, Agha says. “These technologies hold immense potential.”

There’s obviously demand for this level of autonomy, but Agha says that the other piece of the puzzle that will enable Field AI to tap a trillion-dollar market is the fact that they can do what they do with virtually any platform. Fundamentally, Field AI is a software company—they make sensor payloads that integrate with their autonomy software, but even those payloads are adjustable, ranging from something appropriate for an autonomous vehicle to something that a drone can handle.

Heck, if you decide that you need an autonomous humanoid for some weird reason, Field AI can do that too. While the versatility here is important, according to Agha, what’s even more important is that it means you can focus on platforms that are more affordable, and still expect the same level of autonomous performance, within the constraints of each robot’s design, of course. With control over the full software stack, integrating mobility with high-level planning, decision making, and mission execution, Agha says that the potential to take advantage of relatively inexpensive robots is what’s going to make the biggest difference toward Field AI’s commercial success.

Same brain, lots of different robots: the Field AI team’s foundation models can be used on robots big, small, expensive, and somewhat less expensive. Field AI

Field AI is already expanding its capabilities, building on some of its recent experience with DARPA RACER by working on deploying robots to inspect pipelines for tens of kilometers and to transport materials across solar farms. With revenue coming in and a substantial chunk of funding, Field AI has even attracted interest from Bill Gates. Field AI’s participation in RACER is ongoing, under a sort of subsidiary company for federal projects called Offroad Autonomy, and in the meantime its commercial side is targeting expansion to “hundreds” of sites on every platform it can think of, including humanoids.

Video Friday: RACER Heavy



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

DARPA’s Robotic Autonomy in Complex Environments with Resiliency (RACER) program recently conducted its fourth experiment (E4) to assess the performance of off-road unmanned vehicles. These tests, conducted in Texas in late 2023, were the first time the program tested its new vehicle, the RACER Heavy Platform (RHP). The video shows autonomous route following for mobility testing and demonstration, including sensor point cloud visualizations.

The 12-ton RHP is significantly larger than the 2-ton RACER Fleet Vehicles (RFVs) already in use in the program. Using the algorithms on a very different platform helps RACER toward its goal of platform-agnostic autonomy for combat-scale vehicles in complex, mission-relevant off-road environments that are significantly more unpredictable than on-road conditions.

[ DARPA ]

In our new Science Robotics paper, we introduce an autonomous navigation system developed for our wheeled-legged quadrupeds, designed for fast and efficient navigation within large urban environments. Driven by neural network policies, our simple, unified control system enables smooth gait transitions, smart navigation planning, and highly responsive obstacle avoidance in populated urban environments.

[ Github ]

Generation 7 of “Phoenix” robots includes an improved human-like range of motion. Improvements in uptime, visual perception, and tactile sensing increase the capability of the system to perform complex tasks over longer periods. Design iteration significantly decreases build time. The speed at which new tasks can be automated has increased 50x, marking a major inflection point in task automation speed.

[ Sanctuary AI ]

We’re proud to celebrate our one millionth commercial delivery—that’s a million deliveries of lifesaving blood, critical vaccines, last-minute groceries, and so much more. But the best part? This is just the beginning.

[ Zipline ]

Work those hips!

[ RoMeLa ]

This thing is kind of terrifying, and I’m fascinated by it.

[ AVFL ]

We propose a novel humanoid TWIMP, which combines a human mimetic musculoskeletal upper limb with a two-wheel inverted pendulum. By combining the benefit of a musculoskeletal humanoid, which can achieve soft contact with the external environment, and the benefit of a two-wheel inverted pendulum with a small footprint and high mobility, we can easily investigate learning control systems in environments with contact and sudden impact.

From Humanoids 2018.

[ Paper ] via [ JSK Lab ]

Thanks, Kento!

Ballbots are uniquely capable of pushing wheelchairs—arguably better than legged platforms, because they can move in any direction without having to reposition themselves.

[ Paper ]

Charge Robotics is building robots that automate the most labor-intensive parts of solar construction. Solar has rapidly become the cheapest form of power generation in many regions. Demand has skyrocketed, and now the primary barrier to getting it installed is labor logistics and bandwidth. Our robots remove the labor bottleneck, allowing construction companies to meet the rising demand for solar, and enabling the world to switch to renewables faster.

[ Charge Robotics ]

Robots doing precision assembly is cool and all, but those vibratory bowl sorters seem like magic.

[ FANUC ]

The QUT CGRAS project’s robot prototype captures images of baby corals, destined for the Great Barrier Reef, monitoring and counting them in grow tanks. The team uses state-of-the-art AI algorithms to automatically detect and count these coral babies and track their growth over time – saving human counting time and money.

[ QUT ]

We are conducting research to develop Unmanned Aerial Systems to aid in wildfire monitoring. The hazardous, dynamic, and visually degraded environment of wildfire gives rise to many unsolved fundamental research challenges.

[ CMU ]

Here’s a little more video of that robot elevator, but I’m wondering why it’s so slow—clamp those bots in there and rocket that elevator up and down!

[ NAVER ]

In March 2024, Northwestern University’s Center for Robotics and Biosystems demonstrated the Omnid mobile collaborative robots (mocobots) at MARS, a conference in Ojai, California, on Machine learning, Automation, Robotics, and Space, hosted by Jeff Bezos. The “swarm” of mocobots is designed to collaborate with humans, allowing a human to easily manipulate large, heavy, or awkward payloads. In this case, the mocobots cancel the effect of gravity, so the human can easily manipulate the mock airplane wing in six degrees of freedom. In general, human-cobot systems combine the best of human capabilities with the best of robot capabilities.
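The gravity-cancellation idea is simple at its core: the robots collectively supply the payload’s weight, so the human only supplies guidance forces. A minimal statics sketch, with a hypothetical payload mass and robot count (the real controllers also handle moments and load distribution):

    # Split a payload's weight across N mocobots (statics only; the
    # payload mass and robot count here are hypothetical).
    G = 9.81           # m/s^2
    payload_kg = 30.0  # assumed mock-wing mass
    n_robots = 3
    lift_per_robot_n = payload_kg * G / n_robots
    print(f"each mocobot supplies ~{lift_per_robot_n:.0f} N upward")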

[ Northwestern ]

There’s something so soothing about watching a lithium battery get wrecked and burn for 8 minutes.

[ Hardcore Robotics ]

EELS, or Exobiology Extant Life Surveyor, is a versatile, snake-like robot designed for exploration of previously inaccessible terrain. This talk on EELS was presented at the 2024 Amazon MARS conference.

[ JPL ]

The convergence of AI and robotics will unlock a wonderful new world of possibilities in everyday life, says robotics and AI pioneer Daniela Rus. Diving into the way machines think, she reveals how “liquid networks”—a revolutionary class of AI that mimics the neural processes of simple organisms—could help intelligent machines process information more efficiently and give rise to “physical intelligence” that will enable AI to operate beyond digital confines and engage dynamically in the real world.

[ TED ]

Video Friday: SpaceHopper



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 17–21 April 2024, KASSEL, GERMANY
AUVSI XPONENTIAL 2024: 22–25 April 2024, SAN DIEGO
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

In the SpaceHopper project, students at ETH Zurich developed a robot capable of moving in low gravity environments through hopping motions. It is intended to be used in future space missions to explore small celestial bodies.

The exploration of asteroids and moons could provide insights into the formation of the universe, and they may contain valuable minerals that humanity could use in the future. The project began in 2021 as an ETH focus project for bachelor’s students, and it is now being continued as a regular research project. A particular challenge in developing exploration robots for asteroids is that, unlike larger celestial bodies such as Earth, asteroids and moons have very low gravity. The students therefore tested their robot’s functionality in zero gravity during a parabolic flight, conducted in collaboration with the European Space Agency as part of the ESA Academy Experiments Programme.

[ SpaceHopper ]

It’s still kind of wild to me that it’s now possible to just build a robot like Menteebot. Having said that, at present it looks to be a fairly long way from being able to usefully do tasks in a reliable way.

[ Menteebot ]

Look, it’s the robot we all actually want!

[ Github ]

I wasn’t quite sure what made this building especially “robot-friendly” until I saw the DEDICATED ROBOT ELEVATOR.

[ NAVER ]

We are glad to announce the latest updates with our humanoid robot CL-1. In the test, it demonstrates stair climbing in a single stride based on real-time terrain perception. For the very first time, CL-1 accomplishes back and forth running, in a stable and dynamic way!

[ LimX Dynamics ]

EEWOC [Extended-reach Enhanced Wheeled Orb for Climbing] uses a unique locomotion scheme to climb complex steel structures with its magnetic grippers. Its lightweight and highly extendable tape-spring limb can reach over 1.2 meters, allowing it to traverse gaps and obstacles much larger than other existing climbing robots can. Its ability to bend allows it to reach around corners and over ledges, and it can transition between surfaces easily thanks to assistance from its wheels. The wheels also let it drive more quickly and efficiently on the ground. These features make EEWOC well suited for climbing the complex steel structures seen in real-world environments.

[ Paper ]

Thanks to its “buttock-contact sensors,” JSK’s musculoskeletal humanoid has mastered(ish) the chair-scoot.

[ University of Tokyo ]

Thanks, Kento!

Physical therapy seems like a great application for a humanoid robot when you don’t really need that humanoid robot to do much of anything.

[ Fourier Intelligence ]

NASA’s Ingenuity Mars helicopter became the first vehicle to achieve powered, controlled flight on another planet when it took to the Martian skies on 19 April 2021. This video maps the location of the 72 flights that the helicopter took over the course of nearly three years. Ingenuity far surpassed expectations—soaring higher and faster than previously imagined.

[ JPL ]

No thank you!

[ Paper ]

MERL introduces a new autonomous robotic assembly technology, offering an initial glimpse into how robots will work in future factories. Unlike conventional approaches where humans set pre-conditions for assembly, our technology empowers robots to adapt to diverse scenarios. We showcase the autonomous assembly of a gear box that was demonstrated live at CES2024.

[ Mitsubishi ]

Thanks, Devesh!

In November 2023, Digit was deployed in a distribution center unloading totes from an AMR as part of regular facility operations, including a shift during Cyber Monday.

[ Agility ]

The PR2 just refuses to die. Last time I checked, official support for it ceased in 2016!

[ University of Bremen ]

DARPA’s Air Combat Evolution (ACE) program has achieved the first-ever in-air tests of AI algorithms autonomously flying a fighter jet against a human-piloted fighter jet in within-visual-range combat scenarios (sometimes referred to as “dogfighting”). In this video, team members discuss what makes the ACE program unlike other aerospace autonomy projects and how it represents a transformational moment in aerospace history, establishing a foundation for ethical, trusted, human-machine teaming for complex military and civilian applications.

[ DARPA ]

Sometimes robots that exist for one single purpose that they only do moderately successfully while trying really hard are the best of robots.

[ CMU ]

Boston Dynamics’ Robert Playter on the New Atlas



Boston Dynamics has just introduced a new Atlas humanoid robot, replacing the legendary hydraulic Atlas and intended to be a commercial product. This is huge news from the company that has spent the last decade building the most dynamic humanoids that the world has ever seen, and if you haven’t read our article about the announcement (and seen the video!), you should do that right now.

We’ve had about a decade of pent-up questions about an all-electric productized version of Atlas, and we were lucky enough to speak with Boston Dynamics CEO Robert Playter to learn more about where this robot came from and how it’s going to make commercial humanoid robots (finally) happen.


Robert Playter was the Vice President of Engineering at Boston Dynamics starting in 1994, which I’m pretty sure was back when Boston Dynamics still intended to be a modeling and simulation company rather than a robotics company. Playter became the CEO in 2019, helping the company make the difficult transition from R&D to commercial products with Spot, Stretch, and now (or very soon) Atlas.

We talked with Playter about what the heck took Boston Dynamics so long to make this robot, what the vision is for Atlas as a product, all that extreme flexibility, and what comes next.


IEEE Spectrum: So what’s going on?

Robert Playter: Boston Dynamics has built an all-electric humanoid. It’s our newest generation of what’s been an almost 15-year effort in developing humanoids. We’re going to launch it as a product, targeting industrial applications, logistics, and places that are much more diverse than where you see Stretch—heavy objects with complex geometry, probably in manufacturing type environments. We’ve built our first robot, and we believe that’s really going to set the bar for the next generation of capabilities for this whole industry.

What took you so long?!

Playter: Well, we wanted to convince ourselves that we knew how to make a humanoid product that can handle a great diversity of tasks—much more so than our previous generations of robots—including at-pace bimanual manipulation of the types of heavy objects with complex geometry that we expect to find in industry. We also really wanted to understand the use cases, so we’ve done a lot of background work on making sure that we see where we can apply these robots fruitfully in industry.

We’ve obviously been working on this machine for a while, as we’ve been doing parallel development with our legacy Atlas. You’ve probably seen some of the videos of Atlas moving struts around—that’s the technical part of proving to ourselves that we can make this work. And then really designing a next generation machine that’s going to be an order of magnitude better than anything the world has seen.

“We’re not anxious to just show some whiz-bang tech, and we didn’t really want to indicate our intent to go here until we were convinced that there is a path to a product.” —Robert Playter, Boston Dynamics

With Spot, it felt like Boston Dynamics developed the product first, without having a specific use case in mind: you put the robot out there and let people discover what it was good for. Is your approach different with Atlas?

Playter: You’re absolutely right. Spot was a technology looking for a product, and it’s taken time for us to really figure out the product market fit that we have in industrial inspection. But the challenge of that experience has left us wiser about really identifying the target applications before you say you’re going to build these things at scale.

Stretch is very different, because it had a clear target market. Atlas is going to be more like Stretch, although it’s going to be way more than a single task robot, which is kind of what Stretch is. Convincing ourselves that we could really generalize with Atlas has taken a little bit of time. This is going to be our third product in about four years. We’ve learned so much, and the world is different from that experience.


Is your vision for Atlas one of a general purpose robot?

Playter: It definitely needs to be a multi-use case robot. I believe that because I don’t think there are very many examples where a single repetitive task is going to warrant these complex robots. I also think, though, that the practical matter is that you’re going to have to focus on a class of use cases, and really making them useful for the end customer. The lesson we’ve learned with both Spot and Stretch is that it’s critical to get out there and actually understand what makes this robot valuable to customers while making sure you’re building that into your development cycle. And if you can start that before you’ve even launched the product, then you’ll be better off.


How does thinking of this new Atlas as a product rather than a research platform change things?

Playter: I think the research that we’ve done over the past 10 or 15 years has been essential to making a humanoid useful in the first place. We focused on dynamic balancing and mobility and being able to pick something up and still maintain that mobility—those were research topics of the past that we’ve now figured out how to manage and are essential, I think, to doing useful work. There’s still a lot of work to be done on generality, so that humanoids can pick up any one of a thousand different parts and deal with them in a reasonable way. That level of generality hasn’t been proven yet; we think there’s promise, and that AI will be one of the tools that helps solve that. And there’s still a lot of product prototyping and iteration that will come out before we start building massive numbers of these things and shipping them to customers.

“This robot will be stronger at most of its joints than a person, and even an elite athlete, and will have a range of motion that exceeds anything a person can ever do.” —Robert Playter, Boston Dynamics

For a long time, it seemed like hydraulics were the best way of producing powerful dynamic motions for robots like Atlas. Has that now changed?

Playter: We first experimented with that with the launch of Spot. We had the same issue years ago, and discovered that we could build powerful lightweight electric motors that had the same kind of responsiveness and strength, or let’s say sufficient responsiveness and strength, to really make that work. We’ve designed an even newer set of really compact actuators into our electric Atlas, which pack the strength of essentially an elite human athlete into these tiny packages that make an electric humanoid feasible for us. So, this robot will be stronger at most of its joints than a person, and even an elite athlete, and will have a range of motion that exceeds anything a person can ever do. We’ve also compared the strength of our new electric Atlas to our hydraulic Atlas, and the electric Atlas is stronger.


In the context of Atlas’ range of motion, that introductory video was slightly uncomfortable to watch, which I’m sure was deliberate. Why introduce the new Atlas in that way?

Playter: These high-range-of-motion actuators are going to enable a unique set of movements that ultimately will let the robot be very efficient. Imagine being able to turn around without having to take a bunch of steps to turn your whole body. The motions we showed [in the video] are ones where our engineers were like, “hey, with these joints, we could get up like this!” And it just wasn’t something we had really thought about before. This flexibility creates a palette that you can design new stuff on, and we’re already having fun with it, so we decided we wanted to share that excitement with the world.


“Everybody will buy one robot—we learned that with Spot. But they won’t start by buying fleets, and you don’t have a business until you can sell multiple robots to the same customer.” —Robert Playter, Boston Dynamics

This does seem like a way of making Atlas more efficient, but I’ve heard from other folks working on humanoids that it’s important for robots to move in familiar and predictable ways for people to be comfortable working around them. What’s your perspective on that?

Playter: I do think that people are going to have to become familiar with our robot; I don’t think that means limiting yourself to human motions. I believe that ultimately, if your robot is stronger or more flexible, it will be able to do things that humans can’t do, or don’t want to do.

One of the real challenges of making a product useful is that you’ve got to have sufficient productivity to satisfy a customer. If you’re slow, that’s hard. We learned that with Stretch. We had two generations of Stretch, and the first generation did not have a joint that let it pivot 180 degrees, so it had to ponderously turn around between picking up a box and dropping it off. That was a killer. And so we decided “nope, gotta have that rotational joint.” It lets Stretch be so much faster and more efficient. At the end of the day, that’s what counts. And people will get used to it.

What can you tell me about the head?

Humanoid robot with a circular light in the location of the head. Boston Dynamics CEO Robert Playter said the head on the new Atlas robot has been designed not to mimic the human form but rather “to project something else: a friendly place to look to gain some understanding about the intent of the robot.” Boston Dynamics

Playter: The old Atlas did not have an articulated head. But having an articulated head gives you a tool that you can use to indicate intent, and there are integrated lights which will be able to communicate to users. Some of our original concepts had more of a [human] head shape, but for us they always looked a little bit threatening or dystopian somehow, and we wanted to get away from that. So we made a very purposeful decision about the head shape, and our explicit intent was for it not to be human-like. We’re trying to project something else: a friendly place to look to gain some understanding about the intent of the robot.

The design borrows from some friendly shapes that we’d seen in the past. For example, there’s the old Pixar lamp that everybody fell in love with decades ago, and that informed some of the design for us.

[back to top]

How do you think the decade(s) of experience working on humanoids as well as your experience commercializing Spot will benefit you when it comes to making Atlas into a product?

Playter: This is our third product, and one of the things we’ve learned is that it takes way more than some interesting technology to make a product work. You have to have a real use case, and you have to have real productivity around that use case that a customer cares about. Everybody will buy one robot—we learned that with Spot. But they won’t start by buying fleets, and you don’t have a business until you can sell multiple robots to the same customer. And you don’t get there without all this other stuff—the reliability, the service, the integration.

When we launched Spot as a product several years ago, it was really about transforming the whole company. We had to take on all of these new disciplines: manufacturing, service, measuring the quality and reliability of our robots and then building systems and tools to make them steadily better. That transformation is not easy, but the fact that we’ve successfully navigated through that as an organization means that we can easily bring that mindset and skill set to bear as a company. Honestly, that transition takes two or three years to get through, so all of the brand new startup companies out there who have a prototype of a humanoid working—they haven’t even begun that journey.

There’s also cost. Building something effectively at a reasonable cost so that you can sell it at a reasonable cost and ultimately make some money out of it, that’s not easy either. And frankly, without the support of Hyundai, which is of course a world-class manufacturing expert, it would be really challenging to do it on our own.

So yeah, we’re much more sober about what it takes to succeed now. We’re not anxious to just show some whiz-bang tech, and we didn’t really want to indicate our intent to go here until we were convinced that there is a path to a product. And I think ultimately, that will win the day.

[back to top]

What will you be working on in the near future, and what will you be able to share?

Playter: We’ll start showing more of the dexterous manipulation on the new Atlas that we’ve already shown on our legacy Atlas. And we’re targeting proof of technology testing in factories at Hyundai Motor Group [HMG] as early as next year. HMG is really excited about this venture; they want to transform their manufacturing and they see Atlas as a big part of that, and so we’re going to get on that soon.

[back to top]

What do you think other robotics folks will find most exciting about the new Atlas?

Playter: Having a robot with so much power and agility packed into a relatively small and lightweight package. I’ve felt honored in the past that most of these other companies compare themselves to us. They say, “well, where are we on the Boston Dynamics bar?” I think we just raised the bar. And that’s ultimately good for the industry, right? People will go, “oh, wow, that’s possible!” And frankly, they’ll start chasing us as fast as they can—that’s what we’ve seen so far. I think it’ll end up pulling the whole industry forward.

Hello, Electric Atlas



Yesterday, Boston Dynamics bid farewell to the iconic Atlas humanoid robot. Or, the hydraulically-powered version of Atlas, anyway—if you read between the lines of the video description (or even just read the actual lines of the video description), it was pretty clear that although hydraulic Atlas was retiring, it wasn’t the end of the Atlas humanoid program at Boston Dynamics. In fact, Atlas is already back, and better than ever.

Today, Boston Dynamics is introducing a new version of Atlas that’s all-electric. It’s powered by batteries and electric actuators, no more messy hydraulics. It exceeds human performance in terms of both strength and flexibility. And for the first time, Boston Dynamics is calling this humanoid robot a product. We’ll take a look at everything that Boston Dynamics is announcing today, and have even more detail in this Q&A with Boston Dynamics CEO Robert Playter.


Boston Dynamics’ new electric humanoid has been simultaneously one of the worst and best kept secrets in robotics over the last year or so. What I mean is that it seemed obvious, or even inevitable, that Boston Dynamics would take the expertise in humanoids that it developed with Atlas and combine that with its experience productizing a fully electric system like Spot. But just because something seems inevitable doesn’t mean it actually is inevitable, and Boston Dynamics has done an admirable job of carrying on as normal while building a fully electric humanoid from scratch. And here it is:


It’s all new, it’s all electric, and some of those movements make me slightly uncomfortable (we’ll get into that in a bit). The blog post accompanying the video is sparse on technical detail, but let’s go through the most interesting parts:

A decade ago, we were one of the only companies putting real R&D effort into humanoid robots. Now the landscape in the robotics industry is very different.

In 2010, we took a look at all the humanoid robots then in existence. You could, I suppose, argue that Honda was putting real R&D effort into ASIMO back then, but yeah, pretty much all those other humanoid robots came from research rather than industry. Now, it feels like we’re up to our eyeballs in commercial humanoids, but over the past couple of years, as startups have appeared out of nowhere with brand new humanoid robots, Boston Dynamics (to most outward appearances) was just keepin’ on with that R&D. Today’s announcement certainly changes that.

We are confident in our plan to not just create an impressive R&D project, but to deliver a valuable solution. This journey will start with Hyundai—in addition to investing in us, the Hyundai team is building the next generation of automotive manufacturing capabilities, and it will serve as a perfect testing ground for new Atlas applications.

Boston Dynamics

This is a significant advantage for Boston Dynamics—through Hyundai, they can essentially be their own first customer for humanoid robots, offering an immediate use case in a very friendly transitional environment. Tesla has a similar advantage with Optimus, but Boston Dynamics also has experience sourcing and selling and supporting Spot, which are those business-y things that seem like they’re not the hard part until they turn out to actually be the hard part.

In the months and years ahead, we’re excited to show what the world’s most dynamic humanoid robot can really do—in the lab, in the factory, and in our lives.

World’s most dynamic humanoid, you say? Awesome! Prove it! On video! With outtakes!

The electric version of Atlas will be stronger, with a broader range of motion than any of our previous generations. For example, our last generation hydraulic Atlas (HD Atlas) could already lift and maneuver a wide variety of heavy, irregular objects; we are continuing to build on those existing capabilities and are exploring several new gripper variations to meet a diverse set of expected manipulation needs in customer environments.

Now we’re getting to the good bits. It’s especially notable here that the electric version of Atlas will be “stronger” than the previous hydraulic version, because for a long time hydraulics were really the only way to get the kind of explosively powerful repetitive dynamic motions that enabled Atlas to do jumps and flips. And the switch away from hydraulics enables that extra range of motion now that there aren’t hoses and stuff to deal with.

It’s also pretty clear that the new Atlas is built to continue the kind of work that hydraulic Atlas has been doing, manipulating big and heavy car parts. This is in sharp contrast to most other humanoid robots that we’ve seen, which have primarily focused on moving small objects or bins around in warehouse environments.


We are not just delivering industry-leading hardware. Some of our most exciting progress over the past couple of years has been in software. In addition to our decades of expertise in simulation and model predictive control, we have equipped our robots with new AI and machine learning tools, like reinforcement learning and computer vision to ensure they can operate and adapt efficiently to complex real-world situations.

This is all par for the course now, but it’s also not particularly meaningful without more information. “We will give our robots new capabilities through machine learning and AI” is what every humanoid robotics company (and most other robotics companies) are saying, but I’m not sure that we’re there yet, because there’s an “okay but how?” that needs to happen first. I’m not saying that it won’t happen, just pointing out that until it does happen, it hasn’t happened.

The humanoid form factor is a useful design for robots working in a world designed for people. However, that form factor doesn’t limit our vision of how a bipedal robot can move, what tools it needs to succeed, and how it can help people accomplish more.

Agility Robotics has a similar philosophy with Digit, which has a mostly humanoid form factor to operate in human environments but also uses a non-human leg design because Agility believes that it works better. Atlas is a bit more human-like with its overall design, but there are some striking differences, including both range of motion and the head, both of which we’ll be talking more about.

We designed the electric version of Atlas to be stronger, more dexterous, and more agile. Atlas may resemble a human form factor, but we are equipping the robot to move in the most efficient way possible to complete a task, rather than being constrained by a human range of motion. Atlas will move in ways that exceed human capabilities.

The introductory video with the new Atlas really punches you in the face with this: Atlas is not constrained by human range of motion and will leverage its extra degrees of freedom to operate faster and more efficiently, even if you personally might find some of those motions a little bit unsettling.

Boston Dynamics

Combining decades of practical experience with first principles thinking, we are confident in our ability to deliver a robot uniquely capable of tackling dull, dirty, and dangerous tasks in real applications.

As Marco Hutter pointed out, most commercial robots (humanoids included) are really only targeting tasks that are dull, because dull usually means repetitive, and robots are very good at repetitive. Dirty is a little more complicated, and dangerous is a lot more complicated than that. I appreciate that Boston Dynamics is targeting those other categories of tasks from the outset.

Commercialization takes great engineering, but it also takes patience, imagination, and collaboration. Boston Dynamics has proven that we can deliver the full package with both industry-leading robotics and a complete ecosystem of software, services, and support to make robotics useful in the real world.

There’s a lot more to building a successful robotics company than building a successful robot. Arguably, building a successful robot is not even the hardest part, long term. Having over 1500 Spot robots deployed with customers gives them a well-established product infrastructure baseline to expand from with the new Atlas.

Taking a step back, let’s consider the position that Boston Dynamics is in when it comes to the humanoid space right now.

The new Atlas appears to be a reasonably mature platform with explicit commercial potential, but it’s not yet clear if this particular version of Atlas is truly commercially viable, in terms of being manufacturable and supportable at scale—it’s Atlas 001, after all. There’s likely a huge amount of work that still needs to be done, but it’s a process that the company has already gone through with Spot. My guess is that Boston Dynamics has some catching up to do with respect to other humanoid companies that are already entering pilot projects.

In terms of capabilities, even though the new Atlas hardware is new, it’s not like Boston Dynamics is starting from scratch, since they’re already transferring skills from hydraulic Atlas onto the new platform. But, we haven’t seen the new Atlas doing any practical tasks yet, so it’s hard to tell how far along that is, and it would be premature to assume that hydraulic Atlas doing all kinds of amazing things in YouTube videos implies that electric Atlas can do similar things safely and reliably in a product context. There’s a gap there, possibly an enormous gap, and we’ll need to see more from the new Atlas to understand where it’s at.

And obviously, there’s a lot of competition in humanoids right now, although I’d like to think that the potential for practical humanoid robots to be useful in society is significant enough that there will be room for lots of different approaches. Boston Dynamics was very early to humanoids in general, but they’re somewhat late to this recent (and rather abrupt) humanoid commercialization push. This may not be a problem, especially if Atlas is targeting applications where its strength and flexibility sets it apart from other robots in the space, and if their depth of experience deploying commercial robotic platforms helps them to scale quickly.

Boston Dynamics

An electric Atlas may indeed have been inevitable, and it’s incredibly exciting to (finally!) see Boston Dynamics take this next step towards a commercial humanoid, which would deliver on more than a decade of ambition stretching back through the DARPA Robotics Challenge to PETMAN. We’ve been promised more manipulation footage soon, and Boston Dynamics expects that Atlas will be in the technology demonstration phase in Hyundai factories as early as next year.

We have a lot more questions, but we have a lot more answers, too: you’ll find a Q&A with Boston Dynamics CEO Robert Playter right here.

Boston Dynamics Retires Its Legendary Humanoid Robot



In a new video posted today, Boston Dynamics is sending off its hydraulic Atlas humanoid robot. “For almost a decade,” the video description reads, “Atlas has sparked our imagination, inspired the next generations of roboticists, and leapt over technical barriers in the field. Now it’s time for our hydraulic Atlas robot to kick back and relax.”


Hydraulic Atlas has certainly earned some relaxation; Boston Dynamics has been absolutely merciless with its humanoid research program. This isn’t a criticism—sometimes being merciless to your hardware is necessary to push the envelope of what’s possible. And as spectators, we just get to enjoy it, and this highlight reel includes unseen footage of Atlas doing things well along with unseen footage of Atlas doing things not so well. Which, let’s be honest, is what we’re all really here for.

There’s so much more to the history of Atlas than this video shows. Atlas traces its history back to a DARPA project called PETMAN (Protection Ensemble Test Mannequin), which we first wrote about in 2009, so long ago that we had to dig up our own article on the Wayback Machine. As contributor Mikell Taylor wrote back then:

PETMAN is designed to test the suits used by soldiers to protect themselves against chemical warfare agents. It has to be capable of moving just like a soldier—walking, running, bending, reaching, army crawling—to test the suit’s durability in a full range of motion. To really simulate humans as accurately as possible, PETMAN will even be able to “sweat”.

Relative to the other humanoid robots out there at the time (the most famous of which, by far, was Honda’s ASIMO), PETMAN’s movement and balance were very, very impressive. Also impressive was the presumably unintentional way in which this PETMAN video synced up with the music video to Stayin’ Alive by the Bee Gees. Anyway, DARPA was suitably impressed by all this impressiveness, and chose Boston Dynamics to build another humanoid robot to be used for the DARPA Robotics Challenge. That robot was unveiled ten years ago.

The DRC featured a [still looking for a collective noun for humanoid robots] of Atlases, and it seemed like Boston Dynamics was hooked on the form factor, because less than a year after the DRC Finals the company announced the next generation of Atlas, which could do some useful things like move boxes around. Every six months or so, Boston Dynamics put out a new Atlas video, with the robot running or jumping or dancing or doing parkour, leveraging its powerful hydraulics to impress us every single time. There was really nothing like hydraulic Atlas in terms of dynamic performance, and you could argue that there still isn’t. This is a robot that will be missed.

A film strip of images showing a series of humanoid robots gradually getting sleeker and more polished: the original rendering of Atlas, followed by four generations of the robot. Boston Dynamics/IEEE Spectrum

Now, if you’re wondering why Boston Dynamics is saying “it’s time for our hydraulic Atlas robot to kick back and relax,” rather than just “our Atlas robot,” and if you’re also wondering why the video description ends with “take a look back at everything we’ve accomplished with the Atlas platform to date,” well, I can’t help you. Some people might attempt to draw some inferences and conclusions from that very specific and deliberate language, but I would certainly not be one of them, because I’m well known for never speculating about anything.

I would, however, point out a few things that have been obvious for a while now. Namely, that:

  • Boston Dynamics has been focusing fairly explicitly on commercialization over the past several years
  • Complex hydraulic robots are not product friendly because (among other things) they tend to leave puddles of hydraulic fluid on the carpet
  • Boston Dynamics has been very successful with Spot as a productized electric platform based on earlier hydraulic research platforms
  • Fully electric commercial humanoids really seem to be where robotics is at right now
There’s nothing at all new in any of this; the only additional piece of information we have is that the hydraulic Atlas is, as of today, retiring. And I’m just going to leave things there.

Video Friday: Robot Dog Can’t Fall



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 17–21 April 2024, KASSEL, GERMANY
AUVSI XPONENTIAL 2024: 22–25 April 2024, SAN DIEGO
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

I think suggesting that robots can’t fall is much less useful than suggesting that robots can fall and then quickly and easily get back up again.

[ Deep Robotics ]

Sanctuary AI says that this video shows Phoenix operating at “human-equivalent speed,” but they don’t specify which human or under which conditions. Though it’s faster than I would be, that’s for sure.

[ Sanctuary AI ]

“Suzume” is an animated film by Makoto Shinkai, in which one of the characters gets turned into a three-legged chair:

Shintaro Inoue from JSK Lab at the University of Tokyo has managed to build a robotic version of that same chair, which is pretty impressive:


[ Github ]

Thanks, Shintaro!

Humanoid robot EVE training for home assistance like putting groceries into the kitchen cabinets.

[ 1X ]

This is the RAM—robotic autonomous mower. It can be dropped anywhere in the world and will wake up with a mission to make tall grass around it shorter. Here is a quick clip of it working on the Presidio in SF.

[ Electric Sheep ]

This year, our robots braved a Finnish winter for the first time. As the snow clears and the days get longer, we’re looking back on how our robots made thousands of deliveries to S Group customers during the colder months.

[ Starship ]

Agility Robotics is doing its best to answer the (very common) question of “Okay, but what can humanoid robots actually do?”


[ Agility Robotics ]

Digit is great and everything, but Cassie will always be one of my favorite robots.

[ CoRIS ]

Adopting omnidirectional Field of View (FoV) cameras in aerial robots vastly improves perception ability, significantly advancing aerial robotics’ capabilities in inspection, reconstruction, and rescue tasks. We propose OmniNxt, a fully open-source aerial robotics platform with omnidirectional perception.

[ OmniNxt ]

The MAkEable framework enhances mobile manipulation in settings designed around humans by streamlining the process of sharing learned skills and experiences among different robots and contexts. Practical tests confirm its efficiency in a range of scenarios involving different robots, in tasks such as object grasping, coordinated bimanual manipulation, and the exchange of skills among humanoid robots.

[ Paper ]

We conducted trials of Ringbot outdoors on a 400 meter track. With a power source of 2300 milliamp-hours and 11.1 Volts, Ringbot managed to cover approximately 3 kilometers in 37 minutes. We commanded its target speed and direction using a remote joystick controller (Steam Deck), and Ringbot experienced five falls during this trial.
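
Those figures invite a quick back-of-the-envelope check. The sketch below assumes the battery was fully discharged over the trial, which the quoted description doesn’t actually confirm:

```python
# Rough energetics check on the Ringbot trial figures quoted above.
battery_capacity_ah = 2.3   # 2,300 milliamp-hours
battery_voltage = 11.1      # volts
distance_km = 3.0           # approximate distance covered
duration_min = 37.0         # trial duration

energy_wh = battery_capacity_ah * battery_voltage      # ~25.5 Wh on board
avg_speed_kmh = distance_km / (duration_min / 60.0)    # ~4.9 km/h average
energy_per_km = energy_wh / distance_km                # ~8.5 Wh/km, if fully drained

print(f"{energy_wh:.1f} Wh, {avg_speed_kmh:.1f} km/h, {energy_per_km:.1f} Wh/km")
```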

[ Paper ]

There is a notable lack of consistency about where exactly Boston Dynamics wants you to think Spot’s eyes are.

[ Boston Dynamics ]

As with every single cooking video, there’s a lot of background prep that’s required for this robot to cook an entire meal, but I would utterly demolish those fries.

[ Dino Robotics ]

Here’s everything you need to know about Wing delivery drones, except for how much human time they actually require and the true cost of making deliveries by drone, because those things aren’t fun to talk about.

[ Wing ]

This CMU Teruko Yata Memorial Lecture is by Agility Robotics’ Jonathan Hurst, on “Human-Centric Robots and How Learning Enables Generality.”

Humans have dreamt of robot helpers forever. What’s new is that this dream is becoming real. New developments in AI, building on foundations of hardware and passive dynamics, enable vastly improved generality. Robots can step out of highly structured environments and become more human-centric: operating in human spaces, interacting with people, and doing some basic human workflows. By connecting a Large Language Model, Digit can convert natural language high-level requests into complex robot instructions, composing the library of skills together, using human context to achieve real work in the human world. All of this is new—and it is never going back: AI will drive a fast-following robot revolution that is going to change the way we live.

[ CMU ]

Pogo Stick Microcopter Bounces off Floors and Walls



We tend to think about hopping robots from the ground up. That is, they start on the ground, and then, by hopping, incorporate an aerial phase into their locomotion. But there’s no reason why aerial robots can’t approach hopping from the other direction, by adding a hopping ground phase to flight. Hopcopter is the first robot that I’ve ever seen give this a try, and it’s remarkably effective, combining a tiny quadrotor with a springy leg to hop hop hop all over the place.


Songnan Bai, Runze Ding, Song Li, and Bingxuan Pu

So why in the air is it worth adding a pogo stick to an otherwise perfectly functional quadrotor? Well, flying is certainly a valuable ability to have, but does take a lot of energy. If you pay close attention to birds (acknowledged experts in the space), they tend to spend a substantial amount of time doing their level best not to fly, often by walking on the ground or jumping around in trees. Not flying most of the time is arguably one of the things that makes birds so successful—it’s that multimodal locomotion capability that has helped them to adapt to so many different environments and situations.

Hopcopter is multimodal as well, although in a slightly more restrictive sense: Its two modes are flying and intermittent flying. But the intermittent flying is very important, because cutting down on that flight phase gives Hopcopter some of the same efficiency benefits that birds experience. By itself, a quadrotor of Hopcopter’s size can stay airborne for about 400 seconds, while Hopcopter can hop continuously for more than 20 minutes. If your objective is to cover as much distance as possible, Hopcopter might not be as effective as a legless quadrotor. But if your objective is instead something like inspection or search and rescue, where you need to spend a fair amount of time not moving very much, hopping could be significantly more effective.
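
Put those endurance numbers side by side and the benefit is easy to quantify; this is a rough ratio that ignores any differences in payload or battery between the two configurations:

```python
# Endurance comparison from the figures quoted above.
flight_endurance_s = 400          # bare quadrotor, continuous flight
hopping_endurance_s = 20 * 60     # Hopcopter, continuous hopping

ratio = hopping_endurance_s / flight_endurance_s
print(f"Hopping lasts about {ratio:.0f}x as long as continuous flight.")  # ~3x
```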

A diagram of the Hopcopter system, and a closeup of the Hopcopter leg. Hopcopter is a small quadcopter (specifically a Crazyflie) attached to a springy pogo-stick leg. Songnan Bai, Runze Ding, Song Li, and Bingxuan Pu

Hopcopter can reposition itself on the fly to hop off of different surfaces. Songnan Bai, Runze Ding, Song Li, and Bingxuan Pu

The actual hopping is mostly passive. Hopcopter’s leg is two rigid pieces connected by rubber bands, with a Crazyflie microcopter stapled to the top. During a hop, the Crazyflie can add directional thrust to keep the hops hopping and alter its direction as well as its height, from 0.6 meters to 1.6 meters. There isn’t a lot of room for extra sensors on Hopcopter, but the addition of some stabilizing fins allows for continuous hopping without any positional feedback.
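
For a sense of the timescales involved, each hop can be treated as simple ballistic flight. This idealized sketch ignores the leg’s compression phase, drag, and any mid-hop thrust, so the numbers are illustrative rather than measurements of the actual robot:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def ballistic_hop(apex_m: float) -> tuple[float, float]:
    """Takeoff speed and airborne time for a purely ballistic hop to a given apex."""
    takeoff_speed = math.sqrt(2 * G * apex_m)  # speed needed at liftoff
    airborne_time = 2 * takeoff_speed / G      # symmetric up-and-down flight
    return takeoff_speed, airborne_time

for apex in (0.6, 1.6):  # Hopcopter's reported hop-height range
    v, t = ballistic_hop(apex)
    print(f"{apex} m apex: ~{v:.1f} m/s at takeoff, ~{t:.2f} s in the air")
```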

Besides vertical hopping, Hopcopter can also position itself in midair to hop off of surfaces at other orientations, allowing it to almost instantaneously change direction, which is a neat trick.

And it can even do midair somersaults, because why not?

Hopcopter’s repertoire of tricks includes somersaults. Songnan Bai, Runze Ding, Song Li, and Bingxuan Pu

The researchers, based at the City University of Hong Kong, say that the Hopcopter technology (namely, the elastic leg) could be easily applied to most other quadcopter platforms, turning them into Hopcopters as well. And if you’re more interested in extra payload than in extra endurance, it’s possible to use hopping in situations where a payload would be too heavy for continuous flight.

The researchers published their work 10 April in Science Robotics.

Marco Hutter Wants to Solve Robotics’ Hard Problems



Last December, the AI Institute announced that it was opening an office in Zurich as a European counterpart to its Boston headquarters and recruited Marco Hutter to helm the office. Hutter also runs the Robotic Systems Lab at ETH Zurich, arguably best known as the origin of the ANYmal quadruped robot (but it also does tons of other cool stuff).

We’re doing our best to keep close tabs on the institute, because it’s one of a vanishingly small number of places that currently exist where roboticists have the kind of long-term resources and vision necessary to make substantial progress on really hard problems that aren’t quite right for either industry or academia. The institute is still scaling up (and the branch in Zurich has only just kicked things off), but we did spot some projects that the Boston folks have been working on, and as you can see from the clips at the top of this page, they’re looking pretty cool.

Meanwhile, we had a chance to check in with Marco Hutter to get a sense of what the Zurich office will be working on and how he’s going to be solving all of the hard problems in robotics. All of them!

How much can you tell us about what you’ll be working on at the AI Institute?

Marco Hutter: If you know the research that I’ve been doing in the past at ETH and with our startups, there’s an overlap on making systems more mobile, making systems more able to interact with the world, making systems in general more capable on the hardware and software side. And that’s what the institute strives for.

The institute describes itself as a research organization that aims to solve the most important and fundamental problems in robotics and AI. What do you think those problems are?

A man wearing a gray jacket and jeans sits in a chair. Marco Hutter is the head of the AI Institute’s new Zurich branch. Swiss Robotics Day

Hutter: There are lots of problems. If you’re looking at robots today, we have to admit that they’re still pretty stupid. The way they move, their capability of understanding their environment, the way they’re able to interact with unstructured environments—I think we’re still lacking a lot of skills on the robotic side to make robots useful in all of the tasks we wish them to do. So we have the ambition of having these robots taking over all these dull, dirty, and dangerous jobs. But if we’re honest, today the biggest impact is really only for the dull part. And I think these dirty and dangerous jobs, where we really need support from robots, that’s still going to take a lot of fundamental work on the robotics and AI side to make enough progress for robots to become useful tools.

What is it about the institute that you think will help robotics make more progress in these areas?

Hutter: I think the institute is one of these unique places where we are trying to bring the benefits of the academic world and the benefits from this corporate world together. In academia, we have all kinds of crazy ideas and we try to develop them in all different directions, but at the same time, we have limited engineering support, and we can only go so far. Making robust and reliable hardware systems is a massive effort, and that kind of engineering is much better done in a corporate lab.

You’ve seen this a little bit with the type of work my lab has been doing in the past. We built simple quadrupeds with a little bit of mobility, but in order to make them robust, we eventually had to spin it out. We had to bring it to the corporate world, because for a research group, a pure academic group, it would have been impossible. But at the same time, you’re losing something, right? Once you go into your corporate world and you’re running a business, you have to be very focused; you can’t be that explorative and free anymore.

So if you bring these two things together through the institute, with long-term planning, enough financial support, and brilliant people both in the U.S. and Europe working together, I think that’s what will hopefully help us make significant progress in the next couple of years.

“We’re very different from a traditional company, where at some point you need to have a product that makes money. Here, it’s really about solving problems and taking the next step.” —Marco Hutter, AI Institute

And what will that actually mean in the context of dynamically mobile robots?

Hutter: If you look at Boston Dynamics’ Atlas doing parkour, or ANYmal doing parkour, these are still demonstrations. You don’t see robots running around in the forests or robots working in mines and doing all kinds of crazy maintenance operations, or in industrial facilities, or construction sites, you name it. We need to not only be able to do this once as a prototype demonstration, but to have all the capabilities that bring that together with environmental perception and understanding to make this athletic intelligence more capable and more adaptable to all kinds of different environments. This is not something we’re going to see revolutionized from today to tomorrow—it will be gradual, steady progress, because I think there’s still a lot of fundamental work that needs to be done.

I feel like the mobility of legged robots has improved a lot over the last five years or so, and a lot of that progress has come from Boston Dynamics and also from your lab. Do you feel the same?

Hutter: There has always been progress; the question is how much you can zoom in or zoom out. I think one thing has changed quite a bit, and that’s the availability of robotic systems to all kinds of different research groups. If you look back a decade, people had to build their own robots, they had to do the control for the robots, they had to work on the perception for the robots, and putting everything together like that makes it extremely fragile and very challenging to make something that works more than once. That has changed, which allows us to make faster progress.

Marc Raibert (founder of the AI Institute) likes to show videos of mountain goats to illustrate what robots should be (or will be?) capable of. Does that kind of thing inspire you as well?

Hutter: If you look at the animal kingdom, there’s so many things you can draw inspiration from. And a lot of this stuff is not only the cognitive side; it’s really about pairing the cognitive side with the mechanical intelligence of things like the simple-seeming hooves of mountain goats. But they’re really not that simple, they’re pretty complex in how they interact with the environment. Having one of these things and not the other won’t allow the animal to move across its challenging environment. It’s the same thing with the robots.

It’s always been like this in robotics, where you push on the hardware side, and your controls become better, so you hit a hardware limitation. So both things have to evolve hand in hand. Otherwise, you have an over-dimensioned hardware system that you can’t use because you don’t have the right controls, or you have very sophisticated controls and your hardware system can’t keep up.

How do you feel about all of the investment into humanoids right now, when quadrupedal robots with arms have been around for quite a while?

Hutter: There’s a lot of ongoing research on quadrupeds with arms, and the nice thing is that these technologies that are developed for mobile systems with arms are the same technologies that are used in humanoids. It’s not different from a research point of view, it’s just a different form factor for the system. I think from an application point of view, the story from all of these companies making humanoids is that our environment has been adapted to humans quite a bit. A lot of tasks are at the height of a human standing, right? A quadruped doesn’t have the height to see things or to manipulate things on a table. It’s really application dependent, and I wouldn’t say that one system is better than the other.

Video Friday: LASSIE On the Moon



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 17–21 April 2024, KASSEL, GERMANY
AUVSI XPONENTIAL 2024: 22–25 April 2024, SAN DIEGO
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

USC, UPenn, Texas A&M, Oregon State, Georgia Tech, Temple University, and NASA Johnson Space Center are teaching dog-like robots to navigate craters of the moon and other challenging planetary surfaces in research funded by NASA.

[ USC ]

AMBIDEX is a revolutionary robot that is fast, lightweight, and capable of human-like manipulation. We have added a sensor head, a torso, and a waist to greatly expand the range of movement. Compared to the previous arm-centered version, the overall impression and balance have completely changed.

[ Naver Labs ]

It still needs a lot of work, but the six-armed pollinator, Stickbug, can autonomously navigate and pollinate flowers in a greenhouse now.

I think “needs a lot of work” really means “needs a couple more arms.”

[ Paper ]

Experience the future of robotics as UBTECH’s humanoid robot integrates with Baidu’s ERNIE through AppBuilder! Witness robots [that] understand language and autonomously perform tasks like folding clothes and object sorting.

[ UBTECH ]

I know the fins on this robot are for walking underwater rather than on land, but watching it move, I feel like it’s destined to evolve into something a little more terrestrial.

[ Paper ] via [ HERO Lab ]

iRobot has a new Roomba that vacuums and mops—and at $275, it’s a pretty good deal.

Also, if you are a robot vacuum owner, please, please remember to clean the poor thing out from time to time. Here’s how to do it with a Roomba:

[ iRobot ]

The video demonstrates the wave-basin testing of a 43 kg (95 lb) amphibious cycloidal propeller unmanned underwater vehicle (Cyclo-UUV) developed at the Advanced Vertical Flight Laboratory, Texas A&M University. The use of cyclo-propellers allows for 360 degree thrust vectoring for more robust dynamic controllability compared to UUVs with conventional screw propellers.

[ AVFL ]

Sony is still upgrading Aibo with new features, like the ability to listen to your terrible music and dance along.

[ Aibo ]

Operating robots precisely and at high speeds has been a long-standing goal of robotics research. To enable precise and safe dynamic motions, we introduce a four degree-of-freedom (DoF) tendon-driven robot arm. Tendons allow placing the actuation at the base to reduce the robot’s inertia, which we show significantly reduces peak collision forces compared to conventional motor-driven systems. Pairing our robot with pneumatic muscles allows generating high forces and highly accelerated motions, while benefiting from impact resilience through passive compliance.

[ Max Planck Institute ]

Rovers on Mars have previously been caught in loose soils, and turning the wheels dug them deeper, just like a car stuck in sand. To avoid this, Rosalind Franklin has a unique wheel-walking locomotion mode to overcome difficult terrain, as well as autonomous navigation software.

[ ESA ]

Cassie is able to walk on sand, gravel, and rocks inside the Robot Playground at the University of Michigan.

Aww, they stopped before they got to the fun rocks.

[ Paper ] via [ Michigan Robotics ]

Not bad for 2016, right?

[ Namiki Lab ]

MOMO has learned the Bam Yang Gang dance moves with its hand dexterity. :) By analyzing 2D dance videos, we extract detailed hand skeleton data, allowing us to recreate the moves in 3D using a hand model. With this information, MOMO replicates the dance motions with its arm and hand joints.

[ RILAB ] via [ KIMLAB ]

This UPenn GRASP SFI Seminar is from Eric Jang at 1X Technologies, on “Data Engines for Humanoid Robots.”

1X’s mission is to create an abundant supply of physical labor through androids that work alongside humans. I will share some of the progress 1X has been making towards general-purpose mobile manipulation. We have scaled up the number of tasks our androids can do by combining an end-to-end learning strategy with a no-code system to add new robotic capabilities. Our Android Operations team trains their own models on the data they gather themselves, producing an extremely high-quality “farm-to-table” dataset that can be used to learn extremely capable behaviors. I’ll also share an early preview of the progress we’ve been making towards a generalist “World Model” for humanoid robots.

[ UPenn ]

This Microsoft Future Leaders in Robotics and AI Seminar is from Chahat Deep Singh at the University of Maryland, on “Minimal Perception: Enabling Autonomy in Palm-Sized Robots.”

The solution to robot autonomy lies at the intersection of AI, computer vision, computational imaging, and robotics—resulting in minimal robots. This talk explores the challenge of developing a minimal perception framework for tiny robots (less than 6 inches) used in field operations such as space inspections in confined spaces and robot pollination. Furthermore, we will delve into the realm of selective perception, embodied AI, and the future of robot autonomy in the palm of your hands.

[ UMD ]

Video Friday: Human to Humanoid



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

HRI 2024: 11–15 March 2024, BOULDER, COLO.
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

We present Human to Humanoid (H2O), a reinforcement learning (RL) based framework that enables real-time, whole-body teleoperation of a full-sized humanoid robot with only an RGB camera. We successfully achieve teleoperation of dynamic, whole-body motions in real-world scenarios, including walking, back jumping, kicking, turning, waving, pushing, boxing, etc. To the best of our knowledge, this is the first demonstration to achieve learning-based, real-time, whole-body humanoid teleoperation.

[ CMU ]

Legged robots have the potential to traverse complex terrain and access confined spaces beyond the reach of traditional platforms thanks to their ability to carefully select footholds and flexibly adapt their body posture while walking. However, robust deployment in real-world applications is still an open challenge. In this paper, we present a method for legged locomotion control using reinforcement learning and 3D volumetric representations to enable robust and versatile locomotion in confined and unstructured environments.

[ Takahiro Miki ]

Sure, 3.3 meters per second is fast for a humanoid, but I’m more impressed by the spinning around while walking downstairs.

[ Unitree ]

Improving the safety of collaborative manipulators necessitates the reduction of inertia in the moving part. We introduce a novel approach in the form of a passive, 3D wire aligner, serving as a lightweight and low-friction power transmission mechanism, thus achieving the desired low inertia in the manipulator’s operation.

[ SAQIEL ]

Thanks, Temma!

Robot Era just launched Humanoid-Gym, an open-source reinforcement learning framework for bipedal humanoids. As you can see from the video, RL algorithms have given the robot, called Xiao Xing, or XBot, the ability to climb up and down haphazardly stacked boxes with relative stability and ease.

[ Robot Era ]

“Impact-Aware Bimanual Catching of Large-Momentum Objects.” Need I say more?

[ SLMC ]

More than 80% of stroke survivors experience walking difficulty, significantly impacting their daily lives, independence, and overall quality of life. Now, new research from the University of Massachusetts Amherst pushes forward the bounds of stroke recovery with a unique robotic hip exoskeleton, designed as a training tool to improve walking function. This invites the possibility of new therapies that are more accessible and easier to translate from practice to daily life, compared to current rehabilitation methods.

[ UMass Amherst ]

Thanks, Julia!

The manipulation here is pretty impressive, but it’s hard to know how impressive without also knowing how much the video was sped up.

[ Somatic ]

DJI drones work to make the world a better place and one of the ways that we do this is through conservation work. We partnered with Halo Robotics and the OFI Orangutan Foundation International to showcase just how these drones can make an impact.

[ DJI ]

The aim of the test is to demonstrate the removal and replacement of satellite modules into a 27U CubeSat format using augmented reality control of a robot. In this use case, the “client” satellite is being upgraded and refueled using modular componentry. The robot will then remove the failed computer module and place it in a fixture. It will then do the same with the propellant tank. The robot will then place these correctly back into the satellite.

[ Extend Robotics ]

This video features some of the highlights and favorite moments from the CYBATHLON Challenges 2024 that took place on 2 February, showing so many diverse types of assistive technology taking on discipline tasks and displaying pilots’ tenacity and determination. The Challenges saw new teams, new tasks, and new formats for many of the CYBATHLON disciplines.

[ Cybathlon ]

It’s been a long road to electrically powered robots.

[ ABB ]

Small drones for catastrophic wildfires (ones covering more than [40,470 hectares]) are like bringing a flashlight to light up a football field. This short video describes the major uses for drones of all sizes and why and when they are used, or why not.

[ CRASAR ]

It probably will not surprise you that there are a lot of robots involved in building Rivian trucks and vans.

[ Kawasaki Robotics ]

DARPA’s Learning Introspective Control (LINC) program is developing machine learning methods that show promise in making that scenario closer to reality. LINC aims to fundamentally improve the safety of mechanical systems—specifically in ground vehicles, ships, drone swarms, and robotics—using various methods that require minimal computing power. The result is an AI-powered controller the size of a cell phone.

[ DARPA ]

Anyware Robotics’ Pixmo Takes Unique Approach to Trailer Unloading



You’ve seen this before: a truck-unloading robot that’s made up of a mobile base with an arm on it that drives up into the back of a trailer and then uses suction to grab stacked boxes and put them onto a conveyor belt. We’ve written about a couple of the companies doing this, and there are even more out there. It’s easy to understand why—trailer unloading involves a fairly structured and controlled environment with a very repetitive task, it’s a hard job that sucks for humans, and there’s an enormous amount of demand.

While it’s likely true that there’s enough room for a whole bunch of different robotics companies in the trailer-unloading space, a given customer is probably going to only pick one, and they’re going to pick the one that offers the right combination of safety, capability, and cost. Anyware Robotics thinks they have that mix, aided by a box-handling solution that is both very clever and so obvious that I’m wondering why I didn’t think of it myself.


The overall design of Pixmo itself is fairly standard as far as trailer-unloading robots go, but some of the details are interesting. We’re told that Pixmo is the only trailer-unloading system that integrates a heavy-payload collaborative arm, actually a fairly new commercial arm from Fanuc. This means that Anyware Robotics doesn’t have to faff about with their own hardware, and also that their robot is arguably safer, being ISO-certified safe to work directly with people. The base is custom, but Anyware is contracting it out to a big robotics original equipment manufacturer.

“We’ve put a lot of effort into making sure that most of the components of our robot are off-the-shelf,” cofounder and CEO Thomas Tang tells us. “There are already so many mature and cost-efficient suppliers that we want to offload the supply chain, the certification, the reliability testing onto someone else’s shoulders.” And while there is a selection of automated mobile robots (AMRs) out there that seem like they could get the job done, the problem is that they’re all designed for flat surfaces, and getting into and out of the back of a trailer often involves a short, steep ramp, hence the need for a custom base. Even with the custom base, Tang says that Pixmo is very cost-efficient, and the company predicts that it will be approximately one-third the cost of other solutions, with a payback period of about 24 months.

But here’s the really clever bit:

Anyware Robotics Pixmo Trailer Unloading

That conveyor system in front of the boxes is an add-on that’s used in support of Pixmo. There are two benefits here: First, having the conveyor add-on aligned with the base of a box minimizes the amount of lifting that Pixmo has to do. This allows Pixmo to handle boxes of up to 65 pounds with a lift-and-slide technique, putting it at the top end of a trailer-unloading robot payload. And the second benefit is that the add-on system decreases the distance that Pixmo has to move the box to just about as small as it can possibly be, eliminating the need for the arm to rotate around to place a box on a conveyor next to or behind itself. Lowering this cycle time means that Pixmo can achieve a throughput of up to 1,000 boxes per hour—about one box every 4 seconds, which the Internet suggests is quite fast, even for a professional human. Anyware Robotics is introducing this add-on system at the MODEX manufacturing and supply-chain show next week, and the company has a patent pending on the idea.
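
The throughput claim is easy to translate into a per-box cycle time (the “every 4 seconds” figure above is this value rounded up, and the shift total assumes the rate could actually be sustained):

```python
# Converting Pixmo's claimed throughput into a per-box cycle time.
boxes_per_hour = 1000
seconds_per_box = 3600 / boxes_per_hour   # 3.6 seconds per box
boxes_per_shift = boxes_per_hour * 8      # 8,000 boxes, if the rate were sustained

print(f"{seconds_per_box} s per box, {boxes_per_shift} boxes per 8-hour shift")
```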

This seems like such a simple, useful idea that I asked Tang why they were the first ones to come up with it. “In robotics startups, there tends to be a legacy mind-set issue,” Tang told me. “When people have been working on robot arms for so many years, we just think about how to use robot arms to solve everything. That’s maybe the reason why other companies didn’t come up with this solution.” Tang says that Anyware started with much more complicated add-on designs before finding this solution. “Usually it’s the most simple solution that has the most trial and error behind it.”

Anyware Robotics is focused on trailer unloading for now, but Pixmo could easily be adapted for palletizing and depalletizing or somewhat less easily for other warehouse tasks like order picking or machine tending. But why stop there? A mobile manipulator can (theoretically) do it all (almost), and that’s exactly what Tang wants:

In our long-term vision, we believe that the future will have two different types of general-purpose robots. In one direction is the humanoid form, which is a really flexible solution for jobs where you want to replace a human. But there are so many jobs that are just not reasonable for a human body to do. So we believe there should be another form of general-purpose robot, which is designed for industrial tasks. Our design philosophy is in that direction—it’s also general purpose, but for industrial applications.

At just over one year old, Anyware has already managed to complete a pilot program (and convert it to a purchase order). They’re currently in the middle of several other pilot programs with leading third-party logistics providers, and they expect to spend the next several months focusing on productization with the goal of releasing the first commercial version of Pixmo by July of this year.

Video Friday: $2.6 Billion



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

HRI 2024: 11–15 March 2024, BOULDER, COLORADO, USA
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

Figure has raised a US $675 million Series B, valuing the company at $2.6 billion.

[ Figure ]

Meanwhile, here’s how things are going at Agility Robotics, whose last raise was a $150 million Series B in April of 2022.

[ Agility Robotics ]

Also meanwhile, here’s how things are going at Sanctuary AI, whose last raise was a $58.5 million Series A in March of 2022.

[ Sanctuary AI ]

The time has come for humanoid robots to enter industrial production lines and learn how to assist humans by undertaking repetitive, tedious, and potentially dangerous tasks for them. Recently, UBTECH’s humanoid robot Walker S was introduced into the assembly line of NIO’s advanced vehicle-manufacturing center, as an “intern” assisting in the car production. Walker S is the first bipedal humanoid robot to complete a specific workstation’s tasks on a mobile EV production line.

[ UBTECH ]

Henry Evans keeps working hard to make robots better, this time with the assistance of researchers from Carnegie Mellon University.

Henry said he preferred using head-worn assistive teleoperation (HAT) with a robot for certain tasks rather than depending on a caregiver. “Definitely scratching itches,” he said. “I would be happy to have it stand next to me all day, ready to do that or hold a towel to my mouth. Also, feeding me soft foods, operating the blinds, and doing odd jobs around the room.”
One innovation in particular, software called Driver Assistance that helps align the robot’s gripper with an object the user wants to pick up, was “awesome,” Henry said. Driver Assistance leaves the user in control while it makes the fine adjustments and corrections that can make controlling a robot both tedious and demanding. “That’s better than anything I have tried for grasping,” Henry said, adding that he would like to see Driver Assistance used for every interface that controls Stretch robots.
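
The shared-control behavior Henry describes, where the user stays in command while the software makes fine alignment corrections, is commonly implemented by blending the operator’s velocity command with a capped autonomous correction toward the detected object. Here is a minimal sketch of that idea; all names are hypothetical, and this is not the actual Driver Assistance code:

```python
import numpy as np

def blended_command(user_vel, gripper_pos, target_pos, gain=0.5, max_assist=0.05):
    """Blend an operator's velocity command with a small autonomous correction.

    Hypothetical illustration of shared control, not the actual HAT software.
    The correction nudges the gripper toward the target but is capped so the
    operator always dominates the motion.
    """
    error = np.asarray(target_pos, dtype=float) - np.asarray(gripper_pos, dtype=float)
    assist = gain * error
    assist_norm = np.linalg.norm(assist)
    if assist_norm > max_assist:
        assist *= max_assist / assist_norm  # cap the assistance magnitude
    return np.asarray(user_vel, dtype=float) + assist

# Example: the user commands a slow forward motion; the helper adds a gentle
# correction toward the object's position.
print(blended_command([0.02, 0.0, 0.0], [0.5, 0.1, 0.3], [0.6, 0.15, 0.3]))
```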

[ HAT2 ] via [ CMU ]

Watch this video for the three glorious seconds at the end.

[ Tech United ]

Get ready to rip, shear, mow, and tear, as DOOM is back! This April, we’re making the legendary game playable on our robotic mowers as a tribute to 30 years of mowing down demons.

Oh, it’s HOOSKvarna, not HUSKvarna.

[ Husqvarna ] via [ Engadget ]

Latest developments demonstrated on the Ameca Desktop platform. Having fun with vision- and voice-cloning capabilities.

[ Engineered Arts ]

Could an artificial-intelligence system learn language from a child? New York University researchers supported by the National Science Foundation, using first-person video from a head-mounted camera, trained AI models to learn language through the eyes and ears of a child.

[ NYU ]

The world’s leaders in manufacturing, natural resources, power, and utilities are using our autonomous robots to gather higher-quality data, in greater quantities, than ever before. Thousands of Spots have been deployed around the world—more than any other walking robot—to tackle this challenge. This release helps maintenance teams tap into the power of AI with new software capabilities and Spot enhancements.

[ Boston Dynamics ]

Modular self-reconfigurable robotic systems are more adaptive than conventional systems. This article proposes a novel free-form and truss-structured modular self-reconfigurable robot called FreeSN, containing node and strut modules. This article presents a novel configuration identification system for FreeSN, including connection point magnetic localization, module identification, module orientation fusion, and system-configuration fusion.

[ Freeform Robotics ]

The OOS-SIM (On-Orbit Servicing Simulator) is a simulator for on-orbit servicing tasks such as repair, maintenance, and assembly that have to be carried out on satellites orbiting Earth. It simulates the operational conditions in orbit, such as weightlessness and harsh illumination.

[ DLR ]

The next CYBATHLON competition, taking place in 2024, breaks down barriers between the public, people with disabilities, researchers, and technology developers. From 25 to 27 October 2024, the CYBATHLON will be held in a global format, in the Arena Schluefweg in Kloten near Zurich and in local hubs all around the world.

[ CYBATHLON ]

George’s story is a testament to the incredible journey that unfolds when passion, opportunity, and community converge. His journey from drone enthusiast to someone actively making a difference, not only in his local community but also globally, serves as a beacon of hope for all who dare to dream and pursue their passions.

[ WeRobotics ]

In case you’d forgotten, Amazon has a lot of robots.

[ Amazon Robotics ]

ABB’s fifty-year story of robotic innovation began in 1974 with the sale of the world’s first commercial all-electric robot, the IRB 6. Björn Weichbrodt was a key figure in the development of the IRB 6.

[ ABB ]

Robotics Debate of the Ingenuity Labs Robotics and AI Symposium (RAIS2023) from October 12, 2023: Is robotics helping or hindering our progress on UN Sustainable Development Goals?

[ Ingenuity Labs ]

Figure Raises $675M for Its Humanoid Robot Development



Today, Figure is announcing an astonishing US $675 million Series B raise, which values the company at an even more astonishing $2.6 billion. Figure is one of the companies working toward a multipurpose or general-purpose (depending on whom you ask) bipedal or humanoid (depending on whom you ask) robot. The astonishing thing about this valuation is that Figure’s robot is still very much in the development phase—although they’re making rapid progress, which they demonstrate in a new video posted this week.


This round of funding comes from Microsoft, OpenAI Startup Fund, Nvidia, Jeff Bezos (through Bezos Expeditions), Parkway Venture Capital, Intel Capital, Align Ventures, and ARK Invest. Figure says that they’re going to use this new capital “for scaling up AI training, robot manufacturing, expanding engineering head count, and advancing commercial deployment efforts.” In addition, Figure and OpenAI will be collaborating on the development of “next-generation AI models for humanoid robots” which will “help accelerate Figure’s commercial timeline by enhancing the capabilities of humanoid robots to process and reason from language.”

As far as that commercial timeline goes, here’s the most recent update:

Figure

And to understand everything that’s going on here, we sent a whole bunch of questions to Jenna Reher, senior robotics/AI engineer at Figure.

What does “fully autonomous” mean, exactly?

Jenna Reher: In this case, we simply put the robot on the ground and hit go on the task with no other user input. What you see is using a learned vision model for bin detection that allows us to localize the robot relative to the target bin and get the bin pose. The robot can then navigate itself to within reach of the bin, determine grasp points based on the bin pose, and detect grasp success through the measured forces on the hands. Once the robot turns and sees the conveyor, the rest of the task rolls out in a similar manner. By doing things in this way we can move the bins and conveyor around in the test space or start the robot from a different position and still complete the task successfully.
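
For readers who want to make that pipeline concrete, here’s a minimal sketch of the perceive-navigate-grasp-verify loop Reher describes. Every interface, number, and threshold below is hypothetical; this is the shape of the logic, not Figure’s actual stack.

```python
import numpy as np

# Hypothetical threshold; Figure's actual values aren't public.
GRASP_FORCE_THRESHOLD_N = 5.0  # per-hand force indicating a firm hold

def grasp_succeeded(hand_forces_n):
    """Infer grasp success from measured hand forces alone, as
    described above: proprioception, not vision."""
    return min(hand_forces_n) >= GRASP_FORCE_THRESHOLD_N

def next_action(robot_xy, bin_xy, reach_m=0.6):
    """Navigate until the localized bin is within reach, then grasp.
    The bin pose would come from the learned vision model."""
    distance = np.linalg.norm(np.asarray(bin_xy) - np.asarray(robot_xy))
    return "grasp" if distance <= reach_m else "navigate"

# A robot 2 meters from the bin keeps walking; firm forces mean success.
print(next_action((0.0, 0.0), (2.0, 0.0)))  # -> navigate
print(grasp_succeeded([7.2, 6.8]))          # -> True
```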

How many takes did it take to get this take?

Reher: We’ve been running this use case consistently for some time now as part of our work in the lab, so we didn’t really have to change much for the filming here. We did two or three practice runs in the morning and then three filming takes. All of the takes were successful, so the extras were to make sure we got the cleanest one to show.

What’s back in the Advanced Actuator Lab?

Reher: We have an awesome team of folks working on some exciting custom actuator designs for our future robots, as well as supporting and characterizing the actuators that went into our current robots.

That’s a very specific number for “speed vs. human.” Which human did you measure the robot’s speed against?

Reher: We timed Brett [Adcock, founder of Figure] and a few poor engineers doing the task and took the average to get a rough baseline. If you are observant, that seemingly overspecific number is just saying we’re at 1/6 human speed. The main point that we’re trying to make here is that we are aware we are currently below human speed, and it’s an important metric to track as we improve.
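
In other words, the baseline is just an average of stopwatch times. With invented numbers (the actual timings weren’t shared), the arithmetic looks like this:

```python
# Invented example timings; Figure hasn't published the real ones.
human_times_s = [28.0, 31.5, 30.2, 29.8]   # Brett and the engineers
robot_time_s = 180.0                        # one robot run of the task

human_baseline_s = sum(human_times_s) / len(human_times_s)  # ~29.9 s
speed_vs_human = human_baseline_s / robot_time_s
print(round(speed_vs_human, 3))  # ~0.166, i.e., about 1/6 human speed
```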

What’s the tether for?

Reher: For this task we currently process the camera data off-robot while all of the behavior planning and control happens on board in the computer that’s in the torso. Our robots should be fully tetherless in the near future as we finish packaging all of that on board. We’ve been developing behaviors quickly in the lab here at Figure in parallel to all of the other systems engineering and integration efforts happening, so hopefully folks notice all of these subtle parallel threads converging as we try to release regular updates.

How the heck do you keep your robotics lab so clean?

Reher: Everything we’ve filmed so far is in our large robot test lab, so it’s a lot easier to keep the area clean when people’s desks aren’t intruding in the space. Definitely no guarantees on that level of cleanliness if the camera were pointed in the other direction!

Is the robot in the background doing okay?

Reher: Yes! The other robot was patiently standing there in the background, waiting for the filming to finish up so that our manipulation team could get back to training it to do more manipulation tasks. We hope we can share some more developments with that robot as the main star in the near future.

What would happen if I put a single bowling ball into that tote?

Reher: A bowling ball is particularly menacing to this task primarily due to the moving mass, in addition to the impact if you are throwing it in. The robot would in all likelihood end up dropping the tote, stay standing, and abort the task. With what you see here, we assume that the mass of the tote is known a priori so that our whole-body controller can compensate for the external forces while tracking the manipulation task. Reacting to and estimating larger unknown disturbances such as this is a challenging problem, but we’re definitely working on it.
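
The “known a priori” part is the key simplification: if the payload mass is known, its gravity wrench can be mapped through the arm Jacobian into a joint-torque feedforward. Here’s a toy numpy version of that idea, illustrative only; Figure’s whole-body controller is far more involved.

```python
import numpy as np

G = 9.81  # m/s^2

def payload_feedforward(jacobian, payload_mass_kg):
    """Joint torques that cancel gravity on a payload of known mass:
    tau = J^T f, with f the payload's weight vector at the grasp frame.
    jacobian: 3 x n positional Jacobian (toy example below)."""
    f_ext = np.array([0.0, 0.0, -payload_mass_kg * G])
    return jacobian.T @ f_ext

# Toy 3-joint Jacobian; a known 5 kg tote becomes a torque offset the
# controller adds while tracking the manipulation task. An unknown,
# shifting mass (the bowling ball) breaks this assumption.
J = np.array([[0.0, 0.3, 0.1],
              [0.5, 0.0, 0.0],
              [0.2, 0.4, 0.3]])
print(payload_feedforward(J, 5.0))
```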

Tell me more about that very Zen arm and hand pose that the robot adopts after putting the tote on the conveyor.

Reher: It does look kind of Zen! If you rewatch our coffee video, you’ll notice the same pose after the robot gets things brewing. This is a reset pose that our controller will go into between manipulation tasks while the robot is awaiting commands to execute either an engineered behavior or a learned policy.

Are the fingers less fragile than they look?

Reher: They are more robust than they look, but not impervious to damage by any means. The design is pretty modular, which is great, meaning that if we damage one or two fingers, there is a small number of parts to swap to get everything back up and running. The current fingers won’t necessarily survive a direct impact from a bad fall, but can pick up totes and do manipulation tasks all day without issues.

Is the Figure logo footsteps?

Reher: One of the reasons I really like the Figure logo is that it has a bunch of different interpretations depending on how you look at it. In some cases it’s just an F that looks like a footstep plan rollout, while some of the logo animations we have look like active stepping. One other possible interpretation could be an occupancy grid.

Video Friday: Pedipulate



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

HRI 2024: 11–15 March 2024, BOULDER, COLO.
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

Legged robots have the potential to become vital in maintenance, home support, and exploration scenarios. In order to interact with and manipulate their environments, most legged robots are equipped with a dedicated robot arm, which means additional mass and mechanical complexity compared to standard legged robots. In this work, we explore pedipulation—using the legs of a legged robot for manipulation.

This work, by Philip Arm, Mayank Mittal, Hendrik Kolvenbach, and Marco Hutter from ETH Zurich’s Robotic Systems Lab, will be presented at the IEEE International Conference on Robotics and Automation (ICRA 2024) in May, in Japan (see events calendar above).

[ Pedipulate ]

I learned a new word today: “stigmergy.” Stigmergy is a kind of group coordination that’s based on environmental modification. Like, when insects leave pheromone trails, they’re not directly sending messages to other individuals. But as a group, ants are able to manifest surprisingly complex coordinated behaviors. Cool, right? Researchers at IRIDIA are exploring the possibilities for robots using stigmergy with a cool “artificial pheromone” system using a UV-sensitive surface.

“Automatic Design of Stigmergy-Based Behaviors for Robot Swarms,” by Muhammad Salman, David Garzón Ramos, and Mauro Birattari, is published in the journal Communications Engineering.

[ Nature ] via [ IRIDIA ]

Thanks, David!
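
The artificial-pheromone idea is easy to play with in simulation. Below is a toy stigmergy loop that assumes nothing about IRIDIA’s actual system: agents write to a shared field (deposits), the field decays (like the UV marks fading), and agents steer by reading the field rather than by messaging one another.

```python
import numpy as np

rng = np.random.default_rng(0)

SIZE, EVAPORATION, DEPOSIT = 32, 0.02, 1.0   # assumed parameters
pheromone = np.zeros((SIZE, SIZE))           # the shared environment
agents = rng.integers(0, SIZE, size=(20, 2)) # 20 agents on a grid

def step():
    """One tick: agents deposit where they stand, the field decays,
    and each agent moves to the richest neighboring cell. No agent
    ever talks to another directly -- that's stigmergy."""
    for a in agents:
        pheromone[a[0], a[1]] += DEPOSIT
    pheromone *= (1.0 - EVAPORATION)
    for a in agents:
        x0, x1 = max(a[0] - 1, 0), min(a[0] + 2, SIZE)
        y0, y1 = max(a[1] - 1, 0), min(a[1] + 2, SIZE)
        patch = pheromone[x0:x1, y0:y1]
        dx, dy = np.unravel_index(np.argmax(patch), patch.shape)
        a[0], a[1] = x0 + dx, y0 + dy

for _ in range(100):
    step()
print(pheromone.max())  # trails build up where agents have converged
```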

Filmed in July 2017, this video shows Atlas walking through a “hatch” on a pitching surface. This skill uses autonomous behaviors, with the robot not knowing about the rocking world. Robot built by Boston Dynamics for the DARPA Robotics Challenge in 2013. Software by IHMC Robotics.

[ IHMC ]

That IHMC video reminded me of the SAFFiR program for Shipboard Autonomous Firefighting Robots, which is responsible for a bunch of really cool research in partnership with the U.S. Naval Research Laboratory. NRL did some interesting stuff with Nexi robots from MIT and made their own videos. I think that effort didn’t get nearly enough credit for communicating important robotics research while being very entertaining.

[ NRL ]

I want more robot videos with this energy.

[ MIT CSAIL ]

Large industrial-asset operators increasingly use robotics to automate hazardous work at their facilities. This has led to soaring demand for autonomous inspection solutions like ANYmal. Series production by our partner Zollner enables ANYbotics to supply our customers with the required quantities of robots.

[ ANYbotics ]

This week is Grain Bin Safety Week, and Grain Weevil is here to help.

[ Grain Weevil ]

Oof, this is some heavy, heavy deep-time stuff.

[ Onkalo ]

And now, this.

[ RozenZebet ]

Hawkeye is a real-time multimodal conversation-and-interaction agent for Boston Dynamics’ mobile robot Spot. Leveraging OpenAI’s experimental GPT-4 Turbo and Vision AI models, Hawkeye aims to empower everyone, from seniors to health care professionals, to form new and unique interactions with the world around them.
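
Wiring a vision-language model to a robot camera is the accessible part of a system like this. Below is a rough sketch of the kind of call such an agent could make with OpenAI’s Python client; Hawkeye’s actual prompting and Spot integration aren’t public, so treat every detail as an assumption.

```python
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_scene(jpeg_bytes, question="What do you see ahead?"):
    """Send one robot camera frame plus a user question to a vision
    model -- roughly the loop an agent like Hawkeye could run.
    (This is a sketch, not Hawkeye's actual integration.)"""
    image_b64 = base64.b64encode(jpeg_bytes).decode()
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# The reply could then be spoken aloud or turned into a Spot command.
```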

That moment at 1:07 is so relatable.

[ Hawkeye ]

Wing would really prefer that if you find one of their drones on the ground, you don’t run off with it.

[ Wing ]

The rover Artemis, developed at the DFKI Robotics Innovation Center, has been equipped with a penetrometer that measures the soil’s penetration resistance to obtain precise information about soil strength. The video showcases an initial test run with the device mounted on the robot. During this test, the robot was remotely controlled, and the maximum penetration depth was limited to 15 millimeters.

[ DFKI ]

To efficiently achieve complex humanoid loco-manipulation tasks in industrial contexts, we propose a combined vision-based tracker-localization interplay integrated as part of a task-space whole-body-optimization control. Our approach allows humanoid robots, targeted for industrial manufacturing, to manipulate and assemble large-scale objects while walking.

[ Paper ]

We developed a novel multibody robot (called the Two-Body Bot) consisting of two small-footprint mobile bases connected by a four-bar linkage where handlebars are mounted. Each base measures only 29.2 centimeters wide, making the robot likely the slimmest ever developed for mobile postural assistance.

[ MIT ]

Lex Fridman interviews Marc Raibert.

[ Lex Fridman ]

Video Friday: Acrobot Error



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Cybathlon Challenges: 2 February 2024, ZURICH
HRI 2024: 11–15 March 2024, BOULDER, COLO.
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN

Enjoy today’s videos!

Just like a real human, Acrobot will sometimes kick you in the face.

[ Acrobotics ]

Thanks, Elizabeth!

You had me at “wormlike, limbless robots.”

[ GitHub ] via [ Georgia Tech ]

Filmed in July 2017, this video shows us using Atlas to put out a “fire” on our loading dock. This uses a combination of teleoperation and autonomous behaviors through a single, remote computer. Robot built by Boston Dynamics for the DARPA Robotics Challenge in 2013. Software by IHMC Robotics.

I would say that in the middle of a rainstorm is probably the best time to start a fire that you expect to be extinguished by a robot.

[ IHMC ]

We’re hard at work, but Atlas still has time for a dance break.

[ Boston Dynamics ]

This is pretty cool: BruBotics is testing its self-healing robotics gripper technology on commercial grippers from Festo.

[ Paper ] via [ BruBotics ]

Thanks, Bram!

You should read our in-depth article on Stretch 3; if you haven’t yet, consider this just a teaser.

[ Hello Robot ]

Inspired by caregiving experts, we proposed a bimanual interactive robotic dressing-assistance scheme, which is unprecedented in previous research. In the scheme, an interactive robot joins hands with the human, supporting and guiding them through the dressing process, while the dressing robot performs the dressing task. This work represents a paradigm shift in thinking about dressing assistance, from one-robot-to-one-arm to two-robots-to-one-arm.

[ Project ]

Thanks, Jihong!

Tony Punnoose Valayil from the Bulgarian Academy of Sciences Institute of Robotics wrote in to share some very low-cost hand-rehabilitation robots for home use.

In this video, we present a robot-assisted rehabilitation of the wrist joint which can aid in restoring the strength that has been lost across the upper limb due to stroke. This robot is very cost-effective and can be used for home rehabilitation.

In this video, we present an exoskeleton robot which can be used at home for rehabilitating the index and middle fingers of stroke-affected patients. This robot is built at a cost of 50 euros, for patients who cannot afford more expensive treatment.

[ BAS ]

Some very impressive work here from the Norwegian University of Science and Technology (NTNU), showing a drone tracking its position using radar and lidar-based odometry in some nightmare (for robots) environments, including a long tunnel that looks the same everywhere and a hallway full of smoke.

[ Paper ] via [ GitHub ]

I’m sorry, but people should really know better than to make videos like this for social robot crowdfunding by now.

It’s on Kickstarter for about $300, and the fact that it’s been funded so quickly tells me that people have already forgotten about the social robotpocalypse.

[ Kickstarter ]

Introducing Orbit, your portal for managing asset-intensive facilities through real-time and predictive intelligence. Orbit brings a whole new suite of fleet management capabilities and will unify your ecosystem of Boston Dynamics robots, starting with Spot.

[ Boston Dynamics ]

Stretch 3 Brings Us Closer to Realistic Home Robots



A lot has happened in robotics over the last year. Everyone is wondering how AI will transform robotics, and everyone else is wondering whether humanoids are going to blow it or not, and the rest of us are busy trying not to get completely run over as things shake out however they’re going to shake out.

Meanwhile, over at Hello Robot, they’ve been focused on making their Stretch robot do useful things while also being affordable and reliable and affordable and expandable and affordable and community-friendly and affordable. Which are some really hard and important problems that can sometimes get overwhelmed by flashier things.

Today, Hello Robot is announcing Stretch 3, which provides a suite of upgrades to what they (quite accurately) call “the world’s only lightweight, capable, developer-friendly mobile manipulator.” And impressively, they’ve managed to do it without forgetting about that whole “affordable” part.


Hello Robot

Stretch 3 looks about the same as the previous versions, but there are important upgrades that are worth highlighting. The most impactful: Stretch 3 now comes with the dexterous wrist kit that used to be an add-on, and it now also includes an Intel Realsense D405 camera mounted right behind the gripper, which is a huge help for both autonomy and remote teleoperation. The remote-teleoperation feature shipping with Stretch 3 is based on research out of Maya Cakmak’s lab at the University of Washington, in Seattle. This is an example of turning innovation from the community of Stretch users into product features, a product-development approach that seems to be working well for Hello Robot.

“We’ve really been learning from our community,” says Hello Robot cofounder and CEO Aaron Edsinger. “In the past year, we’ve seen a real uptick in publications, and it feels like we’re getting to this critical-mass moment with Stretch. So with Stretch 3, it’s about implementing features that our community has been asking us for.”

“When we launched, we didn’t have a dexterous wrist at the end as standard, because we were trying to start with truly the minimum viable product,” says Hello Robot cofounder and CTO Charlie Kemp. “And what we found is that almost every order was adding the dexterous wrist, and by actually having it come in standard, we’ve been able to devote more attention to it and make it a much more robust and capable system.”

Kemp says that having Stretch do everything right out of the box (with Hello Robot support) makes a big difference for their research customers. “Making it easier for people to try things—we’ve learned to really value that, because the more steps that people have to go through to experience it, the less likely they are to build on it.” In a research context, this is important because what you’re really talking about is time: The more time people spend just trying to make the robot function, the less time they’ll spend getting the robot to do useful things.

Hello Robot

At this point, you may be thinking of Stretch as a research platform. Or you may be thinking of Stretch as a robot for people with disabilities, if you read our November 2023 cover story about Stretch and Henry and Jane Evans. And the robot is definitely both of those things. But Hello Robot stresses that these specific markets are not their end goal—they see Stretch as a generalist mobile manipulator with a future in the home, as suggested by this Stretch 3 promo video:

Hello Robot

Dishes, laundry, bubble cannons: All of these are critical to the functionality of any normal household. “Stretch is an inclusive robot,” says Kemp. “It’s not just for older adults or people with disabilities. We want a robot that can be beneficial for everyone. Our vision, and what we believe will really happen, whether it’s us or someone else, is that there is going to be a versatile, general-purpose home robot. Right now, clearly, our market is not yet consumers in the home. But that’s where we want to go.”

Robots in the home have been promised for decades, and with the notable exception of the Roomba, there has not been a lot of success. The idea of a robot that could handle dishes or laundry is tempting, but is it near-term or medium-term realistic? Edsinger, who has been at this whole robots thing for a very long time, is an optimist about this, and about the role that Stretch will play. “There are so many places where you can see the progress happening—in sensing, in manipulation,” Edsinger says. “I can imagine those things coming together now in a way that I could not have 5 to 10 years ago, when it seemed so incredibly hard.”

“We’re very pragmatic about what is possible. And I think that we do believe that things are changing faster than we anticipated—10 years ago, I had a pretty clear linear path in mind for robotics, but it’s hard to really imagine where we’ll be in terms of robot capabilities 10 years from now.” —Aaron Edsinger, Hello Robot

I’d say that it’s still incredibly hard, but Edsinger is right that a lot of the pieces do seem to be coming together. Arguably, the hardware is the biggest challenge here, because working in a home puts heavy constraints on what kind of hardware you’re able to use. You’re not likely to see a humanoid in a home anytime soon, because they’d actually be dangerous, and even a quadruped is likely to be more trouble than it’s worth in a home environment. Hello Robot is conscious of this, and that’s been one of the main drivers of the design of Stretch.

“I think the portability of Stretch is really worth highlighting because there’s just so much value in that which is maybe not obvious,” Edsinger tells us. Being able to just pick up and move a mobile manipulator is not normal. Stretch’s weight (24.5 kilograms) is almost trivial to work with, in sharp contrast with virtually every other mobile robot with an arm: Stretch fits into places that humans fit into, and manages to have a similar workspace as well, and its bottom-heavy design makes it safe for humans to be around. It can’t climb stairs, but it can be carried upstairs, which is a bigger deal than it may seem. It’ll fit in the back of a car, too. Stretch is built to explore the world—not just some facsimile of the world in a research lab.

“NYU students have been taking Stretch into tens of homes around New York,” says Edsinger. “They carried one up a four-story walk-up. This enables real in-home data collection. And this is where home robots will start to happen—when you can have hundreds of these out there in homes collecting data for machine learning.”

“That’s where the opportunity is,” adds Kemp. “It’s that engagement with the world about where to apply the technology beneficially. And if you’re in a lab, you’re not going to find it.”

We’ve seen some compelling examples of this recently, with Mobile ALOHA. These are robots learning to be autonomous by having humans teleoperate them through common household skills. But the system isn’t particularly portable, and it costs nearly US $32,000 in parts alone. Don’t get me wrong: I love the research. It’s just going to be difficult to scale, and in order to collect enough data to effectively tackle the world, scale is critical. Stretch is much easier to scale, because you can just straight up buy one.

Or two! You may have noticed that some of the Stretch 3 videos have two robots in them, collaborating with each other. This is not yet autonomous, but with two robots, a single human (or a pair of humans) can teleoperate them as if they were effectively a single two-armed robot:

Hello Robot

Essentially, what you’ve got here is a two-armed robot that (very intentionally) has nothing to do with humanoids. As Kemp explains: “We’re trying to help our community and the world see that there is a different path from the human model. We humans tend to think of the preexisting solution: People have two arms, so we think, well, I’m going to need to have two arms on my robot or it’s going to have all these issues.” Kemp points out that robots like Stretch have shown that really quite a lot of things can be done with only one arm, but two arms can still be helpful for a substantial subset of common tasks. “The challenge for us, which I had just never been able to find a solution for, was how you get two arms into a portable, compact, affordable lightweight mobile manipulator. You can’t!”

But with two Stretches, you have not only two arms but also two shoulders that you can put wherever you want. Washing a dish? You’ll probably want two arms close together for collaborative manipulation. Making a bed? Put the two arms far apart to handle both sides of a sheet at once. It’s a sort of distributed on-demand bimanual manipulation, which certainly adds a little bit of complexity but also solves a bunch of problems when it comes to practical in-home manipulation. Oh—and if those teleop tools look like modified kitchen tongs, that’s because they’re modified kitchen tongs.

Of course, buying two Stretch robots is twice as expensive as buying a single Stretch robot, and even though Stretch 3’s cost of just under $25,000 is very inexpensive for a mobile manipulator and very affordable in a research or education context, we’re still pretty far from something that most people would be able to afford for themselves. Hello Robot says that producing robots at scale is the answer here, which I’m sure is true, but it can be a difficult thing for a small company to achieve.

Moving slowly toward scale is at least partly intentional, Kemp tells us. “We’re still in the process of discovering Stretch’s true form—what the robot really should be. If we tried to scale to make lots and lots of robots at a much lower cost before we fundamentally understood what the needs and challenges were going to be, I think it would be a mistake. And there are many gravestones out there for various home-robotics companies, some of which I truly loved. We don’t want to become one of those.”

This is not to say that Hello Robot isn’t actively trying to make Stretch more affordable, and Edsinger suggests that the next iteration of the robot will be more focused on that. But—and this is super important—Kemp tells us that Stretch has been, is, and will continue to be sustainable for Hello Robot: “We actually charge what we should be charging to be able to have a sustainable business.” In other words, Hello Robot is not relying on some nebulous scale-defined future to transition into a business model that can develop, sell, and support robots. They can do that right now while keeping the lights on. “Our sales have enough margin to make our business work,” says Kemp. “That’s part of our discipline.”

Stretch 3 is available now for $24,950, which is just about the same as the cost of Stretch 2 with the optional add-ons included. There are lots and lots of other new features that we couldn’t squeeze into this article, including FCC certification, a more durable arm, and off-board GPU support. You’ll find a handy list of all the upgrades here.

What It’s Like to Eat a Robot



Odorigui is a type of Japanese cuisine in which people consume live seafood while it’s still moving, making movement part of the experience. You may have some feelings about this (I definitely do), but from a research perspective, getting into what those feelings are and what they mean isn’t really practical. To do so in a controlled way would be both morally and technically complicated, which is why Japanese researchers have started developing robots that can be eaten as they move, wriggling around in your mouth as you chomp down on them. Welcome to HERI: Human-Edible Robot Interaction.


The happy little robot that got its head ripped off by a hungry human (who, we have to say, was exceptionally polite about it) is made primarily of gelatin, along with sugar and apple juice for taste. After all the ingredients were mixed, it was poured into a mold and refrigerated for 12 hours to set, with the resulting texture ending up like a chewy gummy candy. The mold incorporated a couple of air chambers into the structure of the robot, which were hooked up to pneumatics that got the robot to wiggle back and forth.
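
Actuation this simple can be driven by two solenoid valves toggled in antiphase. A sketch of what that could look like from a Raspberry Pi follows; the paper doesn’t specify the control hardware, so the pins, valves, and timing here are all assumptions.

```python
import time
import RPi.GPIO as GPIO  # assumes valves are driven from a Raspberry Pi

LEFT_VALVE, RIGHT_VALVE = 17, 27   # hypothetical GPIO pins
PERIOD_S = 0.5                     # half a wiggle cycle per valve

GPIO.setmode(GPIO.BCM)
GPIO.setup([LEFT_VALVE, RIGHT_VALVE], GPIO.OUT)

try:
    # Alternate the two air chambers so the gelatin body bends side
    # to side -- the back-and-forth wiggle described above.
    while True:
        GPIO.output(LEFT_VALVE, GPIO.HIGH)   # inflate left chamber
        GPIO.output(RIGHT_VALVE, GPIO.LOW)
        time.sleep(PERIOD_S)
        GPIO.output(LEFT_VALVE, GPIO.LOW)
        GPIO.output(RIGHT_VALVE, GPIO.HIGH)  # inflate right chamber
        time.sleep(PERIOD_S)
finally:
    GPIO.cleanup()
```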

Sixteen students at Osaka University got the chance to eat one of these wiggly little robots. The process was to put your mouth around the robot, let the robot move around in there for 10 seconds for the full experience, and then bite it off, chew, and swallow. Japanese people were chosen partly because this research was done in Japan, but also because, according to the paper, “of the cultural influences on the use of onomatopoeic terms.” In Japanese, there are terms that are useful in communicating specific kinds of textures that can’t easily be quantified.

The participants were asked a series of questions about their experience, including some heavy ones:

  • Did you think what you just ate had animateness?
  • Did you feel an emotion in what you just ate?
  • Did you think what you just ate had intelligence?
  • Did you feel guilty about what you just ate?

Oof.

Compared with a control group of students who ate the robot when it was not moving, the students who ate the moving robot were more likely to interpret it as having a “munya-munya” or “mumbly” texture, showing that movement can influence the eating experience. Analysis of question responses showed that the moving robot also caused people to perceive it as emotive and intelligent, and caused more feelings of guilt when it was consumed. The paper summarizes it pretty well: “In the stationary condition, participants perceived the robot as ‘food,’ whereas in the movement condition, they perceived it as a ‘creature.’”

The good news here is that since these moving robots read as more creature-like than ordinary food, they could potentially stand in for live critters eaten in a research context, say the researchers: “The utilization of edible robots in this study enabled us to examine the effects of subtle movement variations in human eating behavior under controlled conditions, a task that would be challenging to accomplish with real organisms.” There’s still more work to do to make the robots more like specific living things, but that’s the plan going forward:

Our proposed edible robot design does not specifically mimic any particular biological form. To address these limitations, we will focus on the field by designing edible robots that imitate forms relevant to ongoing discussions on food shortages and cultural delicacies. Specifically, in future studies, we will emulate creatures consumed in contexts such as insect-based diets, which are being considered as a solution to food scarcity issues, and traditional Japanese dishes like “Odorigui” or “Ikizukuri (live fish sashimi).” These imitations are expected to provide deep insights into the psychological and cognitive responses elicited when consuming moving robots, merging technology with necessities and culinary traditions.

“Exploring the Eating Experience of a Pneumatically Driven Edible Robot: Perception, Taste, and Texture,” by Yoshihiro Nakata, Midori Ban, Ren Yamaki, Kazuya Horibe, Hideyuki Takahashi, and Hiroshi Ishiguro from the University of Electro-Communications and Osaka University, was published in PLOS ONE.

Everything You Wanted to Know About 1X’s Latest Video



Just last month, Oslo-based 1X (formerly Halodi Robotics) announced a massive US $100 million Series B, and clearly it has been putting the work in. A new video posted last week shows a [insert collective noun for humanoid robots here] of EVE android-ish mobile manipulators doing a wide variety of tasks leveraging end-to-end neural networks (pixels to actions). And best of all, the video seems to be more or less an honest one: a single take, at (appropriately) 1X speed, and full autonomy. But we still had questions! And 1X has answers.


If, like me, you had some very important questions after watching this video, including whether that plant is actually dead and the fate of the weighted companion cube, you’ll want to read this Q&A with Eric Jang, vice president of artificial intelligence at 1X.

How many takes did it take to get this take?

Eric Jang: About 10 takes that lasted more than a minute; this was our first time doing a video like this, so it was more about learning how to coordinate the film crew and set up the shoot to look impressive.

Did you train your robots specifically on floppy things and transparent things?

Jang: Nope! We train our neural network to pick up all kinds of objects—both rigid and deformable and transparent things. Because we train manipulation end-to-end from pixels, picking up deformables and transparent objects is much easier than a classical grasping pipeline, where you have to figure out the exact geometry of what you are trying to grasp.
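
The contrast Jang draws is between estimating object geometry and mapping pixels straight to motor commands. A toy PyTorch policy of the pixels-to-actions flavor might look like the following; 1X’s real networks are not public, so this is only the shape of the idea.

```python
import torch
import torch.nn as nn

class PixelsToActions(nn.Module):
    """Toy end-to-end policy: RGB image in, action vector out.
    A sketch of the concept only, with made-up dimensions."""
    def __init__(self, action_dim=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(action_dim)  # infers input size

    def forward(self, rgb):
        # No object mesh, no pose estimate, no grasp planner:
        # just pixels in, actions out.
        return self.head(self.encoder(rgb))

policy = PixelsToActions()
frame = torch.rand(1, 3, 224, 224)  # one camera frame
action = policy(frame)              # e.g., joint velocity targets
print(action.shape)                 # torch.Size([1, 20])
```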

What keeps your robots from doing these tasks faster?

Jang: Our robots learn from demonstrations, so they go at exactly the speed at which the human teleoperators demonstrated the task. If we gathered demonstrations in which we moved faster, so would the robots.

How many weighted companion cubes were harmed in the making of this video?

Jang: At 1X, weighted companion cubes do not have rights.

That’s a very cool method for charging, but it seems a lot more complicated than some kind of drive-on interface directly with the base. Why use manipulation instead?

Jang: You’re right that this isn’t the simplest way to charge the robot, but if we are going to succeed at our mission to build generally capable and reliable robots that can manipulate all kinds of objects, our neural nets have to be able to do this task at the very least. Plus, it reduces costs quite a bit and simplifies the system!

What animal is that blue plush supposed to be?

Jang: It’s an obese shark, I think.

How many different robots are in this video?

Jang: 17? And more that are stationary.

How do you tell the robots apart?

Jang: They have little numbers printed on the base.

Is that plant dead?

Jang: Yes, we put it there because no CGI/3D-rendered video would ever go through the trouble of adding a dead plant.

What sort of existential crisis is the robot at the window having?

Jang: It was supposed to be opening and closing the window repeatedly (good for testing statistical significance).

If one of the robots was actually a human in a helmet and a suit holding grippers and standing on a mobile base, would I be able to tell?

Jang: I was super flattered by a comment to that effect on the YouTube video. But if you look at the area where the upper arm tapers at the shoulder, it’s too thin for a human to fit inside while still having such broad shoulders.

Why are your robots so happy all the time? Are you planning to do more complex HRI (human-robot interaction) stuff with their faces?

Jang: Yes, more complex HRI stuff is in the pipeline!

Are your robots able to autonomously collaborate with each other?

Jang: Stay tuned!

Is the skew tetromino the most difficult tetromino for robotic manipulation?

Jang: Good catch! Yes, the green one is the worst of them all because there are many valid ways to pinch it with the gripper and lift it up. In robotic learning, if there are multiple ways to pick something up, it can actually confuse the machine learning model. Kind of like asking a car to turn left and right at the same time to avoid a tree.
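
That left-and-right-at-the-same-time failure has a tidy numerical version: a regressor trained with mean-squared error on two equally valid grasp angles converges to their average, which may be the one angle that doesn’t work. The numbers below are invented for illustration.

```python
import numpy as np

# Two equally valid grasp yaws for the skew tetromino (radians);
# the values are made up for this example.
valid_grasps = np.array([-1.2, 1.2])

# A regressor trained with mean-squared error converges to the mean
# of the labels it sees...
mse_optimal = valid_grasps.mean()
print(mse_optimal)  # 0.0 -- "turn left and right at the same time"

# ...and 0.0 rad may be exactly the yaw at which the gripper cannot
# pinch the piece at all, even though both training labels were fine.
```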

Everyone else’s robots are making coffee. Can your robots make coffee?

Jang: Yep! We were planning to throw in some coffee making on this video as an Easter egg, but the coffee machine broke right before the film shoot and it turns out it’s impossible to get a Keurig K-Slim in Norway via next-day shipping.

1X is currently hiring both AI researchers (specialties include imitation learning, reinforcement learning, and large-scale training) and android operators (!), which actually sounds like a super fun and interesting job. More here.

Video Friday: Monocycle Robot With Legs



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Cybathlon Challenges: 02 February 2024, ZURICH
HRI 2024: 11–15 March 2024, BOULDER, COLO.
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN

Enjoy today’s videos!

In this video, we present Ringbot, a novel leg-wheel transformer robot incorporating a monocycle mechanism with legs. Ringbot aims to provide versatile mobility by replacing the driver and driving components of a conventional monocycle vehicle with legs mounted on compact driving modules inside the wheel.

[ Paper ] via [ KIMLAB ]

Making money with robots has always been a struggle, but I think ALOHA 2 has figured it out.

Seriously, though, that is some impressive manipulation capability. I don’t know what that freakish panda thing is, but getting a contact lens from the package onto its bizarre eyeball was some wild dexterity.

[ ALOHA 2 ]

Highlights from testing our new arms built by Boardwalk Robotics. Installed in October of 2023, these new arms are not just for boxing; they provide much greater speed and power. This matches the mobility and manipulation goals we have for Nadia!

The least dramatic but possibly most important bit of that video is when Nadia uses her arms to help her balance against a wall, which is one of those things that humans do all the time without thinking about it. And we always appreciate being shown things that don’t go perfectly alongside things that do. The bit at the end there was Nadia not quite managing to do lateral arm raises. I can relate; that’s my reaction when I lift weights, too.

[ IHMC ]

Thanks, Robert!

The recent progress in commercial humanoids is just exhausting.

[ Unitree ]

We present an avatar system designed to facilitate the embodiment of humanoid robots by human operators, validated through iCub3, a humanoid developed at the Istituto Italiano di Tecnologia.

[ Science Robotics ]

Have you ever seen a robot skiing?! Ascento robot enjoying a day on the ski slopes of Davos.

[ Ascento ]

Can’t trip Atlas up! Our humanoid robot gets ready for real work combining strength, perception, and mobility.

Notable that Boston Dynamics is now saying that Atlas “gets ready for real work.” Wonder how much to read into that?

[ Boston Dynamics ]

You deserve to be free from endless chores! YOU! DESERVE! CHORE! FREEDOM!

Pretty sure this is teleoperated, so someone is still doing the chores, sadly.

[ MagicLab ]

Multimodal UAVs (unmanned aerial vehicles) are rarely capable of more than two modalities—that is, flying and walking or flying and perching. However, being able to fly, perch, and walk could further improve their usefulness by expanding their operating envelope. For instance, an aerial robot could fly a long distance, perch in a high place to survey the surroundings, then walk to avoid obstacles that could potentially inhibit flight. Birds are capable of these three tasks, and so offer a practical example of how a robot might be developed to do the same.

[ Paper ] via [ EPFL LIS ]

Nissan announces the concept model of “Iruyo,” a robot that supports babysitting while driving. Iruyo relieves the anxiety of the mother, father, and baby when a parent is alone in the driver’s seat. We support safe and secure driving for parents and children. Nissan and Akachan Honpo are working on a project to make life better with cars and babies. Iruyo was born out of the voices of mothers and fathers who said, “I can’t hold my baby while driving alone.”

[ Nissan ]

Building 937 houses the coolest robots at CERN. This is where the action happens to build and program robots that can tackle the unconventional challenges presented by the laboratory’s unique facilities. Recently, a new type of robot called CERNquadbot has entered CERN’s robot pool and successfully completed its first radiation protection test in the North Area.

[ CERN ]

Congrats to Starship, the OG robotic delivery service, on their US $90 million raise.

[ Starship ]

By blending 2D images with foundation models to build 3D feature fields, a new MIT method helps robots understand and manipulate nearby objects with open-ended language prompts.

[ GitHub ] via [ MIT ]

This is one of those things that’s far more difficult than it might look.

[ ROAM Lab ]

Our current care system does not scale, and our populations are aging fast. Robodies are multipliers for care staff, allowing them to work together with local helpers to provide protection and assistance around the clock while maintaining personal contact with people in the community.

[ DEVANTHRO ]

It’s the world’s smallest humanoid robot, until someone comes out with slightly smaller servos!

[ Guinness ]

Deep Robotics wishes you a happy year of the dragon!

[ Deep Robotics ]

SEAS researchers are helping develop resilient and autonomous deep-space and extraterrestrial habitations by developing technologies to let autonomous robots repair or replace damaged components in a habitat. The research is part of the Resilient ExtraTerrestrial Habitats institute (RETHi), led by Purdue University in partnership with SEAS, the University of Connecticut, and the University of Texas at San Antonio. Its goal is to “design and operate resilient deep-space habitats that can adapt, absorb, and rapidly recover from expected and unexpected disruptions.”

[ Harvard ]

Find out how a bold vision became a success story! The DLR Institute of Robotics and Mechatronics has been researching robotic arms since the 1990s, originally for use in space. It was a long and ambitious journey before these lightweight robotic arms could be used on Earth and finally in operating theaters, a journey that required concentrated robotics expertise, interdisciplinary cooperation, and ultimately a successful technology transfer.

[ DLR MIRO ]

Robotics is changing the world, driven by focused teams of diverse experts. Willow Garage operated with the mantra “Impact first, return on capital second” and through ROS and the PR2 had enormous impact. Autonomous mobile robots are finally being accepted in the service industry, and Savioke (now Relay Robotics) was created to drive that impact. This talk will trace the evolution of Relay robots and their deployment in hotels, hospitals, and other service industries, starting with roots at Willow Garage. As robotics technology is poised for the next round of advances, how do we create and maintain the organizations that continue to drive progress?

[ Northwestern ]

Tiny Quadrotor Learns to Fly in 18 Seconds



It’s kind of astonishing how quadrotors have scaled over the past decade. Like, we’re now at the point where they’re verging on disposable, at least from a commercial or research perspective—for a bit over US $200, you can buy a little 27-gram, completely open-source drone, and all you have to do is teach it to fly. That’s where things do get a bit more challenging, though, because teaching drones to fly is not a straightforward process. Thanks to good simulation and techniques like reinforcement learning, it’s much easier to imbue drones with autonomy than it used to be. But it’s not typically a fast process, and it can be finicky to make a smooth transition from simulation to reality.

New York University’s Agile Robotics and Perception Lab, in collaboration with the Technology Innovation Institute (TII), has managed to streamline the process of getting basic autonomy to work on drones, and streamline it by a lot: The lab’s system is able to train a drone in simulation from nothing up to stable and controllable flying in 18 seconds flat on a MacBook Pro. And it actually takes longer to compile and flash the firmware onto the drone itself than it does to run the entire training process.


ARPL NYU

So not only is the drone able to keep a stable hover while rejecting pokes and nudges and wind, but it’s also able to fly specific trajectories. Not bad for 18 seconds, right?

One of the things that typically slows down training is the need to keep refining exactly what you’re training for, without refining it so much that you’re only training your system to fly in your specific simulation rather than in the real world. The strategy used here is what the researchers call a curriculum (you can also think of it as a sort of lesson plan) that adjusts the reward function used to train the system through reinforcement learning. The curriculum starts out forgiving and gradually increases the penalties to emphasize robustness and reliability. This is all about efficiency: doing the training you need, in the way it needs to be done, to get the results you want, and no more.
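
As a concrete (and heavily simplified) illustration of a reward curriculum, here’s one possible shape: a penalty weight that ramps up over training. The actual schedule and reward terms in the paper may differ.

```python
def penalty_weight(step, total_steps, w_start=0.1, w_end=2.0):
    """Linear curriculum: forgiving early, strict late (an assumed
    shape; the researchers' actual schedule may differ)."""
    frac = min(step / total_steps, 1.0)
    return w_start + frac * (w_end - w_start)

def reward(tracking_error, action_magnitude, step, total_steps):
    """Track the target pose, with a smoothness penalty whose weight
    grows as training progresses."""
    w = penalty_weight(step, total_steps)
    return -tracking_error - w * action_magnitude

# Early training barely punishes aggressive motor commands...
print(reward(0.5, 1.0, step=100, total_steps=10_000))    # ~-0.62
# ...late training demands smooth, robust behavior.
print(reward(0.5, 1.0, step=9_900, total_steps=10_000))  # ~-2.48
```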

There are other, more straightforward, tricks that optimize this technique for speed as well. The deep-reinforcement learning algorithms are particularly efficient, and leverage the hardware acceleration that comes along with Apple’s M-series processors. The simulator efficiency multiplies the benefits of the curriculum-driven sample efficiency of the reinforcement-learning pipeline, leading to that wicked-fast training time.

This approach isn’t limited to simple tiny drones—it’ll work on pretty much any drone, including bigger and more expensive ones, or even a drone that you yourself build from scratch.

Jonas Eschmann

We’re told that it took minutes rather than seconds to train a policy for the drone in the video above, although the researchers expect that 18 seconds is achievable even for a more complex drone like this in the near future. And it’s all open source, so you can, in fact, build a drone and teach it to fly with this system. But if you wait a little bit, it’s only going to get better: The researchers tell us that they’re working on integrating with the PX4 open source drone autopilot. Longer term, the idea is to have a single policy that can adapt to different environmental conditions, as well as different vehicle configurations, meaning that this could work on all kinds of flying robots rather than just quadrotors.

Everything you need to run this yourself is available on GitHub, and the paper is on arXiv here.

Robotic Tongue Licks Gecko Gripper Clean



About a decade ago, there was a lot of excitement in the robotics world around gecko-inspired directional adhesives, which are materials that stick without being sticky using the same van der Waals forces that allow geckos to scamper around on vertical panes of glass. They were used extensively in different sorts of climbing robots, some of them quite lovely. Gecko adhesives are uniquely able to stick to very smooth things where your only other option might be suction, which requires all kinds of extra infrastructure to work.

We haven’t seen gecko adhesives around as much of late, for a couple of reasons. First, the ability to only stick to smooth surfaces (which is what gecko adhesives are best at) is a bit of a limitation for mobile robots. And second, the gap between research and useful application is wide and deep and full of crocodiles. I’m talking about the mean kind of crocodiles, not the cuddly kind. But Flexiv Robotics has made gecko adhesives practical for robotic grasping in a commercial environment, thanks in part to a sort of robotic tongue that licks the gecko tape clean.


If you zoom way, way in on a gecko’s foot, you’ll see that each toe is covered in millions of hairlike nanostructures called setae. Each seta branches out at the end into hundreds more hairs with flat bits at the end called spatulas. The result of this complex arrangement of setae and spatulas is that gecko toes have a ridiculous amount of surface area, meaning that they can leverage the extremely weak van der Waals forces between molecules to stick themselves to perfectly flat and smooth surfaces. This technique works exceptionally well: Geckos can hang from glass by a single toe, and a fully adhered gecko can hold something like 140 kilograms (which, unfortunately, seems to be an extrapolation rather than an experimental result). And luckily for the gecko, the structure of the spatulas makes the adhesion directional, so that when its toes are no longer being loaded, they can be easily peeled off of whatever they’re attached to.

Natural gecko adhesive structure at increasing magnification, along with a synthetic adhesive (f). Image from “Gecko adhesion: evolutionary nanotechnology,” by Kellar Autumn and Nick Gravish.
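
That 140-kilogram figure lines up with the standard back-of-the-envelope extrapolation from single-seta measurements. Using commonly cited numbers (roughly 200 micronewtons per seta and about 6.5 million setae on a tokay gecko; take both as approximations):

```python
SETA_FORCE_N = 200e-6    # ~200 uN per seta (measured values near 194 uN)
SETAE_PER_GECKO = 6.5e6  # commonly cited count for a tokay gecko
G = 9.81                 # m/s^2

total_force_n = SETA_FORCE_N * SETAE_PER_GECKO
print(total_force_n)      # 1300.0 N, if every seta engaged at once
print(total_force_n / G)  # ~132 kg -- hence "extrapolation, not experiment"
```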

Since geckos don’t “stick” to things in the sense that we typically use the word “sticky,” a better way of characterizing what geckos can do is as “dry adhesion,” as opposed to something that involves some sort of glue. You can also think about gecko toes as just being very, very high friction, and it’s this perspective that is particularly interesting in the context of robotic grippers.

This is Flexiv’s “Grav Enhanced” gripper, which uses a combination of pinch grasping and high friction gecko adhesive to lift heavy and delicate objects without having to squeeze them. When you think about a traditional robotic grasping system trying to lift something like a water balloon, you have to squeeze that balloon until the friction between the side of the gripper and the side of the balloon overcomes the weight of the balloon itself. The higher the friction, the lower the squeeze required, and although a water balloon might be an extreme example, maximizing gripper friction can make a huge difference when it comes to fragile or deformable objects.
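
The physics of that tradeoff fits in a few lines. In the textbook Coulomb model, a two-finger pinch holds an object when mu * N * (number of contacts) >= m * g, so the required squeeze falls in direct proportion as friction rises. The coefficients below are illustrative, not Flexiv’s numbers.

```python
G = 9.81  # m/s^2

def required_squeeze_n(mass_kg, mu, contacts=2):
    """Minimum normal force per contact so friction alone supports the
    object: mu * N * contacts >= m * g. Textbook model, not Flexiv's."""
    return mass_kg * G / (mu * contacts)

# A 1 kg "water balloon" with ordinary rubber pads (mu ~ 0.6)...
print(required_squeeze_n(1.0, 0.6))  # ~8.2 N of squeeze per finger
# ...versus a pad behaving like mu ~ 6 (illustrative gecko-like value):
print(required_squeeze_n(1.0, 6.0))  # ~0.8 N -- a tenth the squeeze
```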

There are a couple of problems with dry adhesive, however. The tiny structures that make the adhesive adhere can be prone to damage, and the fact that dry adhesive will stick to just about anything it can make good contact with means that it’ll rapidly accumulate dirt outside of a carefully controlled environment. In research contexts, these problems aren’t all that significant, but for a commercial system, you can’t have something that requires constant attention.

Flexiv says that the microstructure material that makes up its gecko adhesive was able to sustain 2 million gripping cycles without any visible degradation in performance, suggesting that as long as you use the stuff within the tolerances that it’s designed for, it should keep on adhering to things indefinitely—although trying to lift too much weight will tear the microstructures, ruining the adhesive properties after just a few cycles. And to keep the adhesive from getting clogged up with debris, Flexiv came up with this clever little cleaning station that acts like a little robotic tongue of sorts:

Interestingly, geckos themselves don’t seem to use their own tongues to clean their toes. They lick their eyeballs on the regular, like all normal humans do, but gecko toes appear to be self-cleaning, which is a pretty neat trick. It’s certainly possible to make self-cleaning synthetic gecko adhesive, but Flexiv tells us that “due to technical and practical limitations, replicating this process in our own gecko adhesive material is not possible. Essentially, we replicate the microstructure of a gecko’s footpad, but not its self-cleaning process.” This likely goes back to that whole thing about what works in a research context versus what works in a commercial context, and Flexiv needs its gecko adhesive to handle all those millions of cycles.

Flexiv says that it was made aware of the need for a system like this when one of its clients started using the gripper for the extra-dirty task of sorting trash from recycling, and that the solution was inspired by a lint roller. And I have to say, I appreciate the simplicity of the system that Flexiv came up with to solve the problem directly and efficiently. Maybe one day, it will be able to replicate a real gecko’s natural self-cleaning toes with a durable and affordable artificial dry adhesive, but until that happens, an artificial tongue does the trick.

Video Friday: Agile but Safe



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Cybathlon Challenges: 2 February 2024, ZURICH
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN

Enjoy today’s videos!

Is “scamperiest” a word? If not, it should be, because this is the scamperiest robot I’ve ever seen.

[ ABS ]

GITAI is pleased to announce that its 1.5-meter-long autonomous dual robotic arm system (S2) has successfully arrived at the International Space Station (ISS) aboard the SpaceX Falcon 9 rocket (NG-20) to conduct an external demonstration of in-space servicing, assembly, and manufacturing (ISAM) while onboard the ISS. The success of the S2 tech demo will be a major milestone for GITAI, confirming the feasibility of this technology as a fully operational system in space.

[ GITAI ]

This work presents a comprehensive study on using deep reinforcement learning (RL) to create dynamic locomotion controllers for bipedal robots. Going beyond focusing on a single locomotion skill, we develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
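
A “general control solution” in this sense usually means one policy conditioned on a command vector, rather than one network per skill. Here’s a minimal sketch of that structure (not the authors’ code; the dimensions and command encoding are made up):

```python
import torch
import torch.nn as nn

class SkillConditionedPolicy(nn.Module):
    """One network, many gaits: the command vector (e.g., desired
    velocity, gait period, jump flag) selects the behavior. A sketch
    of the idea only, with invented dimensions."""
    def __init__(self, obs_dim=45, cmd_dim=4, act_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + cmd_dim, 256), nn.ELU(),
            nn.Linear(256, 256), nn.ELU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs, cmd):
        # Same weights produce walking or jumping depending on cmd.
        return self.net(torch.cat([obs, cmd], dim=-1))

policy = SkillConditionedPolicy()
obs = torch.zeros(1, 45)
walk = torch.tensor([[1.0, 0.0, 0.7, 0.0]])  # hypothetical: 1 m/s walk
jump = torch.tensor([[0.0, 0.0, 0.0, 1.0]])  # hypothetical: jump flag
print(policy(obs, walk).shape, policy(obs, jump).shape)
```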

And if you want to get exhausted on behalf of a robot, the full 400-meter dash is below.

[ Hybrid Robotics ]

NASA’s Ingenuity Mars Helicopter pushed aerodynamic limits during the final months of its mission, setting new records for speed, distance, and altitude. Hear from Ingenuity chief engineer Travis Brown on how the data the team collected could eventually be used in future rotorcraft designs.

[ NASA ]

BigDog: 15 years of solving mobility problems its own way.

[ Boston Dynamics ]

[Harvard School of Engineering and Applied Sciences] researchers are helping develop resilient and autonomous deep space and extraterrestrial habitations by developing technologies to let autonomous robots repair or replace damaged components in a habitat. The research is part of the Resilient ExtraTerrestrial Habitats institute (RETHi) led by Purdue University, in partnership with [Harvard] SEAS, the University of Connecticut and the University of Texas at San Antonio. Its goal is to “design and operate resilient deep space habitats that can adapt, absorb and rapidly recover from expected and unexpected disruptions.”

[ Harvard SEAS ]

In a recent T-RO paper, researchers from Huazhong University of Science and Technology (HUST) describe and construct a novel variable-stiffness spherical joint motor that enables dexterous motion and joint compliance in all directions.

[ Paper ]

Thanks, Ram!

We are told that this new robot from HEBI is called “Mark Suckerberg” and that they’ve got a pretty cool application in mind for it, to be revealed later this year.

[ HEBI Robotics ]

Thanks, Dave!

Dive into the first edition of our new Real-World-Robotics class at ETH Zürich! Our students embarked on an incredible journey, creating their human-like robotic hands from scratch. In just three months, the teams designed, built, and programmed their tendon-driven robotic hands, mastering dexterous manipulation with reinforcement learning! The result? A spectacular display of innovation and skill during our grand final.

[ SRL ETHZ ]

Carnegie Mellon researchers have built a system with a robotic arm atop a RangerMini 2.0 robotic cart from AgileX Robotics to make what they’re calling a platform for “intelligent movement and processing.”

[ CMU ] via [ AgileX ]

Picassnake is our custom-made robot that paints pictures from music. Picassnake consists of an arm and a head, embedded in a plush snake doll. The robot is connected to a laptop for control and music processing, which can be fed through a microphone or an MP3 file. To open the media source, an operator can use the graphical user interface or place a text QR code in front of a webcam. Once the media source is opened, Picassnake generates unique strokes based on the music and translates the strokes to physical movement to paint them on canvas.
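
Picassnake’s exact music-to-stroke mapping isn’t spelled out here, but the general recipe (audio features in, stroke parameters out) is easy to prototype. Here’s a guess at that flavor of mapping, using librosa’s onset-strength envelope; none of it reflects Picassnake’s actual algorithm.

```python
import numpy as np
import librosa  # pip install librosa

def strokes_from_music(path, max_strokes=200):
    """Map a song's onset-strength envelope to stroke lengths -- one
    plausible mapping, not Picassnake's published method."""
    y, sr = librosa.load(path)
    envelope = librosa.onset.onset_strength(y=y, sr=sr)
    # Louder, sharper moments -> longer strokes, in centimeters.
    lengths_cm = np.interp(envelope,
                           (envelope.min(), envelope.max()),
                           (0.5, 12.0))
    return lengths_cm[:max_strokes]

# strokes = strokes_from_music("song.mp3")
# Each value would then be translated into arm motion on the canvas.
```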

[ Picassnake ]

In April 2021, NASA’s Ingenuity Mars Helicopter became the first spacecraft to achieve powered, controlled flight on another world. With 72 successful flights, Ingenuity has far surpassed its originally planned technology demonstration of up to five flights. On Jan. 18, Ingenuity flew for the final time on the Red Planet. Join Tiffany Morgan, NASA’s Mars Exploration Program Deputy Director, and Teddy Tzanetos, Ingenuity Project Manager, as they discuss these historic flights and what they could mean for future extraterrestrial aerial exploration.

[ NASA ]
