Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UNITED ARAB EMIRATES
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH
Enjoy today’s videos!
We introduce Berkeley Humanoid, a reliable and low-cost mid-scale humanoid research platform for learning-based control. Our lightweight, in-house-built robot is designed specifically for learning algorithms with low simulation complexity, anthropomorphic motion, and high reliability against falls. Capable of omnidirectional locomotion and withstanding large perturbations with a compact setup, our system aims for scalable, sim-to-real deployment of learning-based humanoid systems.
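Low simulation complexity matters because sim-to-real training of this kind typically leans on domain randomization: physical parameters are perturbed every episode so the learned policy tolerates modeling error. A minimal sketch of the idea (the parameter names and ranges below are assumptions for illustration, not Berkeley Humanoid's actual settings):

```python
import random

# Hypothetical randomization ranges -- illustrative placeholders, not the
# values actually used for Berkeley Humanoid.
RANDOMIZATION_RANGES = {
    "ground_friction":      (0.4, 1.2),   # slippery to grippy surfaces
    "link_mass_scale":      (0.9, 1.1),   # +/-10% modeling error per link
    "motor_strength_scale": (0.8, 1.2),   # actuator variation across units
    "sensor_latency_s":     (0.0, 0.02),  # up to 20 ms of sensing delay
}

def sample_episode_physics(rng=random):
    """Draw one randomized physics configuration for a training episode.

    A policy trained across many such draws learns behavior that is
    insensitive to the exact values, which is what lets it transfer to
    the real robot despite an imperfect simulator.
    """
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in RANDOMIZATION_RANGES.items()}
```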
This article presents Ray, a new type of audio-animatronic robot head. The entire mechanical structure of the robot is built in a single 3-D printing step... This simple, lightweight structure and the separate tendon-based actuation system underneath allow for smooth, fast motions of the robot. We also develop an audio-driven motion-generation module that automatically synthesizes natural, rhythmic motions of the head and mouth from a given audio track.
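A common baseline for this kind of audio-driven animation is to let the short-time energy of the audio drive how far the mouth opens, with rhythm and head motion layered on top. A minimal sketch of that first stage (illustrative only; Ray's actual motion-generation module is more sophisticated):

```python
import numpy as np

def jaw_angles_from_audio(samples, rate, fps=30, max_open_rad=0.35):
    """Map an audio waveform to a per-frame jaw-opening angle.

    The RMS energy envelope of the audio, computed once per video frame,
    drives how far the jaw opens. `samples` is a float NumPy array;
    `max_open_rad` is an assumed mechanical limit, not Ray's.
    """
    hop = rate // fps                      # audio samples per video frame
    n_frames = len(samples) // hop
    env = np.array([
        np.sqrt(np.mean(samples[i * hop:(i + 1) * hop] ** 2))  # RMS energy
        for i in range(n_frames)
    ])
    env /= env.max() + 1e-8                # normalize to [0, 1]
    return env * max_open_rad              # radians of jaw opening per frame
```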
CSAIL researchers introduce a novel approach allowing robots to be trained in simulations of scanned home environments, paving the way for customized household automation accessible to anyone.
NVIDIA CEO Jensen Huang presented a major breakthrough on Project GR00T with WIRED’s Lauren Goode at SIGGRAPH 2024. In a two-minute demonstration video, NVIDIA explained a systematic approach they discovered to scale up robot data, addressing one of the most challenging issues in robotics.
In this research, we investigated the innovative use of a manipulator as a tail in quadruped robots to augment their physical capabilities. Previous studies have primarily focused on enhancing various abilities by attaching robotic tails that function solely as tails on quadruped robots. While these tails improve the performance of the robots, they come with several disadvantages, such as increased overall weight and higher costs. To mitigate these limitations, we propose the use of a 6-DoF manipulator as a tail, allowing it to serve both as a tail and as a manipulator.
In this end-to-end demo, we showcase how MenteeBot transforms the shopping experience for individuals, particularly those using wheelchairs. Through discussions with a global retailer, MenteeBot has been designed to act as the ultimate shopping companion, offering a seamless, natural experience.
Nature Fresh Farms, based in Leamington, Ontario, is one of North America’s largest greenhouse farms growing high-quality organics, berries, peppers, tomatoes, and cucumbers. In 2022, Nature Fresh partnered with Four Growers, a FANUC Authorized System Integrator, to develop a robotic system equipped with AI to harvest tomatoes in the greenhouse environment.
Honeybee Robotics, a Blue Origin company, is developing Lunar Utility Navigation with Advanced Remote Sensing and Autonomous Beaming for Energy Redistribution, also known as LUNARSABER. In July 2024, Honeybee Robotics captured LUNARSABER’s capabilities during a demonstration of a scaled prototype.
In this video we present results of our lab from the latest field deployments conducted in the scope of the Digiforest EU project, in Stein am Rhein, Switzerland. Digiforest brings together various partners working on aerial and legged robots, autonomous harvesters, and forestry decision-makers. The goal of the project is to enable autonomous robot navigation, exploration, and mapping, both below and above the canopy, to create a data pipeline that can support and enhance foresters’ decision-making systems.
This is a sponsored article brought to you by Elephant Robotics.
Elephant Robotics has gone through years of research and development to accelerate its mission of bringing robots to millions of homes and its vision of “Enjoy Robots World.” Since its establishment in 2016, the company has launched three to five robots per year: the collaborative industrial P-series and C-series robots, the lightweight desktop 6-DOF collaborative robot myCobot 280 in 2020, and the dual-armed, semi-humanoid robot myBuddy in 2022. This year’s full-body humanoid robot, the Mercury series, promises to reshape the landscape of non-human workers, bringing intelligent robots like Mercury into research, education, and even everyday home environments.
A Commitment to Practical Robotics
Elephant Robotics proudly introduces the Mercury Series, a suite of humanoid robots that not only push the boundaries of innovation but also embody a deep commitment to practical applications. Designed with the future of robotics in mind, the Mercury Series is poised to become the go-to choice for researchers and industry professionals seeking reliable, scalable, and robust solutions.
Elephant Robotics
The Genesis of Mercury Series: Bridging Vision With Practicality
From the outset, the Mercury Series has been envisioned as more than just a collection of advanced prototypes. It is a testament to Elephant Robotics’ dedication to creating humanoid robots that are not only groundbreaking in their capabilities but also practical for mass production and consistent, reliable use in real-world applications.
Mercury X1: Wheeled Humanoid Robot
The Mercury X1 is a versatile wheeled humanoid robot that combines advanced functionalities with mobility. Equipped with dual NVIDIA Jetson controllers, lidar, ultrasonic sensors, and an 8-hour battery life, the X1 is perfect for a wide range of applications, from exploratory studies to commercial tasks requiring mobility and adaptability.
Mercury B1: Dual-Arm Semi-Humanoid Robot
The Mercury B1 is a semi-humanoid robot tailored for sophisticated research. It features 17 degrees of freedom, dual robotic arms, a 9-inch touchscreen, an NVIDIA Xavier control chip, and an integrated 3D camera. The B1 excels in machine vision and VR-assisted teleoperation, and its AI voice interaction and LLM integration mark significant advancements in human-robot communication.
These two advanced models exemplify Elephant Robotics’ commitment to practical robotics. The wheeled humanoid robot Mercury X1 integrates advanced technology with a state-of-the-art mobile platform, ensuring not only versatility but also the feasibility of large-scale production and deployment.
Embracing the Power of Reliable Embodied AI
The Mercury Series is engineered as the ideal hardware platform for embodied AI research, providing robust support for sophisticated AI algorithms and real-world applications. Elephant Robotics demonstrates its commitment to innovation through the Mercury series’ compatibility with NVIDIA’s Isaac Sim, a state-of-the-art simulation platform that facilitates sim2real learning, bridging the gap between virtual environments and physical robot interaction.
The Mercury Series is perfectly suited for the study and experimentation of mainstream large language models in embodied AI. Its advanced capabilities allow seamless integration with the latest AI research. This provides a reliable and scalable platform for exploring the frontiers of machine learning and robotics.
Furthermore, the Mercury Series is complemented by the myArm C650, a teleoperation robotic arm that enables rapid acquisition of physical data. This feature supports secondary learning and adaptation, allowing for immediate feedback and iterative improvement in real time. These features, combined with the Mercury Series’ reliability and practicality, make it the preferred hardware platform for researchers and institutions looking to advance the field of embodied AI.
The Mercury Series is supported by a rich software ecosystem, compatible with major programming languages, and integrates seamlessly with industry-standard simulation software. This comprehensive development environment is enhanced by a range of auxiliary hardware, all designed with mass production practicality in mind.
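For a sense of what that language compatibility looks like in practice, Elephant Robotics' desktop arms can be driven from Python through the company's open-source pymycobot package. A minimal sketch for a myCobot 280 (the serial port and baud rate below are assumptions for a typical Linux setup; adjust for your machine):

```python
from pymycobot.mycobot import MyCobot

# Assumed serial port and baud rate -- adjust to your setup.
mc = MyCobot("/dev/ttyUSB0", 115200)

# Command all six joints to a home pose at 50% speed, then read them back.
mc.send_angles([0, 0, 0, 0, 0, 0], 50)
print(mc.get_angles())
```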
Elephant Robotics
Drive to Innovate: Mass Production and Global Benchmarks
The “Power Spring” harmonic drive modules, a hallmark of Elephant Robotics’ commitment to innovation for mass production, have been meticulously engineered to offer an unparalleled torque-to-weight ratio. These components are a testament to the company’s foresight in addressing the practicalities of large-scale manufacturing. The incorporation of carbon fiber in the design of these modules not only optimizes agility and power but also ensures that the robots are well-prepared for the rigors of the production line and real-world applications. The Mercury Series, with its spirit of innovation, is making a significant global impact, setting a new benchmark for what practical robotics can achieve.
Elephant Robotics is consistently delivering mass-produced robots to a range of renowned institutions and industry leaders, thereby redefining the industry standards for reliability and scalability. The company’s dedication to providing more than mere prototypes is evident in the active role its robots play in various sectors, transforming industries that are in search of dependable and efficient robotic solutions.
Conclusion: The Mercury Series—A Beacon for the Future of Practical Robotics
The Mercury Series represents more than a product; it is a beacon for the future of practical robotics. Elephant Robotics’ dedication to affordability, accessibility, and technological advancement ensures that the Mercury Series is not just a research tool but a platform for real-world impact.
Mercury Usecases | Explore the Capabilities of the Wheeled Humanoid Robot and Discover Its Precision
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH
Enjoy today’s videos!
NAVER 1784 is the world’s largest robotics testbed. The Starbucks on the second floor of 1784 is the world’s most unique Starbucks, with more than 100 service robots called “Rookie” delivering Starbucks drinks to meeting rooms and private seats, and various experiments with a dual-arm robot.
This video is presented at the ICRA 2024 conference and summarizes recent results of our Learning AI for Dextrous Manipulation Lab. It demonstrates how our learning AI methods allowed for breakthroughs in dextrous manipulation with the mobile humanoid robot DLR Agile Justin. Although the core of the mechatronic hardware is almost 20 years old, only the advent of learning AI methods enabled a level of dexterity, flexibility and autonomy coming close to human capabilities.
With all the humanoid stuff going on, there really should be more emphasis on intentional contact—humans lean and balance on things all the time, and robots should too!
LimX Dynamics W1 is now more than a wheeled quadruped. By evolving into a biped robot, W1 maneuvers slickly on two legs in different ways: non-stop 360° rotation, upright free gliding, slick maneuvering, random collision and self-recovery, and step walking.
Animal brains use less data and energy compared to current deep neural networks running on Graphics Processing Units (GPUs). This makes it hard to develop tiny autonomous drones, which are too small and light for heavy hardware and big batteries. Recently, the emergence of neuromorphic processors that mimic how brains function has made it possible for researchers from Delft University of Technology to develop a drone that uses neuromorphic vision and control for autonomous flight.
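The "mimic how brains function" part usually means spiking neurons: units that communicate with sparse binary events rather than dense floating-point activations, so computation (and power draw) happens only where spikes occur. A minimal leaky integrate-and-fire layer, as a sketch of the building block (not the Delft team's actual flight controller):

```python
import numpy as np

def lif_step(v, spikes_in, weights, dt=1e-3, tau=0.02, v_thresh=1.0):
    """One Euler step of a leaky integrate-and-fire (LIF) neuron layer.

    Each membrane voltage leaks toward rest, integrates weighted input
    spikes, and emits a binary spike when it crosses threshold, after
    which it resets. (Illustrative sketch with assumed constants.)
    """
    v = v + dt * (-v / tau) + weights @ spikes_in  # leak + synaptic input
    spikes_out = (v >= v_thresh).astype(float)
    v = np.where(spikes_out > 0.0, 0.0, v)         # reset neurons that fired
    return v, spikes_out
```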
In the beginning of the universe, all was darkness — until the first organisms developed sight, which ushered in an explosion of life, learning and progress. AI pioneer Fei-Fei Li says a similar moment is about to happen for computers and robots. She shows how machines are gaining “spatial intelligence” — the ability to process visual data, make predictions and act upon those predictions — and shares how this could enable AI to interact with humans in the real world.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
Cybathlon 2024: 25–27 October 2024, ZURICH
Enjoy today’s videos!
In this work, we present LocoMan, a dexterous quadrupedal robot with a novel morphology to perform versatile manipulation in diverse constrained environments. By equipping a Unitree Go1 robot with two low-cost and lightweight modular 3-DoF loco-manipulators on its front calves, LocoMan leverages the combined mobility and functionality of the legs and grippers for complex manipulation tasks that require precise 6D positioning of the end effector in a wide workspace.
Object manipulation has been extensively studied in the context of fixed-base and mobile manipulators. However, the overactuated locomotion modality employed by snake robots allows for a unique blend of object manipulation through locomotion, referred to as loco-manipulation. In this paper, we present an optimization approach to solving the loco-manipulation problem based on nonimpulsive implicit-contact path planning for our snake robot COBRA.
Okay, but where that costume has eyes is not where Spot has eyes, so the Spot in the costume can’t see, right? And now I’m skeptical of the authenticity of the mutual snoot-boop.
Here’s some video of Field AI’s robots operating in relatively complex and unstructured environments without prior maps. Make sure to read our article from this week for details!
Is it just me, or is it kind of wild that researchers are now publishing papers comparing their humanoid controller to the “manufacturer’s” humanoid controller? It’s like humanoids are a commodity now or something.
Honey Badger 4.0 is our latest robotic platform, created specifically for traversing hostile environments and difficult terrains. Equipped with multiple cameras and sensors, it will make sure no defect is omitted during inspection.
Have an automation task that calls for the precision and torque of an industrial robot arm…but you need something that is more rugged or a nonconventional form factor? Meet the HEBI Robotics H-Series Actuator! With 9x the torque of our X-Series and seamless compatibility with the HEBI ecosystem for robot development, the H-Series opens a new world of possibilities for robots.
EPFL’s team, led by Ph.D. student Milad Shafiee along with coauthors Guillaume Bellegarda and BioRobotics Lab head Auke Ijspeert, has trained a four-legged robot using deep reinforcement learning to navigate challenging terrain, achieving a milestone in both robotics and biology.
At Agility, we make robots that are made for work. Our robot Digit works alongside us in spaces designed for people. Digit handles the tedious and repetitive tasks meant for a machine, allowing companies and their people to focus on the work that requires the human element.
With a wealth of incredible figures and outstanding facts, here’s Jan Jonsson, ABB Robotics veteran, sharing his knowledge and passion for some of our robots and controllers from the past.
I have it on good authority that getting robots to mow a lawn (like, any lawn) is much harder than it looks, but Electric Sheep has built a business around it.
The AI Index, currently in its seventh year, tracks, collates, distills, and visualizes data relating to artificial intelligence. The Index provides unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, journalists, executives, and the general public to develop a deeper understanding of the complex field of AI. Led by a steering committee of influential AI thought leaders, the Index is the world’s most comprehensive report on trends in AI. In this seminar, HAI Research Manager Nestor Maslej offers highlights from the 2024 report, explaining trends related to research and development, technical performance, technical AI ethics, the economy, education, policy and governance, diversity, and public opinion.
This week’s CMU Robotics Institute seminar, from Dieter Fox at Nvidia and the University of Washington, is “Where’s RobotGPT?”
In this talk, I will discuss approaches to generating large datasets for training robot-manipulation capabilities, with a focus on the role simulation can play in this context. I will show some of our prior work, where we demonstrated robust sim-to-real transfer of manipulation skills trained in simulation, and then present a path toward generating large-scale demonstration sets that could help train robust, open-world robot-manipulation models.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
Cybathlon 2024: 25–27 October 2024, ZURICH
Enjoy today’s videos!
DARPA’s Robotic Autonomy in Complex Environments with Resiliency (RACER) program recently conducted its fourth experiment (E4) to assess the performance of off-road unmanned vehicles. These tests, conducted in Texas in late 2023, were the first time the program tested its new vehicle, the RACER Heavy Platform (RHP). The video shows autonomous route following for mobility testing and demonstration, including sensor point cloud visualizations.
The 12-ton RHP is significantly larger than the 2-ton RACER Fleet Vehicles (RFVs) already in use in the program. Using the algorithms on a very different platform helps RACER toward its goal of platform-agnostic autonomy of combat-scale vehicles in complex, mission-relevant off-road environments that are significantly more unpredictable than on-road conditions.
In our new Science Robotics paper, we introduce an autonomous navigation system developed for our wheeled-legged quadrupeds, designed for fast and efficient navigation within large urban environments. Driven by neural network policies, our simple, unified control system enables smooth gait transitions, smart navigation planning, and highly responsive obstacle avoidance in populated urban environments.
Generation 7 of “Phoenix” robots include improved human-like range of motion. Improvements in uptime, visual perception, and tactile sensing increase the capability of the system to perform complex tasks over longer periods. Design iteration significantly decreases build time. The speed at which new tasks can be automated has increased 50x, marking a major inflection point in task automation speed.
We’re proud to celebrate our one millionth commercial delivery—that’s a million deliveries of lifesaving blood, critical vaccines, last-minute groceries, and so much more. But the best part? This is just the beginning.
We propose a novel humanoid TWIMP, which combines a human mimetic musculoskeletal upper limb with a two-wheel inverted pendulum. By combining the benefit of a musculoskeletal humanoid, which can achieve soft contact with the external environment, and the benefit of a two-wheel inverted pendulum with a small footprint and high mobility, we can easily investigate learning control systems in environments with contact and sudden impact.
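Balancing a two-wheel inverted pendulum comes down to a feedback law that maps the measured body state to wheel torque. A minimal full-state-feedback sketch (the gains and state layout below are placeholder assumptions, not TWIMP's controller; a real design would derive the gains via LQR from the linearized pendulum dynamics):

```python
import numpy as np

# State: [pitch (rad), pitch rate (rad/s), wheel position (m), wheel speed (m/s)]
# Placeholder gains -- in practice computed by LQR on the linearized model.
K = np.array([-60.0, -12.0, -3.0, -4.0])

def wheel_torque(state, state_ref=np.zeros(4)):
    """Full-state feedback: torque proportional to deviation from the reference.

    Accelerating the wheels in the direction of the fall is what keeps
    the pendulum upright; the position and speed terms stop it drifting.
    """
    return float(K @ (state - state_ref))
```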
Ballbots are uniquely capable of pushing wheelchairs—arguably better than legged platforms, because they can move in any direction without having to reposition themselves.
Charge Robotics is building robots that automate the most labor-intensive parts of solar construction. Solar has rapidly become the cheapest form of power generation in many regions. Demand has skyrocketed, and now the primary barrier to getting it installed is labor logistics and bandwidth. Our robots remove the labor bottleneck, allowing construction companies to meet the rising demand for solar, and enabling the world to switch to renewables faster.
The QUT CGRAS project’s robot prototype captures images of baby corals, destined for the Great Barrier Reef, monitoring and counting them in grow tanks. The team uses state-of-the-art AI algorithms to automatically detect and count these coral babies and track their growth over time – saving human counting time and money.
We are conducting research to develop Unmanned Aerial Systems to aid in wildfire monitoring. The hazardous, dynamic, and visually degraded environment of wildfire gives rise to many unsolved fundamental research challenges.
In March 2024, Northwestern University’s Center for Robotics and Biosystems demonstrated the Omnid mobile collaborative robots (mocobots) at MARS, a conference on Machine Learning, Automation, Robotics, and Space hosted by Jeff Bezos in Ojai, Calif. The “swarm” of mocobots is designed to collaborate with humans, allowing a human to easily manipulate large, heavy, or awkward payloads. In this case, the mocobots cancel the effect of gravity, so the human can easily manipulate the mock airplane wing in six degrees of freedom. In general, human-cobot systems combine the best of human capabilities with the best of robot capabilities.
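"Canceling the effect of gravity" means each cobot continuously supplies its share of the payload's weight, so the human feels only the payload's inertia. A minimal sketch of that idea (the payload mass and robot count below are made-up examples, not Northwestern's numbers):

```python
G = 9.81  # gravitational acceleration, m/s^2

def support_forces(payload_mass_kg, n_robots):
    """Equal upward force per robot that exactly cancels the payload's weight.

    With the weight fully supported, the human only works against
    inertia, so a heavy wing feels nearly weightless in all six
    degrees of freedom.
    """
    per_robot_n = payload_mass_kg * G / n_robots
    return [per_robot_n] * n_robots  # newtons, directed against gravity

# Example: three mocobots sharing a hypothetical 24 kg mock airplane wing.
print(support_forces(24.0, 3))  # -> [78.48, 78.48, 78.48]
```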
EELS, or Exobiology Extant Life Surveyor, is a versatile, snake-like robot designed for exploration of previously inaccessible terrain. This talk on EELS was presented at the 2024 Amazon MARS conference.
The convergence of AI and robotics will unlock a wonderful new world of possibilities in everyday life, says robotics and AI pioneer Daniela Rus. Diving into the way machines think, she reveals how “liquid networks”—a revolutionary class of AI that mimics the neural processes of simple organisms—could help intelligent machines process information more efficiently and give rise to “physical intelligence” that will enable AI to operate beyond digital confines and engage dynamically in the real world.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
RoboCup German Open: 17–21 April 2024, KASSEL, GERMANY
AUVSI XPONENTIAL 2024: 22–25 April 2024, SAN DIEGO
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
Enjoy today’s videos!
In the SpaceHopper project, students at ETH Zurich developed a robot capable of moving in low gravity environments through hopping motions. It is intended to be used in future space missions to explore small celestial bodies.
The exploration of asteroids and moons could provide insights into the formation of the universe, and they may contain valuable minerals that humanity could use in the future. The project began in 2021 as an ETH focus project for bachelor’s students. Now, it is being continued as a regular research project. A particular challenge in developing exploration robots for asteroids is that, unlike larger celestial bodies like Earth, asteroids and moons have low gravity. The students have therefore tested their robot’s functionality in zero gravity during a parabolic flight. The parabolic flight was conducted in collaboration with the European Space Agency as part of the ESA Academy Experiments Programme.
It’s still kind of wild to me that it’s now possible to just build a robot like Menteebot. Having said that, at present it looks to be a fairly long way from being able to usefully do tasks in a reliable way.
We are glad to announce the latest updates with our humanoid robot CL-1. In the test, it demonstrates stair climbing in a single stride based on real-time terrain perception. For the very first time, CL-1 accomplishes back and forth running, in a stable and dynamic way!
EEWOC [Extended-reach Enhanced Wheeled Orb for Climbing] uses a unique locomotion scheme to climb complex steel structures with its magnetic grippers. Its lightweight and highly extendable tape spring limb can reach over 1.2 meters, allowing it to traverse gaps and obstacles much larger than other existing climbing robots. Its ability to bend allows it to reach around corners and over ledges, and it can transition between surfaces easily thanks to assistance from its wheels. The wheels also let it drive more quickly and efficiently on the ground. These features make EEWOC well-suited for climbing the complex steel structures seen in real-world environments.
NASA’s Ingenuity Mars helicopter became the first vehicle to achieve powered, controlled flight on another planet when it took to the Martian skies on 19 April 2021. This video maps the location of the 72 flights that the helicopter took over the course of nearly three years. Ingenuity far surpassed expectations—soaring higher and faster than previously imagined.
MERL introduces a new autonomous robotic assembly technology, offering an initial glimpse into how robots will work in future factories. Unlike conventional approaches where humans set pre-conditions for assembly, our technology empowers robots to adapt to diverse scenarios. We showcase the autonomous assembly of a gear box that was demonstrated live at CES2024.
In November 2023, Digit was deployed in a distribution center unloading totes from an AMR as part of regular facility operations, including a shift during Cyber Monday.
DARPA’s Air Combat Evolution (ACE) program has achieved the first-ever in-air tests of AI algorithms autonomously flying a fighter jet against a human-piloted fighter jet in within-visual-range combat scenarios (sometimes referred to as “dogfighting”). In this video, team members discuss what makes the ACE program unlike other aerospace autonomy projects and how it represents a transformational moment in aerospace history, establishing a foundation for ethical, trusted, human-machine teaming for complex military and civilian applications.
Yesterday, Boston Dynamics bid farewell to the iconic Atlas humanoid robot. Or, the hydraulically powered version of Atlas, anyway—if you read between the lines of the video description (or even just read the actual lines of the video description), it was pretty clear that although hydraulic Atlas was retiring, it wasn’t the end of the Atlas humanoid program at Boston Dynamics. In fact, Atlas is already back, and better than ever.
Boston Dynamics’ new electric humanoid has been simultaneously one of the worst and best kept secrets in robotics over the last year or so. What I mean is that it seemed obvious, or even inevitable, that Boston Dynamics would take the expertise in humanoids that it developed with Atlas and combine that with its experience productizing a fully electric system like Spot. But just because something seems inevitable doesn’t mean it actually is inevitable, and Boston Dynamics has done an admirable job of carrying on as normal while building a fully electric humanoid from scratch. And here it is:
It’s all new, it’s all electric, and some of those movements make me slightly uncomfortable (we’ll get into that in a bit). The blog post accompanying the video is sparse on technical detail, but let’s go through the most interesting parts:
A decade ago, we were one of the only companies putting real R&D effort into humanoid robots. Now the landscape in the robotics industry is very different.
In 2010, we took a look at all the humanoid robots then in existence. You could, I suppose, argue that Honda was putting real R&D effort into ASIMO back then, but yeah, pretty much all those other humanoid robots came from research rather than industry. Now, it feels like we’re up to our eyeballs in commercial humanoids, but over the past couple of years, as startups have appeared out of nowhere with brand new humanoid robots, Boston Dynamics (to most outward appearances) was just keepin’ on with that R&D. Today’s announcement certainly changes that.
We are confident in our plan to not just create an impressive R&D project, but to deliver a valuable solution. This journey will start with Hyundai—in addition to investing in us, the Hyundai team is building the next generation of automotive manufacturing capabilities, and it will serve as a perfect testing ground for new Atlas applications.
Boston Dynamics
This is a significant advantage for Boston Dynamics—through Hyundai, they can essentially be their own first customer for humanoid robots, offering an immediate use case in a very friendly transitional environment. Tesla has a similar advantage with Optimus, but Boston Dynamics also has experience sourcing and selling and supporting Spot, which are those business-y things that seem like they’re not the hard part until they turn out to actually be the hard part.
In the months and years ahead, we’re excited to show what the world’s most dynamic humanoid robot can really do—in the lab, in the factory, and in our lives.
World’s most dynamic humanoid, you say? Awesome! Prove it! On video! With outtakes!
The electric version of Atlas will be stronger, with a broader range of motion than any of our previous generations. For example, our last generation hydraulic Atlas (HD Atlas) could already lift and maneuver a wide variety of heavy, irregular objects; we are continuing to build on those existing capabilities and are exploring several new gripper variations to meet a diverse set of expected manipulation needs in customer environments.
Now we’re getting to the good bits. It’s especially notable here that the electric version of Atlas will be “stronger” than the previous hydraulic version, because for a long time hydraulics were really the only way to get the kind of explosively powerful repetitive dynamic motions that enabled Atlas to do jumps and flips. And the switch away from hydraulics enables that extra range of motion now that there aren’t hoses and stuff to deal with.
It’s also pretty clear that the new Atlas is built to continue the kind of work that hydraulic Atlas has been doing, manipulating big and heavy car parts. This is in sharp contrast to most other humanoid robots that we’ve seen, which have primarily focused on moving small objects or bins around in warehouse environments.
We are not just delivering industry-leading hardware. Some of our most exciting progress over the past couple of years has been in software. In addition to our decades of expertise in simulation and model predictive control, we have equipped our robots with new AI and machine learning tools, like reinforcement learning and computer vision to ensure they can operate and adapt efficiently to complex real-world situations.
This is all par for the course now, but it’s also not particularly meaningful without more information. “We will give our robots new capabilities through machine learning and AI” is what every humanoid robotics company (and most other robotics companies) are saying, but I’m not sure that we’re there yet, because there’s an “okay but how?” that needs to happen first. I’m not saying that it won’t happen, just pointing out that until it does happen, it hasn’t happened.
The humanoid form factor is a useful design for robots working in a world designed for people. However, that form factor doesn’t limit our vision of how a bipedal robot can move, what tools it needs to succeed, and how it can help people accomplish more.
Agility Robotics has a similar philosophy with Digit, which has a mostly humanoid form factor to operate in human environments but also uses a non-human leg design because Agility believes that it works better. Atlas is a bit more human-like with its overall design, but there are some striking differences, including both range of motion and the head, both of which we’ll be talking more about.
We designed the electric version of Atlas to be stronger, more dexterous, and more agile. Atlas may resemble a human form factor, but we are equipping the robot to move in the most efficient way possible to complete a task, rather than being constrained by a human range of motion. Atlas will move in ways that exceed human capabilities.
The introductory video with the new Atlas really punches you in the face with this: Atlas is not constrained by human range of motion and will leverage its extra degrees of freedom to operate faster and more efficiently, even if you personally might find some of those motions a little bit unsettling.
Boston Dynamics
Combining decades of practical experience with first principles thinking, we are confident in our ability to deliver a robot uniquely capable of tackling dull, dirty, and dangerous tasks in real applications.
As Marco Hutter pointed out, most commercial robots (humanoids included) are really only targeting tasks that are dull, because dull usually means repetitive, and robots are very good at repetitive. Dirty is a little more complicated, and dangerous is a lot more complicated than that. I appreciate that Boston Dynamics is targeting those other categories of tasks from the outset.
Commercialization takes great engineering, but it also takes patience, imagination, and collaboration. Boston Dynamics has proven that we can deliver the full package with both industry-leading robotics and a complete ecosystem of software, services, and support to make robotics useful in the real world.
There’s a lot more to building a successful robotics company than building a successful robot. Arguably, building a successful robot is not even the hardest part, long term. Having over 1,500 Spot robots deployed with customers gives Boston Dynamics a well-established product infrastructure baseline to expand from with the new Atlas.
Taking a step back, let’s consider the position that Boston Dynamics is in when it comes to the humanoid space right now.
The new Atlas appears to be a reasonably mature platform with explicit commercial potential, but it’s not yet clear if this particular version of Atlas is truly commercially viable, in terms of being manufacturable and supportable at scale—it’s Atlas 001, after all. There’s likely a huge amount of work that still needs to be done, but it’s a process that the company has already gone through with Spot. My guess is that Boston Dynamics has some catching up to do with respect to other humanoid companies that are already entering pilot projects.
In terms of capabilities, even though the new Atlas hardware is new, it’s not like Boston Dynamics is starting from scratch, since they’re already transferring skills from hydraulic Atlas onto the new platform. But, we haven’t seen the new Atlas doing any practical tasks yet, so it’s hard to tell how far along that is, and it would be premature to assume that hydraulic Atlas doing all kinds of amazing things in YouTube videos implies that electric Atlas can do similar things safely and reliably in a product context. There’s a gap there, possibly an enormous gap, and we’ll need to see more from the new Atlas to understand where it’s at.
And obviously, there’s a lot of competition in humanoids right now, although I’d like to think that the potential for practical humanoid robots to be useful in society is significant enough that there will be room for lots of different approaches. Boston Dynamics was very early to humanoids in general, but they’re somewhat late to this recent (and rather abrupt) humanoid commercialization push. This may not be a problem, especially if Atlas targets applications where its strength and flexibility set it apart from other robots in the space, and if Boston Dynamics’ depth of experience deploying commercial robotic platforms helps it scale quickly.
Boston Dynamics
An electric Atlas may indeed have been inevitable, and it’s incredibly exciting to (finally!) see Boston Dynamics take this next step towards a commercial humanoid, which would deliver on more than a decade of ambition stretching back through the DARPA Robotics Challenge to PETMAN. We’ve been promised more manipulation footage soon, and Boston Dynamics expects that Atlas will be in the technology demonstration phase in Hyundai factories as early as next year.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
RoboCup German Open: 17–21 April 2024, KASSEL, GERMANY
AUVSI XPONENTIAL 2024: 22–25 April 2024, SAN DIEGO
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
Enjoy today’s videos!
I think suggesting that robots can’t fall is much less useful than suggesting that robots can fall and then quickly and easily get back up again.
Sanctuary AI says that this video shows Phoenix operating at “human-equivalent speed,” but they don’t specify which human or under which conditions. Though it’s faster than I would be, that’s for sure.
This is the RAM—robotic autonomous mower. It can be dropped anywhere in the world and will wake up with a mission to make tall grass around it shorter. Here is a quick clip of it working on the Presidio in SF.
This year, our robots braved a Finnish winter for the first time. As the snow clears and the days get longer, we’re looking back on how our robots made thousands of deliveries to S Group customers during the colder months.
Adopting omnidirectional field-of-view (FoV) cameras in aerial robots vastly improves perception ability, significantly advancing aerial robots’ capabilities in inspection, reconstruction, and rescue tasks. We propose OmniNxt, a fully open-source aerial robotics platform with omnidirectional perception.
The MAkEable framework enhances mobile manipulation in settings designed around humans by streamlining the process of sharing learned skills and experiences among different robots and contexts. Practical tests confirm its efficiency in a range of scenarios involving different robots, in tasks such as object grasping, coordinated bimanual manipulation, and the exchange of skills among humanoid robots.
We conducted trials of Ringbot outdoors on a 400 meter track. With a power source of 2300 milliamp-hours and 11.1 Volts, Ringbot managed to cover approximately 3 kilometers in 37 minutes. We commanded its target speed and direction using a remote joystick controller (Steam Deck), and Ringbot experienced five falls during this trial.
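Those trial numbers pin down a rough average speed and an upper bound on energy use. A quick back-of-the-envelope check (this assumes the full battery was drained over the run, which the description doesn't actually state):

```python
# Figures quoted from the Ringbot trial description.
distance_km = 3.0
time_min = 37.0
capacity_mah = 2300.0
voltage_v = 11.1

speed_kmh = distance_km / (time_min / 60.0)    # ~4.9 km/h average
energy_wh = capacity_mah / 1000.0 * voltage_v  # ~25.5 Wh on board
wh_per_km = energy_wh / distance_km            # ~8.5 Wh/km, upper bound

print(f"{speed_kmh:.1f} km/h, {energy_wh:.1f} Wh, <= {wh_per_km:.1f} Wh/km")
```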
As with every single cooking video, there’s a lot of background prep that’s required for this robot to cook an entire meal, but I would utterly demolish those fries.
Here’s everything you need to know about Wing delivery drones, except for how much human time they actually require and the true cost of making deliveries by drone, because those things aren’t fun to talk about.
This CMU Teruko Yata Memorial Lecture is by Agility Robotics’ Jonathan Hurst, on “Human-Centric Robots and How Learning Enables Generality.”
Humans have dreamt of robot helpers forever. What’s new is that this dream is becoming real. New developments in AI, building on foundations of hardware and passive dynamics, enable vastly improved generality. Robots can step out of highly structured environments and become more human-centric: operating in human spaces, interacting with people, and doing some basic human workflows. By connecting a Large Language Model, Digit can convert natural language high-level requests into complex robot instructions, composing the library of skills together, using human context to achieve real work in the human world. All of this is new—and it is never going back: AI will drive a fast-following robot revolution that is going to change the way we live.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
RoboCup German Open: 17–21 April 2024, KASSEL, GERMANY
AUVSI XPONENTIAL 2024: 22–25 April 2024, SAN DIEGO
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
Enjoy today’s videos!
USC, UPenn, Texas A&M, Oregon State, Georgia Tech, Temple University, and NASA Johnson Space Center are teaching dog-like robots to navigate craters of the moon and other challenging planetary surfaces in research funded by NASA.
AMBIDEX is a revolutionary robot that is fast, lightweight, and capable of human-like manipulation. We have added a sensor head, a torso, and a waist to greatly expand the range of movement. Compared to the previous arm-centered version, the overall impression and balance have completely changed.
Experience the future of robotics as UBTECH’s humanoid robot integrates with Baidu’s ERNIE through AppBuilder! Witness robots [that] understand language and autonomously perform tasks like folding clothes and object sorting.
I know the fins on this robot are for walking underwater rather than on land, but watching it move, I feel like it’s destined to evolve into something a little more terrestrial.
The video demonstrates the wave-basin testing of a 43 kg (95 lb) amphibious cycloidal propeller unmanned underwater vehicle (Cyclo-UUV) developed at the Advanced Vertical Flight Laboratory, Texas A&M University. The use of cyclo-propellers allows for 360 degree thrust vectoring for more robust dynamic controllability compared to UUVs with conventional screw propellers.
Operating robots precisely and at high speeds has been a long-standing goal of robotics research. To enable precise and safe dynamic motions, we introduce a four degree-of-freedom (DoF) tendon-driven robot arm. Tendons allow placing the actuation at the base to reduce the robot’s inertia, which we show significantly reduces peak collision forces compared to conventional motor-driven systems. Pairing our robot with pneumatic muscles allows generating high forces and highly accelerated motions, while benefiting from impact resilience through passive compliance.
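The collision claim follows from a standard stiff-contact impact model: equating kinetic energy to spring energy gives a peak force of v·sqrt(k·m), so peak force scales with the square root of the moving (reflected) inertia. A hedged illustration with made-up numbers (not values from the paper):

```python
import math

def peak_impact_force(m_eff_kg, v_ms, k_contact=30_000.0):
    """Peak force when an effective mass hits a linear spring contact at speed v.

    Energy balance 0.5*m*v**2 = 0.5*k*x**2 gives F_peak = v * sqrt(k * m),
    so moving actuation to the base (cutting reflected inertia) directly
    cuts peak collision force. Contact stiffness here is an assumption.
    """
    return v_ms * math.sqrt(k_contact * m_eff_kg)

# Same 2 m/s strike, illustrative effective masses:
print(peak_impact_force(2.0, 2.0))  # motors in the joints: heavier distal mass
print(peak_impact_force(0.5, 2.0))  # tendon-driven: actuators at the base
```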
Rovers on Mars have previously been caught in loose soils, and turning the wheels dug them deeper, just like a car stuck in sand. To avoid this, Rosalind Franklin has a unique wheel-walking locomotion mode to overcome difficult terrain, as well as autonomous navigation software.
MOMO has learned the Bam Yang Gang dance moves with its hand dexterity. :) By analyzing 2D dance videos, we extract detailed hand skeleton data, allowing us to recreate the moves in 3D using a hand model. With this information, MOMO replicates the dance motions with its arm and hand joints.
This UPenn GRASP SFI Seminar is from Eric Jang at 1X Technologies, on “Data Engines for Humanoid Robots.”
1X’s mission is to create an abundant supply of physical labor through androids that work alongside humans. I will share some of the progress 1X has been making towards general-purpose mobile manipulation. We have scaled up the number of tasks our androids can do by combining an end-to-end learning strategy with a no-code system to add new robotic capabilities. Our Android Operations team trains their own models on the data they gather themselves, producing an extremely high-quality “farm-to-table” dataset that can be used to learn extremely capable behaviors. I’ll also share an early preview of the progress we’ve been making towards a generalist “World Model” for humanoid robots.
This Microsoft Future Leaders in Robotics and AI Seminar is from Chahat Deep Singh at the University of Maryland, on “Minimal Perception: Enabling Autonomy in Palm-Sized Robots.”
The solution to robot autonomy lies at the intersection of AI, computer vision, computational imaging, and robotics—resulting in minimal robots. This talk explores the challenge of developing a minimal perception framework for tiny robots (less than 6 inches) used in field operations such as space inspections in confined spaces and robot pollination. Furthermore, we will delve into the realm of selective perception, embodied AI, and the future of robot autonomy in the palm of your hands.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
HRI 2024: 11–15 March 2024, BOULDER, COLO.
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
Enjoy today’s videos!
We present Human to Humanoid (H2O), a reinforcement learning (RL) based framework that enables real-time, whole-body teleoperation of a full-sized humanoid robot with only an RGB camera. We successfully achieve teleoperation of dynamic, whole-body motions in real-world scenarios, including walking, back jumping, kicking, turning, waving, pushing, boxing, etc. To the best of our knowledge, this is the first demonstration to achieve learning-based, real-time, whole-body humanoid teleoperation.
Legged robots have the potential to traverse complex terrain and access confined spaces beyond the reach of traditional platforms thanks to their ability to carefully select footholds and flexibly adapt their body posture while walking. However, robust deployment in real-world applications is still an open challenge. In this paper, we present a method for legged locomotion control using reinforcement learning and 3D volumetric representations to enable robust and versatile locomotion in confined and unstructured environments.
Improving the safety of collaborative manipulators necessitates the reduction of inertia in the moving part. We introduce a novel approach in the form of a passive, 3D wire aligner, serving as a lightweight and low-friction power transmission mechanism, thus achieving the desired low inertia in the manipulator’s operation.
Robot Era just launched Humanoid-Gym, an open-source reinforcement learning framework for bipedal humanoids. As you can see from the video, RL algorithms have given the robot, called Xiao Xing, or XBot, the ability to climb up and down haphazardly stacked boxes with relative stability and ease.
More than 80% of stroke survivors experience walking difficulty, significantly impacting their daily lives, independence, and overall quality of life. Now, new research from the University of Massachusetts Amherst pushes forward the bounds of stroke recovery with a unique robotic hip exoskeleton, designed as a training tool to improve walking function. This invites the possibility of new therapies that are more accessible and easier to translate from practice to daily life, compared to current rehabilitation methods.
DJI drones work to make the world a better place, and one of the ways we do this is through conservation work. We partnered with Halo Robotics and the OFI Orangutan Foundation International to showcase just how these drones can make an impact.
The aim of the test is to demonstrate the removal and replacement of satellite modules into a 27U CubeSat format using augmented reality control of a robot. In this use case, the “client” satellite is being upgraded and refueled using modular componentry. The robot will then remove the failed computer module and place it in a fixture. It will then do the same with the propellant tank. The robot will then place these correctly back into the satellite.
This video features some of the highlights and favorite moments from the CYBATHLON Challenges 2024 that took place on 2 February, showing so many diverse types of assistive technology taking on discipline tasks and displaying pilots’ tenacity and determination. The Challenges saw new teams, new tasks, and new formats for many of the CYBATHLON disciplines.
Small drones for catastrophic wildfires (ones covering more than [40,470 hectares]) are like bringing a flashlight to light up a football field. This short video describes the major uses for drones of all sizes and why and when they are used, or why not.
DARPA’s Learning Introspective Control (LINC) program is developing machine learning methods that show promise in making that scenario closer to reality. LINC aims to fundamentally improve the safety of mechanical systems—specifically in ground vehicles, ships, drone swarms, and robotics—using various methods that require minimal computing power. The result is an AI-powered controller the size of a cell phone.
Today, Figure is announcing an astonishing US $675 million Series B raise, which values the company at an even more astonishing $2.6 billion. Figure is one of the companies working toward a multipurpose or general-purpose (depending on whom you ask) bipedal or humanoid (depending on whom you ask) robot. The astonishing thing about this valuation is that Figure’s robot is still very much in the development phase—although they’re making rapid progress, which they demonstrate in a new video posted this week.
This round of funding comes from Microsoft, OpenAI Startup Fund, Nvidia, Jeff Bezos (through Bezos Expeditions), Parkway Venture Capital, Intel Capital, Align Ventures, and ARK Invest. Figure says that they’re going to use this new capital “for scaling up AI training, robot manufacturing, expanding engineering head count, and advancing commercial deployment efforts.” In addition, Figure and OpenAI will be collaborating on the development of “next-generation AI models for humanoid robots” which will “help accelerate Figure’s commercial timeline by enhancing the capabilities of humanoid robots to process and reason from language.”
As far as that commercial timeline goes, here’s the most recent update:
Figure
And to understand everything that’s going on here, we sent a whole bunch of questions to Jenna Reher, senior robotics/AI engineer at Figure.
What does “fully autonomous” mean, exactly?
Jenna Reher: In this case, we simply put the robot on the ground and hit go on the task with no other user input. What you see is using a learned vision model for bin detection that allows us to localize the robot relative to the target bin and get the bin pose. The robot can then navigate itself to within reach of the bin, determine grasp points based on the bin pose, and detect grasp success through the measured forces on the hands. Once the robot turns and sees the conveyor, the rest of the task rolls out in a similar manner. By doing things in this way we can move the bins and conveyor around in the test space or start the robot from a different position and still complete the task successfully.
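Reher's description maps onto a fairly conventional perceive-plan-act loop. A hypothetical sketch of that structure (every name here is illustrative shorthand for what she describes, not Figure's actual API):

```python
GRASP_FORCE_MIN_N = 5.0  # hypothetical threshold for "grasp succeeded"

def run_tote_task(robot):
    """Hypothetical outline of the autonomous tote task Reher describes.

    `robot` is assumed to expose perception, navigation, and manipulation
    calls; all method names are made up for illustration.
    """
    bin_pose = robot.detect_bin()                # learned vision model
    robot.walk_to(bin_pose, standoff_m=0.5)      # navigate within reach
    robot.grasp(robot.plan_grasp_points(bin_pose))
    if robot.hand_forces() < GRASP_FORCE_MIN_N:  # force-based success check
        robot.abort_task("no grasp detected")
        return
    conveyor_pose = robot.detect_conveyor()      # seen after turning
    robot.walk_to(conveyor_pose, standoff_m=0.5)
    robot.place_tote(conveyor_pose)
```

Because every step is driven by perceived poses rather than fixed waypoints, moving the bin or conveyor, or starting the robot elsewhere, still yields a successful run, which is exactly the behavior Reher highlights.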
How many takes did it take to get this take?
Reher: We’ve been running this use case consistently for some time now as part of our work in the lab, so we didn’t really have to change much for the filming here. We did two or three practice runs in the morning and then three filming takes. All of the takes were successful, so the extras were to make sure we got the cleanest one to show.
What’s back in the Advanced Actuator Lab?
Reher: We have an awesome team of folks working on some exciting custom actuator designs for our future robots, as well as supporting and characterizing the actuators that went into our current robots.
That’s a very specific number for “speed vs. human.” Which human did you measure the robot’s speed against?
Reher: We timed Brett [Adcock, founder of Figure] and a few poor engineers doing the task and took the average to get a rough baseline. If you are observant, that seemingly overspecific number is just saying we’re at 1/6 human speed. The main point that we’re trying to make here is that we are aware we are currently below human speed, and it’s an important metric to track as we improve.
What’s the tether for?
Reher: For this task we currently process the camera data off-robot while all of the behavior planning and control happens on board in the computer that’s in the torso. Our robots should be fully tetherless in the near future as we finish packaging all of that on board. We’ve been developing behaviors quickly in the lab here at Figure in parallel to all of the other systems engineering and integration efforts happening, so hopefully folks notice all of these subtle parallel threads converging as we try to release regular updates.
How the heck do you keep your robotics lab so clean?
Reher: Everything we’ve filmed so far is in our large robot test lab, so it’s a lot easier to keep the area clean when people’s desks aren’t intruding in the space. Definitely no guarantees on that level of cleanliness if the camera were pointed in the other direction!
Is the robot in the background doing okay?
Reher: Yes! The other robot was patiently standing there in the background, waiting for the filming to finish up so that our manipulation team could get back to training it to do more manipulation tasks. We hope we can share some more developments with that robot as the main star in the near future.
What would happen if I put a single bowling ball into that tote?
Reher: A bowling ball is particularly menacing to this task primarily due to the moving mass, in addition to the impact if you are throwing it in. The robot would in all likelihood drop the tote, stay standing, and abort the task. With what you see here, we assume that the mass of the tote is known a priori so that our whole-body controller can compensate for the external forces while tracking the manipulation task. Reacting to and estimating larger unknown disturbances such as this is a challenging problem, but we’re definitely working on it.
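The load-bearing assumption in that answer is the a priori tote mass. As a rough illustration of why a known payload is easy and an unknown one is hard, here’s a toy Python sketch of payload gravity compensation; the Jacobian-transpose mapping is standard, but none of this is Figure’s controller.

```python
import numpy as np

# Toy payload gravity compensation, not Figure's controller. With the
# tote's mass known a priori, the expected wrench at the hands can be
# folded into the whole-body controller's feedforward torques.
GRAVITY = 9.81  # m/s^2

def payload_feedforward(hand_jacobian: np.ndarray, tote_mass: float) -> np.ndarray:
    """Joint torques that cancel a known payload's weight.

    hand_jacobian: 6 x n_joints Jacobian at the grasp frame.
    """
    # Toy model: the payload is a pure downward force at the hands.
    wrench = np.array([0.0, 0.0, -tote_mass * GRAVITY, 0.0, 0.0, 0.0])
    # tau = J^T w maps an external wrench to joint torques.
    return hand_jacobian.T @ wrench

# A bowling ball rolling around inside the tote violates the
# known-mass, known-location assumption, so its effect has to be
# estimated online instead; that's the harder problem Reher mentions.
```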
Tell me more about that very Zen arm and hand pose that the robot adopts after putting the tote on the conveyor.
Reher: It does look kind of Zen! If you rewatch our coffee video, you’ll notice the same pose after the robot gets things brewing. This is a reset pose that our controller will go into between manipulation tasks while the robot is awaiting commands to execute either an engineered behavior or a learned policy.
Are the fingers less fragile than they look?
Reher: They are more robust than they look, but not impervious to damage by any means. The design is pretty modular, which is great; if we damage one or two fingers, there’s only a small number of parts to swap to get everything back up and running. The current fingers won’t necessarily survive a direct impact from a bad fall, but they can pick up totes and do manipulation tasks all day without issues.
Is the Figure logo footsteps?
Reher: One of the reasons I really like the Figure logo is that it has a bunch of different interpretations depending on how you look at it. In some cases it’s just an F that looks like a footstep plan rollout, while some of the logo animations we have look like active stepping. One other possible interpretation could be an occupancy grid.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
HRI 2024: 11–15 March 2024, BOULDER, COLO.
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
Enjoy today’s videos!
Legged robots have the potential to become vital in maintenance, home support, and exploration scenarios. In order to interact with and manipulate their environments, most legged robots are equipped with a dedicated robot arm, which means additional mass and mechanical complexity compared to standard legged robots. In this work, we explore pedipulation—using the legs of a legged robot for manipulation.
This work, by Philip Arm, Mayank Mittal, Hendrik Kolvenbach, and Marco Hutter from ETH Zurich’s Robotic Systems Lab, will be presented at the IEEE International Conference on Robotics and Automation (ICRA 2024) in May, in Japan (see events calendar above).
I learned a new word today: “stigmergy.” Stigmergy is a kind of group coordination that’s based on environmental modification. Like, when insects leave pheromone trails, they’re not directly sending messages to other individuals. But as a group, ants are able to manifest surprisingly complex coordinated behaviors. Cool, right? Researchers at IRIDIA are exploring what stigmergy can do for robots, with a cool “artificial pheromone” system based on a UV-sensitive surface.
“Automatic Design of Stigmergy-Based Behaviors for Robot Swarms,” by Muhammad Salman, David Garzón Ramos, and Mauro Birattari, is published in the journal Communications Engineering.
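If you want to play with the idea, stigmergy is easy to simulate: agents deposit a virtual “pheromone” in a shared grid and bias their movement toward stronger traces, never exchanging messages directly. The sketch below is a toy illustration of the mechanism, not the method from the paper.

```python
import random

# Toy stigmergy simulation: agents coordinate only through marks left
# in a shared grid, loosely mimicking IRIDIA's UV-sensitive floor.
# Illustrative only; not the method from the paper.
SIZE, EVAPORATION, DEPOSIT = 20, 0.95, 1.0
pheromone = [[0.0] * SIZE for _ in range(SIZE)]
agents = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(5)]

def step(pos):
    x, y = pos
    neighbors = [((x + dx) % SIZE, (y + dy) % SIZE)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    # Move toward stronger traces, with a floor of randomness so
    # agents still explore unmarked ground.
    weights = [pheromone[nx][ny] + 0.1 for nx, ny in neighbors]
    return random.choices(neighbors, weights=weights)[0]

for _ in range(100):
    agents = [step(a) for a in agents]
    for x, y in agents:
        pheromone[x][y] += DEPOSIT  # the environmental "message"
    # Evaporation slowly erases old trails, keeping the map current.
    pheromone = [[c * EVAPORATION for c in row] for row in pheromone]
```

Roughly, trails that get used get reinforced, so shared paths emerge without any agent ever addressing another. That’s the essence of stigmergy.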
Filmed in July 2017, this video shows Atlas walking through a “hatch” on a pitching surface. This skill uses autonomous behaviors, with the robot having no prior knowledge of the surface’s motion. Robot built by Boston Dynamics for the DARPA Robotics Challenge in 2013. Software by IHMC Robotics.
That IHMC video reminded me of the SAFFiR program for Shipboard Autonomous Firefighting Robots, which is responsible for a bunch of really cool research in partnership with the U.S. Naval Research Laboratory. NRL did some interesting stuff with Nexi robots from MIT and made its own videos. I think that effort didn’t get nearly enough credit for being very entertaining while communicating important robotics research.
Large industrial-asset operators increasingly use robotics to automate hazardous work at their facilities. This has led to soaring demand for autonomous inspection solutions like ANYmal. Series production by our partner Zollner enables ANYbotics to supply our customers with the required quantities of robots.
Hawkeye is a real-time multimodal conversation-and-interaction agent for Boston Dynamics’ mobile robot Spot. Leveraging OpenAI’s experimental GPT-4 Turbo and Vision AI models, Hawkeye aims to empower everyone, from seniors to health care professionals, in forming new and unique interactions with the world around them.
The rover Artemis, developed at the DFKI Robotics Innovation Center, has been equipped with a penetrometer that measures the soil’s penetration resistance to obtain precise information about soil strength. The video showcases an initial test run with the device mounted on the robot. During this test, the robot was remotely controlled, and the maximum penetration depth was limited to 15 millimeters.
To efficiently achieve complex humanoid loco-manipulation tasks in industrial contexts, we propose a combined vision-based tracker-localization interplay integrated as part of a task-space whole-body-optimization control. Our approach allows humanoid robots, targeted for industrial manufacturing, to manipulate and assemble large-scale objects while walking.
We developed a novel multibody robot (called the Two-Body Bot) consisting of two small-footprint mobile bases connected by a four-bar linkage where handlebars are mounted. Each base measures only 29.2 centimeters wide, making the robot likely the slimmest ever developed for mobile postural assistance.
Just last month, Oslo-based 1X (formerly Halodi Robotics) announced a massive US $100 million Series B, and clearly it has been putting the work in. A new video posted last week shows a [insert collective noun for humanoid robots here] of EVE android-ish mobile manipulators doing a wide variety of tasks leveraging end-to-end neural networks (pixels to actions). And best of all, the video seems to be more or less an honest one: a single take, at (appropriately) 1X speed, and full autonomy. But we still had questions! And 1X has answers.
If, like me, you had some very important questions after watching this video, including whether that plant is actually dead and what happened to the weighted companion cube, you’ll want to read this Q&A with Eric Jang, vice president of artificial intelligence at 1X.
How many takes did it take to get this take?
Eric Jang: About 10 takes that lasted more than a minute; this was our first time doing a video like this, so it was more about learning how to coordinate the film crew and set up the shoot to look impressive.
Did you train your robots specifically on floppy things and transparent things?
Jang: Nope! We train our neural network to pick up all kinds of objects—rigid, deformable, and transparent ones. Because we train manipulation end-to-end from pixels, picking up deformables and transparent objects is much easier than with a classical grasping pipeline, where you have to figure out the exact geometry of what you are trying to grasp.
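To make Jang’s contrast concrete, here’s a schematic pixels-to-actions policy in PyTorch. It’s a generic stand-in, not 1X’s network: the point is that nothing in it represents object geometry explicitly, so a transparent cup and a floppy towel are handled by the same learned mapping.

```python
import torch
import torch.nn as nn

# Schematic pixels-to-actions policy; a generic stand-in, not 1X's
# architecture. Object geometry is never modeled explicitly:
# deformable and transparent objects are just more pixels.
class PixelsToActions(nn.Module):
    def __init__(self, n_actions: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # LazyLinear infers its input size on the first forward pass.
        self.head = nn.LazyLinear(n_actions)  # e.g. arm/base commands

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(image))

policy = PixelsToActions()
frame = torch.rand(1, 3, 96, 96)  # one RGB camera frame
action = policy(frame)            # trained by imitation, not hand-coded
```

A classical pipeline would instead estimate the object’s pose and geometry before planning a grasp, which is exactly where transparent and floppy objects cause trouble.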
What keeps your robots from doing these tasks faster?
Jang: Our robots learn from demonstrations, so they go at exactly the same speed the human teleoperators demonstrate the task at. If we gathered demonstrations where we move faster, so would the robots.
What is the fate of the weighted companion cube?
Jang: At 1X, weighted companion cubes do not have rights.
That’s a very cool method for charging, but it seems a lot more complicated than some kind of drive-on interface directly with the base. Why use manipulation instead?
Jang: You’re right that this isn’t the simplest way to charge the robot, but if we are going to succeed at our mission to build generally capable and reliable robots that can manipulate all kinds of objects, our neural nets have to be able to do this task at the very least. Plus, it reduces costs quite a bit and simplifies the system!
What animal is that blue plush supposed to be?
Jang: It’s an obese shark, I think.
How many different robots are in this video?
Jang: 17? And more that are stationary.
How do you tell the robots apart?
Jang: They have little numbers printed on the base.
Is that plant dead?
Jang: Yes, we put it there because no CGI/3D-rendered video would ever go through the trouble of adding a dead plant.
What sort of existential crisis is the robot at the window having?
Jang: It was supposed to be opening and closing the window repeatedly (good for testing statistical significance).
If one of the robots was actually a human in a helmet and a suit holding grippers and standing on a mobile base, would I be able to tell?
Jang: I was super flattered by a comment on the YouTube video suggesting exactly that. But if you look at the area where the upper arm tapers at the shoulder, it’s too thin for a human to fit inside while still having such broad shoulders.
Why are your robots so happy all the time? Are you planning to do more complex HRI (human-robot interaction) stuff with their faces?
Jang: Yes, more complex HRI stuff is in the pipeline!
It looked like one of the robots had a harder time with that green floppy thing. Is it more difficult to grasp than the other objects?
Jang: Good catch! Yes, the green one is the worst of them all because there are many valid ways to pinch it with the gripper and lift it up. In robotic learning, if there are multiple ways to pick something up, it can actually confuse the machine learning model. Kind of like asking a car to turn left and right at the same time to avoid a tree.
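That “turn left and right at the same time” problem has a precise form in imitation learning: a policy trained with a mean-squared-error loss on multimodal demonstrations regresses to the average of the valid answers, which may itself be invalid. A tiny numerical illustration (hypothetical numbers, not 1X data):

```python
import numpy as np

# Toy version of the grasp ambiguity Jang describes: demonstrations
# pinch the plush either at its left edge (-1.0) or its right edge
# (+1.0). Both are valid, so the data is bimodal.
demo_grasps = np.array([-1.0, +1.0] * 50)

# A mean-squared-error policy's best constant prediction is the mean:
print(demo_grasps.mean())  # 0.0, the middle of the plush, where
                           # neither demonstrated grasp actually was
```

Mixture models, discretized actions, and diffusion-style policies are common ways to model the modes instead of their mean; whether 1X uses any of these isn’t stated here.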
Everyone else’s robots are making coffee. Can your robots make coffee?
Jang: Yep! We were planning to throw in some coffee making on this video as an Easter egg, but the coffee machine broke right before the film shoot and it turns out it’s impossible to get a Keurig K-Slim in Norway via next-day shipping.
1X is currently hiring both AI researchers (specialties include imitation learning, reinforcement learning, and large-scale training) and android operators (!), which actually sounds like a super fun and interesting job. More here.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Cybathlon Challenges: 02 February 2024, ZURICH
HRI 2024: 11–15 March 2024, BOULDER, COLO.
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
Enjoy today’s videos!
In this video, we present Ringbot, a novel leg-wheel transformer robot incorporating a monocycle mechanism with legs. Ringbot aims to provide versatile mobility by replacing the driver and driving components of a conventional monocycle vehicle with legs mounted on compact driving modules inside the wheel.
Making money with robots has always been a struggle, but I think ALOHA 2 has figured it out.
Seriously, though, that is some impressive manipulation capability. I don’t know what that freakish panda thing is, but getting a contact lens from the package onto its bizarre eyeball was some wild dexterity.
Highlights from testing our new arms built by Boardwalk Robotics. Installed in October 2023, these new arms aren’t just for boxing; they also provide much greater speed and power. This matches the mobility and manipulation goals we have for Nadia!
The least dramatic but possibly most important bit of that video is when Nadia uses her arms to help her balance against a wall, which is one of those things that humans do all the time without thinking about it. And we always appreciate being shown things that don’t go perfectly alongside things that do. The bit at the end there was Nadia not quite managing to do lateral arm raises. I can relate; that’s my reaction when I lift weights, too.
We present an avatar system designed to facilitate the embodiment of humanoid robots by human operators, validated through iCub3, a humanoid developed at the Istituto Italiano di Tecnologia.
Multimodal UAVs (unmanned aerial vehicles) are rarely capable of more than two modalities—that is, flying and walking or flying and perching. However, being able to fly, perch, and walk could further improve their usefulness by expanding their operating envelope. For instance, an aerial robot could fly a long distance, perch in a high place to survey the surroundings, then walk to avoid obstacles that could potentially inhibit flight. Birds are capable of these three tasks, and so offer a practical example of how a robot might be developed to do the same.
Nissan announces the concept model of “Iruyo,” a robot that supports babysitting while driving. Iruyo relieves the anxiety of the mother or father in the driver’s seat, and of the baby as well, supporting safe and secure driving for parents and children. Nissan and Akachan Honpo are working on a project to make life better with cars and babies. Iruyo was born out of the voices of mothers and fathers who said, “I can’t hold my baby while driving alone.”
Building 937 houses the coolest robots at CERN. This is where the action happens to build and program robots that can tackle the unconventional challenges presented by the laboratory’s unique facilities. Recently, a new type of robot called CERNquadbot has entered CERN’s robot pool and successfully completed its first radiation protection test in the North Area.
By blending 2D images with foundation models to build 3D feature fields, a new MIT method helps robots understand and manipulate nearby objects with open-ended language prompts.
Our current care system does not scale, and our populations are aging fast. Robodies are multipliers for care staff, allowing them to work together with local helpers to provide protection and assistance around the clock while maintaining personal contact with people in the community.
SEAS researchers are helping develop resilient and autonomous deep-space and extraterrestrial habitations by developing technologies to let autonomous robots repair or replace damaged components in a habitat. The research is part of the Resilient ExtraTerrestrial Habitats institute (RETHi), led by Purdue University in partnership with SEAS, the University of Connecticut, and the University of Texas at San Antonio. Its goal is to “design and operate resilient deep-space habitats that can adapt, absorb, and rapidly recover from expected and unexpected disruptions.”
Find out how a bold vision became a success story! The DLR Institute of Robotics and Mechatronics has been researching robotic arms since the 1990s, originally for use in space. It was a long and ambitious journey before these lightweight robotic arms could be used on Earth and finally in operating theaters, a journey that required concentrated robotics expertise, interdisciplinary cooperation, and ultimately a successful technology transfer.
Robotics is changing the world, driven by focused teams of diverse experts. Willow Garage operated with the mantra “Impact first, return on capital second” and through ROS and the PR2 had enormous impact. Autonomous mobile robots are finally being accepted in the service industry, and Savioke (now Relay Robotics) was created to drive that impact. This talk will trace the evolution of Relay robots and their deployment in hotels, hospitals, and other service industries, starting with roots at Willow Garage. As robotics technology is poised for the next round of advances, how do we create and maintain the organizations that continue to drive progress?