MIT News - Nanoscience and nanotechnology | MIT.nano

Brain surgery training from an avatar

Becky Ham | MIT.nano

Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain.

With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin.

“It’s an almost out-of-body experience,” Warf says of watching his avatar interact with the residents. “Maybe it’s how it feels to have an identical twin?”

And that’s the goal: Warf’s digital twin bridged the distance, allowing him to be functionally in two places at once. “It was my first training using this model, and it had excellent performance,” says Vasconcelos, a neurosurgery resident at Santa Casa de São Paulo School of Medical Sciences in São Paulo, Brazil. “As a resident, I now feel more confident and comfortable applying the technique in a real patient under the guidance of a professor.”

Warf’s avatar arrived via a new project launched by medical simulator and augmented reality (AR) company EDUCSIM. The company is part of the 2023 cohort of START.nano, MIT.nano’s deep-tech accelerator that offers early-stage startups discounted access to MIT.nano’s laboratories.

In March 2023, Giselle Coelho, EDUCSIM’s scientific director and a pediatric neurosurgeon at Santa Casa de São Paulo and Sabará Children’s Hospital, began working with technical staff in the MIT.nano Immersion Lab to create Warf’s avatar. By November, the avatar was training future surgeons like Vasconcelos.

“I had this idea to create the avatar of Dr. Warf as a proof of concept, and asked, ‘What would be the place in the world where they are working on technologies like that?’” Coelho says. “Then I found MIT.nano.”

Capturing a surgeon

As a neurosurgery resident, Coelho was so frustrated by the lack of practical training options for complex surgeries that she built her own model of a baby brain. The physical model contains all the structures of the brain and can even bleed, “simulating all the steps of a surgery, from incision to skin closure,” she says.

She soon found that simulators and virtual reality (VR) demonstrations reduced the learning curve for her own residents. Coelho launched EDUCSIM in 2017 to expand the variety and reach of the training for residents and experts looking to learn new techniques.

Those techniques include a procedure to treat infant hydrocephalus that was pioneered by Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital. Coelho had learned the technique directly from Warf and thought his avatar might be the way for surgeons who couldn’t travel to Boston to benefit from his expertise.

To create the avatar, Coelho worked with Talis Reks, the AR/VR/gaming/big data IT technologist in the Immersion Lab.

“A lot of technology and hardware can be very expensive for startups to access as they start their company journey,” Reks explains. “START.nano is one way of enabling them to utilize and afford the tools and technologies we have at MIT.nano’s Immersion Lab.”

Coelho and her colleagues needed high-fidelity and high-resolution motion-capture technology, volumetric video capture, and a range of other VR/AR technologies to capture Warf’s dexterous finger motions and facial expressions. Warf visited MIT.nano on several occasions to be digitally “captured,” including performing an operation on the physical baby model while wearing special gloves and clothing embedded with sensors.

“These technologies have mostly been used for entertainment or VFX [visual effects] or CGI [computer-generated imagery],” says Reks. “But this is a unique project, because we’re applying it now for real medical practice and real learning.”

One of the biggest challenges, Reks says, was helping to develop what Coelho calls “holoportation”: transmitting the 3D, volumetric video capture of Warf in real time over the internet so that his avatar can appear in transcontinental medical training.

The Warf avatar has synchronous and asynchronous modes. The training that Vasconcelos received was in the asynchronous mode, where residents can observe the avatar’s demonstrations and ask it questions. The answers, delivered in a variety of languages, come from AI algorithms that draw from previous research and an extensive bank of questions and answers provided by Warf.
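The article doesn’t describe how the asynchronous Q&A works internally, but systems like this are often built as retrieval over a curated question-and-answer bank. The toy sketch below is purely illustrative (the questions, answers, and word-overlap scoring are assumptions, not EDUCSIM’s implementation): it matches a trainee’s question to the closest stored question and returns the expert’s prepared answer.

```python
import re

def tokenize(text):
    # Lowercase and split into words; a production system would likely use
    # semantic embeddings and multilingual models, not raw word overlap.
    return set(re.findall(r"[a-z]+", text.lower()))

def best_answer(question, qa_bank):
    """Return the stored answer whose question is most similar to the
    trainee's question, scored by Jaccard similarity of word sets."""
    q = tokenize(question)
    def score(entry):
        stored = tokenize(entry[0])
        return len(q & stored) / len(q | stored)
    return max(qa_bank, key=score)[1]

# Hypothetical expert-provided Q&A pairs (illustrative only).
qa_bank = [
    ("Where do I make the incision?", "Begin with a small midline incision."),
    ("How do I control bleeding?", "Irrigate and apply gentle pressure."),
]

print(best_answer("What incision should I make?", qa_bank))
# → Begin with a small midline incision.
```

A real deployment would also need to generate answers in multiple languages, as the article notes the avatar does, which this sketch leaves out.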

In the synchronous mode, Warf operates his avatar from a distance in real time, Coelho says. “He could walk around the room, he could talk to me, he could orient me. It’s amazing.”

Coelho, Warf, Reks, and other team members demonstrated a combination of the modes in a second session in late December. This demo consisted of live volumetric video capture between the Immersion Lab and Brazil, spatialized and visible in real time through AR headsets. It significantly expanded on the previous demo, which had streamed volumetric data in only one direction, to a two-dimensional display.

Powerful impacts

Warf has a long history of training desperately needed pediatric neurosurgeons around the world, most recently through his nonprofit Neurokids. Remote and simulated instruction has played an increasingly large part in that training since the pandemic, he says, although he doesn’t feel it will ever completely replace in-person, hands-on instruction and collaboration.

“But if in fact one day we could have avatars, like this one from Giselle, in remote places showing people how to do things and answering questions for them, without the cost of travel, without the time cost and so forth, I think it could be really powerful,” Warf says.

The avatar project is especially important for surgeons serving remote and underserved areas like the Amazon region of Brazil, Coelho says. “This is a way to give them the same level of education that they would get in other places, and the same opportunity to be in touch with Dr. Warf.”

One baby treated for hydrocephalus at a recent Amazon clinic had traveled 30 hours by boat for the surgery, according to Coelho.

Training surgeons with the avatar, she says, “can change reality for this baby and can change the future.”

© Photo courtesy of the MIT.nano Immersion Lab.

Benjamin Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital, uses a virtual reality environment to demonstrate a procedure that he pioneered to treat infant hydrocephalus. As Warf operates his avatar from a distance in real time, medical residents in Brazil watch, interact, and learn in a 3D environment.
  • ✇MIT News - Nanoscience and nanotechnology | MIT.nano
  • Brain surgery training from an avatarBecky Ham | MIT.nano
    Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain. With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin. “I
     

Brain surgery training from an avatar

Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain.

With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin.

“It’s an almost out-of-body experience,” Warf says of watching his avatar interact with the residents. “Maybe it’s how it feels to have an identical twin?”

And that’s the goal: Warf’s digital twin bridged the distance, allowing him to be functionally in two places at once. “It was my first training using this model, and it had excellent performance,” says Vasconcelos, a neurosurgery resident at Santa Casa de São Paulo School of Medical Sciences in São Paulo, Brazil. “As a resident, I now feel more confident and comfortable applying the technique in a real patient under the guidance of a professor.”

Warf’s avatar arrived via a new project launched by medical simulator and augmented reality (AR) company EDUCSIM. The company is part of the 2023 cohort of START.nano, MIT.nano’s deep-tech accelerator that offers early-stage startups discounted access to MIT.nano’s laboratories.

In March 2023, Giselle Coelho, EDUCSIM’s scientific director and a pediatric neurosurgeon at Santa Casa de São Paulo and Sabará Children’s Hospital, began working with technical staff in the MIT.nano Immersion Lab to create Warf’s avatar. By November, the avatar was training future surgeons like Vasconcelos.

“I had this idea to create the avatar of Dr. Warf as a proof of concept, and asked, ‘What would be the place in the world where they are working on technologies like that?’” Coelho says. “Then I found MIT.nano.”

Capturing a Surgeon

As a neurosurgery resident, Coelho was so frustrated by the lack of practical training options for complex surgeries that she built her own model of a baby brain. The physical model contains all the structures of the brain and can even bleed, “simulating all the steps of a surgery, from incision to skin closure,” she says.

She soon found that simulators and virtual reality (VR) demonstrations reduced the learning curve for her own residents. Coelho launched EDUCSIM in 2017 to expand the variety and reach of the training for residents and experts looking to learn new techniques.

Those techniques include a procedure to treat infant hydrocephalus that was pioneered by Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital. Coelho had learned the technique directly from Warf and thought his avatar might be the way for surgeons who couldn’t travel to Boston to benefit from his expertise.

To create the avatar, Coelho worked with Talis Reks, the AR/VR/gaming/big data IT technologist in the Immersion Lab.

“A lot of technology and hardware can be very expensive for startups to access as they start their company journey,” Reks explains. “START.nano is one way of enabling them to utilize and afford the tools and technologies we have at MIT.nano’s Immersion Lab.”

Coelho and her colleagues needed high-fidelity and high-resolution motion-capture technology, volumetric video capture, and a range of other VR/AR technologies to capture Warf’s dexterous finger motions and facial expressions. Warf visited MIT.nano on several occasions to be digitally “captured,” including performing an operation on the physical baby model while wearing special gloves and clothing embedded with sensors.

“These technologies have mostly been used for entertainment or VFX [visual effects] or CGI [computer-generated imagery],” says Reks, “But this is a unique project, because we’re applying it now for real medical practice and real learning.”

One of the biggest challenges, Reks says, was helping to develop what Coelho calls “holoportation”— transmitting the 3D, volumetric video capture of Warf in real-time over the internet so that his avatar can appear in transcontinental medical training.

The Warf avatar has synchronous and asynchronous modes. The training that Vasconcelos received was in the asynchronous mode, where residents can observe the avatar’s demonstrations and ask it questions. The answers, delivered in a variety of languages, come from AI algorithms that draw from previous research and an extensive bank of questions and answers provided by Warf.

In the synchronous mode, Warf operates his avatar from a distance in real time, Coelho says. “He could walk around the room, he could talk to me, he could orient me. It’s amazing.”

Coelho, Warf, Reks, and other team members demonstrated a combination of the modes in a second session in late December. This demo consisted of volumetric live video capture between the Immersion Lab and Brazil, spatialized and visible in real-time through AR headsets. It significantly expanded upon the previous demo, which had only streamed volumetric data in one direction through a two-dimensional display.

Powerful impacts

Warf has a long history of training desperately needed pediatric neurosurgeons around the world, most recently through his nonprofit Neurokids. Remote and simulated training has been an increasingly large part of training since the pandemic, he says, although he doesn’t feel it will ever completely replace personal hands-on instruction and collaboration.

“But if in fact one day we could have avatars, like this one from Giselle, in remote places showing people how to do things and answering questions for them, without the cost of travel, without the time cost and so forth, I think it could be really powerful,” Warf says.

The avatar project is especially important for surgeons serving remote and underserved areas like the Amazon region of Brazil, Coelho says. “This is a way to give them the same level of education that they would get in other places, and the same opportunity to be in touch with Dr. Warf.”

One baby treated for hydrocephalus at a recent Amazon clinic had traveled by boat 30 hours for the surgery, according to Coelho.

Training surgeons with the avatar, she says, “can change reality for this baby and can change the future.”

© Photo courtesy of the MIT.nano Immersion Lab.

Benjamin Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital, uses a virtual reality environment to demonstrate a procedure that he pioneered to treat infant hydrocephalus. As Warf operates his avatar from a distance in real-time, medical residents in Brazil watch, interact, and learn in a 3D environment.
  • ✇MIT News - Nanoscience and nanotechnology | MIT.nano
  • Brain surgery training from an avatarBecky Ham | MIT.nano
    Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain. With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin. “I
     

Brain surgery training from an avatar

Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain.

With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin.

“It’s an almost out-of-body experience,” Warf says of watching his avatar interact with the residents. “Maybe it’s how it feels to have an identical twin?”

And that’s the goal: Warf’s digital twin bridged the distance, allowing him to be functionally in two places at once. “It was my first training using this model, and it had excellent performance,” says Vasconcelos, a neurosurgery resident at Santa Casa de São Paulo School of Medical Sciences in São Paulo, Brazil. “As a resident, I now feel more confident and comfortable applying the technique in a real patient under the guidance of a professor.”

Warf’s avatar arrived via a new project launched by medical simulator and augmented reality (AR) company EDUCSIM. The company is part of the 2023 cohort of START.nano, MIT.nano’s deep-tech accelerator that offers early-stage startups discounted access to MIT.nano’s laboratories.

In March 2023, Giselle Coelho, EDUCSIM’s scientific director and a pediatric neurosurgeon at Santa Casa de São Paulo and Sabará Children’s Hospital, began working with technical staff in the MIT.nano Immersion Lab to create Warf’s avatar. By November, the avatar was training future surgeons like Vasconcelos.

“I had this idea to create the avatar of Dr. Warf as a proof of concept, and asked, ‘What would be the place in the world where they are working on technologies like that?’” Coelho says. “Then I found MIT.nano.”

Capturing a Surgeon

As a neurosurgery resident, Coelho was so frustrated by the lack of practical training options for complex surgeries that she built her own model of a baby brain. The physical model contains all the structures of the brain and can even bleed, “simulating all the steps of a surgery, from incision to skin closure,” she says.

She soon found that simulators and virtual reality (VR) demonstrations reduced the learning curve for her own residents. Coelho launched EDUCSIM in 2017 to expand the variety and reach of the training for residents and experts looking to learn new techniques.

Those techniques include a procedure to treat infant hydrocephalus that was pioneered by Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital. Coelho had learned the technique directly from Warf and thought his avatar might be the way for surgeons who couldn’t travel to Boston to benefit from his expertise.

To create the avatar, Coelho worked with Talis Reks, the AR/VR/gaming/big data IT technologist in the Immersion Lab.

“A lot of technology and hardware can be very expensive for startups to access as they start their company journey,” Reks explains. “START.nano is one way of enabling them to utilize and afford the tools and technologies we have at MIT.nano’s Immersion Lab.”

Coelho and her colleagues needed high-fidelity and high-resolution motion-capture technology, volumetric video capture, and a range of other VR/AR technologies to capture Warf’s dexterous finger motions and facial expressions. Warf visited MIT.nano on several occasions to be digitally “captured,” including performing an operation on the physical baby model while wearing special gloves and clothing embedded with sensors.

“These technologies have mostly been used for entertainment or VFX [visual effects] or CGI [computer-generated imagery],” says Reks, “But this is a unique project, because we’re applying it now for real medical practice and real learning.”

One of the biggest challenges, Reks says, was helping to develop what Coelho calls “holoportation”— transmitting the 3D, volumetric video capture of Warf in real-time over the internet so that his avatar can appear in transcontinental medical training.

The Warf avatar has synchronous and asynchronous modes. The training that Vasconcelos received was in the asynchronous mode, where residents can observe the avatar’s demonstrations and ask it questions. The answers, delivered in a variety of languages, come from AI algorithms that draw from previous research and an extensive bank of questions and answers provided by Warf.

In the synchronous mode, Warf operates his avatar from a distance in real time, Coelho says. “He could walk around the room, he could talk to me, he could orient me. It’s amazing.”

Coelho, Warf, Reks, and other team members demonstrated a combination of the modes in a second session in late December. This demo consisted of volumetric live video capture between the Immersion Lab and Brazil, spatialized and visible in real-time through AR headsets. It significantly expanded upon the previous demo, which had only streamed volumetric data in one direction through a two-dimensional display.

Powerful impacts

Warf has a long history of training desperately needed pediatric neurosurgeons around the world, most recently through his nonprofit Neurokids. Remote and simulated training has been an increasingly large part of training since the pandemic, he says, although he doesn’t feel it will ever completely replace personal hands-on instruction and collaboration.

“But if in fact one day we could have avatars, like this one from Giselle, in remote places showing people how to do things and answering questions for them, without the cost of travel, without the time cost and so forth, I think it could be really powerful,” Warf says.

The avatar project is especially important for surgeons serving remote and underserved areas like the Amazon region of Brazil, Coelho says. “This is a way to give them the same level of education that they would get in other places, and the same opportunity to be in touch with Dr. Warf.”

One baby treated for hydrocephalus at a recent Amazon clinic had traveled by boat 30 hours for the surgery, according to Coelho.

Training surgeons with the avatar, she says, “can change reality for this baby and can change the future.”

© Photo courtesy of the MIT.nano Immersion Lab.

Benjamin Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital, uses a virtual reality environment to demonstrate a procedure that he pioneered to treat infant hydrocephalus. As Warf operates his avatar from a distance in real-time, medical residents in Brazil watch, interact, and learn in a 3D environment.
  • ✇MIT News - Nanoscience and nanotechnology | MIT.nano
  • Brain surgery training from an avatarBecky Ham | MIT.nano
    Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain. With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin. “I
     

Brain surgery training from an avatar

Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain.

With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin.

“It’s an almost out-of-body experience,” Warf says of watching his avatar interact with the residents. “Maybe it’s how it feels to have an identical twin?”

And that’s the goal: Warf’s digital twin bridged the distance, allowing him to be functionally in two places at once. “It was my first training using this model, and it had excellent performance,” says Vasconcelos, a neurosurgery resident at Santa Casa de São Paulo School of Medical Sciences in São Paulo, Brazil. “As a resident, I now feel more confident and comfortable applying the technique in a real patient under the guidance of a professor.”

Warf’s avatar arrived via a new project launched by medical simulator and augmented reality (AR) company EDUCSIM. The company is part of the 2023 cohort of START.nano, MIT.nano’s deep-tech accelerator that offers early-stage startups discounted access to MIT.nano’s laboratories.

In March 2023, Giselle Coelho, EDUCSIM’s scientific director and a pediatric neurosurgeon at Santa Casa de São Paulo and Sabará Children’s Hospital, began working with technical staff in the MIT.nano Immersion Lab to create Warf’s avatar. By November, the avatar was training future surgeons like Vasconcelos.

“I had this idea to create the avatar of Dr. Warf as a proof of concept, and asked, ‘What would be the place in the world where they are working on technologies like that?’” Coelho says. “Then I found MIT.nano.”

Capturing a Surgeon

As a neurosurgery resident, Coelho was so frustrated by the lack of practical training options for complex surgeries that she built her own model of a baby brain. The physical model contains all the structures of the brain and can even bleed, “simulating all the steps of a surgery, from incision to skin closure,” she says.

She soon found that simulators and virtual reality (VR) demonstrations reduced the learning curve for her own residents. Coelho launched EDUCSIM in 2017 to expand the variety and reach of the training for residents and experts looking to learn new techniques.

Those techniques include a procedure to treat infant hydrocephalus that was pioneered by Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital. Coelho had learned the technique directly from Warf and thought his avatar might be the way for surgeons who couldn’t travel to Boston to benefit from his expertise.

To create the avatar, Coelho worked with Talis Reks, the AR/VR/gaming/big data IT technologist in the Immersion Lab.

“A lot of technology and hardware can be very expensive for startups to access as they start their company journey,” Reks explains. “START.nano is one way of enabling them to utilize and afford the tools and technologies we have at MIT.nano’s Immersion Lab.”

Coelho and her colleagues needed high-fidelity and high-resolution motion-capture technology, volumetric video capture, and a range of other VR/AR technologies to capture Warf’s dexterous finger motions and facial expressions. Warf visited MIT.nano on several occasions to be digitally “captured,” including performing an operation on the physical baby model while wearing special gloves and clothing embedded with sensors.

“These technologies have mostly been used for entertainment or VFX [visual effects] or CGI [computer-generated imagery],” says Reks, “But this is a unique project, because we’re applying it now for real medical practice and real learning.”

One of the biggest challenges, Reks says, was helping to develop what Coelho calls “holoportation”— transmitting the 3D, volumetric video capture of Warf in real-time over the internet so that his avatar can appear in transcontinental medical training.

The Warf avatar has synchronous and asynchronous modes. The training that Vasconcelos received was in the asynchronous mode, where residents can observe the avatar’s demonstrations and ask it questions. The answers, delivered in a variety of languages, come from AI algorithms that draw from previous research and an extensive bank of questions and answers provided by Warf.

In the synchronous mode, Warf operates his avatar from a distance in real time, Coelho says. “He could walk around the room, he could talk to me, he could orient me. It’s amazing.”

Coelho, Warf, Reks, and other team members demonstrated a combination of the modes in a second session in late December. This demo consisted of volumetric live video capture between the Immersion Lab and Brazil, spatialized and visible in real-time through AR headsets. It significantly expanded upon the previous demo, which had only streamed volumetric data in one direction through a two-dimensional display.

Powerful impacts

Warf has a long history of training desperately needed pediatric neurosurgeons around the world, most recently through his nonprofit Neurokids. Remote and simulated training has been an increasingly large part of training since the pandemic, he says, although he doesn’t feel it will ever completely replace personal hands-on instruction and collaboration.

“But if in fact one day we could have avatars, like this one from Giselle, in remote places showing people how to do things and answering questions for them, without the cost of travel, without the time cost and so forth, I think it could be really powerful,” Warf says.

The avatar project is especially important for surgeons serving remote and underserved areas like the Amazon region of Brazil, Coelho says. “This is a way to give them the same level of education that they would get in other places, and the same opportunity to be in touch with Dr. Warf.”

One baby treated for hydrocephalus at a recent Amazon clinic had traveled by boat 30 hours for the surgery, according to Coelho.

Training surgeons with the avatar, she says, “can change reality for this baby and can change the future.”

© Photo courtesy of the MIT.nano Immersion Lab.

Benjamin Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital, uses a virtual reality environment to demonstrate a procedure that he pioneered to treat infant hydrocephalus. As Warf operates his avatar from a distance in real-time, medical residents in Brazil watch, interact, and learn in a 3D environment.
  • ✇MIT News - Nanoscience and nanotechnology | MIT.nano
  • Brain surgery training from an avatarBecky Ham | MIT.nano
    Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain. With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin. “I
     

Brain surgery training from an avatar

Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain.

With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin.

“It’s an almost out-of-body experience,” Warf says of watching his avatar interact with the residents. “Maybe it’s how it feels to have an identical twin?”

And that’s the goal: Warf’s digital twin bridged the distance, allowing him to be functionally in two places at once. “It was my first training using this model, and it had excellent performance,” says Vasconcelos, a neurosurgery resident at Santa Casa de São Paulo School of Medical Sciences in São Paulo, Brazil. “As a resident, I now feel more confident and comfortable applying the technique in a real patient under the guidance of a professor.”

Warf’s avatar arrived via a new project launched by medical simulator and augmented reality (AR) company EDUCSIM. The company is part of the 2023 cohort of START.nano, MIT.nano’s deep-tech accelerator that offers early-stage startups discounted access to MIT.nano’s laboratories.

In March 2023, Giselle Coelho, EDUCSIM’s scientific director and a pediatric neurosurgeon at Santa Casa de São Paulo and Sabará Children’s Hospital, began working with technical staff in the MIT.nano Immersion Lab to create Warf’s avatar. By November, the avatar was training future surgeons like Vasconcelos.

“I had this idea to create the avatar of Dr. Warf as a proof of concept, and asked, ‘What would be the place in the world where they are working on technologies like that?’” Coelho says. “Then I found MIT.nano.”

Capturing a surgeon

As a neurosurgery resident, Coelho was so frustrated by the lack of practical training options for complex surgeries that she built her own model of a baby brain. The physical model contains all the structures of the brain and can even bleed, “simulating all the steps of a surgery, from incision to skin closure,” she says.

She soon found that simulators and virtual reality (VR) demonstrations reduced the learning curve for her own residents. Coelho launched EDUCSIM in 2017 to expand the variety and reach of the training for residents and experts looking to learn new techniques.

Those techniques include a procedure to treat infant hydrocephalus that was pioneered by Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital. Coelho had learned the technique directly from Warf and thought his avatar might be the way for surgeons who couldn’t travel to Boston to benefit from his expertise.

To create the avatar, Coelho worked with Talis Reks, the AR/VR/gaming/big data IT technologist in the Immersion Lab.

“A lot of technology and hardware can be very expensive for startups to access as they start their company journey,” Reks explains. “START.nano is one way of enabling them to utilize and afford the tools and technologies we have at MIT.nano’s Immersion Lab.”

Coelho and her colleagues needed high-fidelity and high-resolution motion-capture technology, volumetric video capture, and a range of other VR/AR technologies to capture Warf’s dexterous finger motions and facial expressions. Warf visited MIT.nano on several occasions to be digitally “captured,” including performing an operation on the physical baby model while wearing special gloves and clothing embedded with sensors.

“These technologies have mostly been used for entertainment or VFX [visual effects] or CGI [computer-generated imagery],” says Reks. “But this is a unique project, because we’re applying it now for real medical practice and real learning.”

One of the biggest challenges, Reks says, was helping to develop what Coelho calls “holoportation”: transmitting the 3D volumetric video capture of Warf over the internet in real time so that his avatar can appear in transcontinental medical training.

The Warf avatar has synchronous and asynchronous modes. The training that Vasconcelos received was in the asynchronous mode, where residents can observe the avatar’s demonstrations and ask it questions. The answers, delivered in a variety of languages, come from AI algorithms that draw from previous research and an extensive bank of questions and answers provided by Warf.
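The asynchronous question answering described above can be pictured as a lookup against that prepared bank. The sketch below is a minimal illustration of the idea only; the bank entries, function names, and the string-similarity method are invented placeholders, not EDUCSIM's actual implementation.

```python
# Hypothetical sketch of the asynchronous mode's Q&A step: match a trainee's
# question against a curated bank of questions answered in advance by the
# expert, and return the stored answer for the closest match.
# All entries and names here are illustrative assumptions.
from difflib import SequenceMatcher

QA_BANK = {
    "When is this procedure indicated?":
        "For infant hydrocephalus cases meeting the selection criteria.",
    "What should I check before closing?":
        "Confirm hemostasis and review each step of the closure checklist.",
}

def answer(question: str) -> str:
    """Return the stored answer for the most similar bank question."""
    best = max(
        QA_BANK,
        key=lambda q: SequenceMatcher(None, question.lower(), q.lower()).ratio(),
    )
    return QA_BANK[best]
```

A production system would use more robust semantic matching (and translation for multilingual delivery), but the pattern of routing free-form questions to expert-authored answers is the same.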

In the synchronous mode, Warf operates his avatar from a distance in real time, Coelho says. “He could walk around the room, he could talk to me, he could orient me. It’s amazing.”

Coelho, Warf, Reks, and other team members demonstrated a combination of the modes in a second session in late December. This demo consisted of live volumetric video capture between the Immersion Lab and Brazil, spatialized and visible in real time through AR headsets. It significantly expanded on the previous demo, which had streamed volumetric data in only one direction, through a two-dimensional display.

Powerful impacts

Warf has a long history of training desperately needed pediatric neurosurgeons around the world, most recently through his nonprofit Neurokids. Remote and simulated instruction has played an increasingly large role in training since the pandemic, he says, although he doesn’t believe it will ever completely replace hands-on, in-person instruction and collaboration.

“But if in fact one day we could have avatars, like this one from Giselle, in remote places showing people how to do things and answering questions for them, without the cost of travel, without the time cost and so forth, I think it could be really powerful,” Warf says.

The avatar project is especially important for surgeons serving remote and underserved areas like the Amazon region of Brazil, Coelho says. “This is a way to give them the same level of education that they would get in other places, and the same opportunity to be in touch with Dr. Warf.”

One baby treated for hydrocephalus at a recent Amazon clinic had traveled by boat 30 hours for the surgery, according to Coelho.

Training surgeons with the avatar, she says, “can change reality for this baby and can change the future.”

© Photo courtesy of the MIT.nano Immersion Lab.

Benjamin Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital, uses a virtual reality environment to demonstrate a procedure that he pioneered to treat infant hydrocephalus. As Warf operates his avatar from a distance in real time, medical residents in Brazil watch, interact, and learn in a 3D environment.
  • ✇MIT News - Nanoscience and nanotechnology | MIT.nano
  • Brain surgery training from an avatarBecky Ham | MIT.nano
    Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain. With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin. “I
     

Brain surgery training from an avatar

Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain.

With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin.

“It’s an almost out-of-body experience,” Warf says of watching his avatar interact with the residents. “Maybe it’s how it feels to have an identical twin?”

And that’s the goal: Warf’s digital twin bridged the distance, allowing him to be functionally in two places at once. “It was my first training using this model, and it had excellent performance,” says Vasconcelos, a neurosurgery resident at Santa Casa de São Paulo School of Medical Sciences in São Paulo, Brazil. “As a resident, I now feel more confident and comfortable applying the technique in a real patient under the guidance of a professor.”

Warf’s avatar arrived via a new project launched by medical simulator and augmented reality (AR) company EDUCSIM. The company is part of the 2023 cohort of START.nano, MIT.nano’s deep-tech accelerator that offers early-stage startups discounted access to MIT.nano’s laboratories.

In March 2023, Giselle Coelho, EDUCSIM’s scientific director and a pediatric neurosurgeon at Santa Casa de São Paulo and Sabará Children’s Hospital, began working with technical staff in the MIT.nano Immersion Lab to create Warf’s avatar. By November, the avatar was training future surgeons like Vasconcelos.

“I had this idea to create the avatar of Dr. Warf as a proof of concept, and asked, ‘What would be the place in the world where they are working on technologies like that?’” Coelho says. “Then I found MIT.nano.”

Capturing a Surgeon

As a neurosurgery resident, Coelho was so frustrated by the lack of practical training options for complex surgeries that she built her own model of a baby brain. The physical model contains all the structures of the brain and can even bleed, “simulating all the steps of a surgery, from incision to skin closure,” she says.

She soon found that simulators and virtual reality (VR) demonstrations reduced the learning curve for her own residents. Coelho launched EDUCSIM in 2017 to expand the variety and reach of the training for residents and experts looking to learn new techniques.

Those techniques include a procedure to treat infant hydrocephalus that was pioneered by Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital. Coelho had learned the technique directly from Warf and thought his avatar might be the way for surgeons who couldn’t travel to Boston to benefit from his expertise.

To create the avatar, Coelho worked with Talis Reks, the AR/VR/gaming/big data IT technologist in the Immersion Lab.

“A lot of technology and hardware can be very expensive for startups to access as they start their company journey,” Reks explains. “START.nano is one way of enabling them to utilize and afford the tools and technologies we have at MIT.nano’s Immersion Lab.”

Coelho and her colleagues needed high-fidelity and high-resolution motion-capture technology, volumetric video capture, and a range of other VR/AR technologies to capture Warf’s dexterous finger motions and facial expressions. Warf visited MIT.nano on several occasions to be digitally “captured,” including performing an operation on the physical baby model while wearing special gloves and clothing embedded with sensors.

“These technologies have mostly been used for entertainment or VFX [visual effects] or CGI [computer-generated imagery],” says Reks, “But this is a unique project, because we’re applying it now for real medical practice and real learning.”

One of the biggest challenges, Reks says, was helping to develop what Coelho calls “holoportation”— transmitting the 3D, volumetric video capture of Warf in real-time over the internet so that his avatar can appear in transcontinental medical training.

The Warf avatar has synchronous and asynchronous modes. The training that Vasconcelos received was in the asynchronous mode, where residents can observe the avatar’s demonstrations and ask it questions. The answers, delivered in a variety of languages, come from AI algorithms that draw from previous research and an extensive bank of questions and answers provided by Warf.

In the synchronous mode, Warf operates his avatar from a distance in real time, Coelho says. “He could walk around the room, he could talk to me, he could orient me. It’s amazing.”

Coelho, Warf, Reks, and other team members demonstrated a combination of the modes in a second session in late December. This demo consisted of volumetric live video capture between the Immersion Lab and Brazil, spatialized and visible in real-time through AR headsets. It significantly expanded upon the previous demo, which had only streamed volumetric data in one direction through a two-dimensional display.

Powerful impacts

Warf has a long history of training desperately needed pediatric neurosurgeons around the world, most recently through his nonprofit Neurokids. Remote and simulated training has been an increasingly large part of training since the pandemic, he says, although he doesn’t feel it will ever completely replace personal hands-on instruction and collaboration.

“But if in fact one day we could have avatars, like this one from Giselle, in remote places showing people how to do things and answering questions for them, without the cost of travel, without the time cost and so forth, I think it could be really powerful,” Warf says.

The avatar project is especially important for surgeons serving remote and underserved areas like the Amazon region of Brazil, Coelho says. “This is a way to give them the same level of education that they would get in other places, and the same opportunity to be in touch with Dr. Warf.”

One baby treated for hydrocephalus at a recent Amazon clinic had traveled by boat 30 hours for the surgery, according to Coelho.

Training surgeons with the avatar, she says, “can change reality for this baby and can change the future.”

© Photo courtesy of the MIT.nano Immersion Lab.

Benjamin Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital, uses a virtual reality environment to demonstrate a procedure that he pioneered to treat infant hydrocephalus. As Warf operates his avatar from a distance in real-time, medical residents in Brazil watch, interact, and learn in a 3D environment.
  • ✇MIT News - Nanoscience and nanotechnology | MIT.nano
  • Brain surgery training from an avatarBecky Ham | MIT.nano
    Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain. With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin. “I
     

Brain surgery training from an avatar

Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain.

With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin.

“It’s an almost out-of-body experience,” Warf says of watching his avatar interact with the residents. “Maybe it’s how it feels to have an identical twin?”

And that’s the goal: Warf’s digital twin bridged the distance, allowing him to be functionally in two places at once. “It was my first training using this model, and it had excellent performance,” says Vasconcelos, a neurosurgery resident at Santa Casa de São Paulo School of Medical Sciences in São Paulo, Brazil. “As a resident, I now feel more confident and comfortable applying the technique in a real patient under the guidance of a professor.”

Warf’s avatar arrived via a new project launched by medical simulator and augmented reality (AR) company EDUCSIM. The company is part of the 2023 cohort of START.nano, MIT.nano’s deep-tech accelerator that offers early-stage startups discounted access to MIT.nano’s laboratories.

In March 2023, Giselle Coelho, EDUCSIM’s scientific director and a pediatric neurosurgeon at Santa Casa de São Paulo and Sabará Children’s Hospital, began working with technical staff in the MIT.nano Immersion Lab to create Warf’s avatar. By November, the avatar was training future surgeons like Vasconcelos.

“I had this idea to create the avatar of Dr. Warf as a proof of concept, and asked, ‘What would be the place in the world where they are working on technologies like that?’” Coelho says. “Then I found MIT.nano.”

Capturing a Surgeon

As a neurosurgery resident, Coelho was so frustrated by the lack of practical training options for complex surgeries that she built her own model of a baby brain. The physical model contains all the structures of the brain and can even bleed, “simulating all the steps of a surgery, from incision to skin closure,” she says.

She soon found that simulators and virtual reality (VR) demonstrations reduced the learning curve for her own residents. Coelho launched EDUCSIM in 2017 to expand the variety and reach of the training for residents and experts looking to learn new techniques.

Those techniques include a procedure to treat infant hydrocephalus that was pioneered by Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital. Coelho had learned the technique directly from Warf and thought his avatar might be the way for surgeons who couldn’t travel to Boston to benefit from his expertise.

To create the avatar, Coelho worked with Talis Reks, the AR/VR/gaming/big data IT technologist in the Immersion Lab.

“A lot of technology and hardware can be very expensive for startups to access as they start their company journey,” Reks explains. “START.nano is one way of enabling them to utilize and afford the tools and technologies we have at MIT.nano’s Immersion Lab.”

Coelho and her colleagues needed high-fidelity and high-resolution motion-capture technology, volumetric video capture, and a range of other VR/AR technologies to capture Warf’s dexterous finger motions and facial expressions. Warf visited MIT.nano on several occasions to be digitally “captured,” including performing an operation on the physical baby model while wearing special gloves and clothing embedded with sensors.

“These technologies have mostly been used for entertainment or VFX [visual effects] or CGI [computer-generated imagery],” says Reks, “But this is a unique project, because we’re applying it now for real medical practice and real learning.”

One of the biggest challenges, Reks says, was helping to develop what Coelho calls “holoportation”— transmitting the 3D, volumetric video capture of Warf in real-time over the internet so that his avatar can appear in transcontinental medical training.

The Warf avatar has synchronous and asynchronous modes. The training that Vasconcelos received was in the asynchronous mode, where residents can observe the avatar’s demonstrations and ask it questions. The answers, delivered in a variety of languages, come from AI algorithms that draw from previous research and an extensive bank of questions and answers provided by Warf.

In the synchronous mode, Warf operates his avatar from a distance in real time, Coelho says. “He could walk around the room, he could talk to me, he could orient me. It’s amazing.”

Coelho, Warf, Reks, and other team members demonstrated a combination of the modes in a second session in late December. This demo consisted of volumetric live video capture between the Immersion Lab and Brazil, spatialized and visible in real-time through AR headsets. It significantly expanded upon the previous demo, which had only streamed volumetric data in one direction through a two-dimensional display.

Powerful impacts

Warf has a long history of training desperately needed pediatric neurosurgeons around the world, most recently through his nonprofit Neurokids. Remote and simulated training has been an increasingly large part of training since the pandemic, he says, although he doesn’t feel it will ever completely replace personal hands-on instruction and collaboration.

“But if in fact one day we could have avatars, like this one from Giselle, in remote places showing people how to do things and answering questions for them, without the cost of travel, without the time cost and so forth, I think it could be really powerful,” Warf says.

The avatar project is especially important for surgeons serving remote and underserved areas like the Amazon region of Brazil, Coelho says. “This is a way to give them the same level of education that they would get in other places, and the same opportunity to be in touch with Dr. Warf.”

One baby treated for hydrocephalus at a recent Amazon clinic had traveled by boat 30 hours for the surgery, according to Coelho.

Training surgeons with the avatar, she says, “can change reality for this baby and can change the future.”

© Photo courtesy of the MIT.nano Immersion Lab.

Benjamin Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital, uses a virtual reality environment to demonstrate a procedure that he pioneered to treat infant hydrocephalus. As Warf operates his avatar from a distance in real-time, medical residents in Brazil watch, interact, and learn in a 3D environment.
  • ✇MIT News - Nanoscience and nanotechnology | MIT.nano
  • Brain surgery training from an avatarBecky Ham | MIT.nano
    Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain. With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin. “I
     

Brain surgery training from an avatar

Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain.

With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin.

“It’s an almost out-of-body experience,” Warf says of watching his avatar interact with the residents. “Maybe it’s how it feels to have an identical twin?”

And that’s the goal: Warf’s digital twin bridged the distance, allowing him to be functionally in two places at once. “It was my first training using this model, and it had excellent performance,” says Vasconcelos, a neurosurgery resident at Santa Casa de São Paulo School of Medical Sciences in São Paulo, Brazil. “As a resident, I now feel more confident and comfortable applying the technique in a real patient under the guidance of a professor.”

Warf’s avatar arrived via a new project launched by medical simulator and augmented reality (AR) company EDUCSIM. The company is part of the 2023 cohort of START.nano, MIT.nano’s deep-tech accelerator that offers early-stage startups discounted access to MIT.nano’s laboratories.

In March 2023, Giselle Coelho, EDUCSIM’s scientific director and a pediatric neurosurgeon at Santa Casa de São Paulo and Sabará Children’s Hospital, began working with technical staff in the MIT.nano Immersion Lab to create Warf’s avatar. By November, the avatar was training future surgeons like Vasconcelos.

“I had this idea to create the avatar of Dr. Warf as a proof of concept, and asked, ‘What would be the place in the world where they are working on technologies like that?’” Coelho says. “Then I found MIT.nano.”

Capturing a Surgeon

As a neurosurgery resident, Coelho was so frustrated by the lack of practical training options for complex surgeries that she built her own model of a baby brain. The physical model contains all the structures of the brain and can even bleed, “simulating all the steps of a surgery, from incision to skin closure,” she says.

She soon found that simulators and virtual reality (VR) demonstrations reduced the learning curve for her own residents. Coelho launched EDUCSIM in 2017 to expand the variety and reach of the training for residents and experts looking to learn new techniques.

Those techniques include a procedure to treat infant hydrocephalus that was pioneered by Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital. Coelho had learned the technique directly from Warf and thought his avatar might be the way for surgeons who couldn’t travel to Boston to benefit from his expertise.

To create the avatar, Coelho worked with Talis Reks, the AR/VR/gaming/big data IT technologist in the Immersion Lab.

“A lot of technology and hardware can be very expensive for startups to access as they start their company journey,” Reks explains. “START.nano is one way of enabling them to utilize and afford the tools and technologies we have at MIT.nano’s Immersion Lab.”

Coelho and her colleagues needed high-fidelity and high-resolution motion-capture technology, volumetric video capture, and a range of other VR/AR technologies to capture Warf’s dexterous finger motions and facial expressions. Warf visited MIT.nano on several occasions to be digitally “captured,” including performing an operation on the physical baby model while wearing special gloves and clothing embedded with sensors.

“These technologies have mostly been used for entertainment or VFX [visual effects] or CGI [computer-generated imagery],” says Reks, “But this is a unique project, because we’re applying it now for real medical practice and real learning.”

One of the biggest challenges, Reks says, was helping to develop what Coelho calls “holoportation”— transmitting the 3D, volumetric video capture of Warf in real-time over the internet so that his avatar can appear in transcontinental medical training.

The Warf avatar has synchronous and asynchronous modes. The training that Vasconcelos received was in the asynchronous mode, where residents can observe the avatar’s demonstrations and ask it questions. The answers, delivered in a variety of languages, come from AI algorithms that draw from previous research and an extensive bank of questions and answers provided by Warf.

In the synchronous mode, Warf operates his avatar from a distance in real time, Coelho says. “He could walk around the room, he could talk to me, he could orient me. It’s amazing.”

Coelho, Warf, Reks, and other team members demonstrated a combination of the modes in a second session in late December. This demo consisted of volumetric live video capture between the Immersion Lab and Brazil, spatialized and visible in real-time through AR headsets. It significantly expanded upon the previous demo, which had only streamed volumetric data in one direction through a two-dimensional display.

Powerful impacts

Warf has a long history of training desperately needed pediatric neurosurgeons around the world, most recently through his nonprofit Neurokids. Remote and simulated training has been an increasingly large part of training since the pandemic, he says, although he doesn’t feel it will ever completely replace personal hands-on instruction and collaboration.

“But if in fact one day we could have avatars, like this one from Giselle, in remote places showing people how to do things and answering questions for them, without the cost of travel, without the time cost and so forth, I think it could be really powerful,” Warf says.

The avatar project is especially important for surgeons serving remote and underserved areas like the Amazon region of Brazil, Coelho says. “This is a way to give them the same level of education that they would get in other places, and the same opportunity to be in touch with Dr. Warf.”

One baby treated for hydrocephalus at a recent Amazon clinic had traveled by boat 30 hours for the surgery, according to Coelho.

Training surgeons with the avatar, she says, “can change reality for this baby and can change the future.”

© Photo courtesy of the MIT.nano Immersion Lab.

Benjamin Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital, uses a virtual reality environment to demonstrate a procedure that he pioneered to treat infant hydrocephalus. As Warf operates his avatar from a distance in real-time, medical residents in Brazil watch, interact, and learn in a 3D environment.
  • ✇MIT News - Nanoscience and nanotechnology | MIT.nano
  • Brain surgery training from an avatarBecky Ham | MIT.nano
    Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain. With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin. “I
     

Brain surgery training from an avatar

Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain.

With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin.

“It’s an almost out-of-body experience,” Warf says of watching his avatar interact with the residents. “Maybe it’s how it feels to have an identical twin?”

And that’s the goal: Warf’s digital twin bridged the distance, allowing him to be functionally in two places at once. “It was my first training using this model, and it had excellent performance,” says Vasconcelos, a neurosurgery resident at Santa Casa de São Paulo School of Medical Sciences in São Paulo, Brazil. “As a resident, I now feel more confident and comfortable applying the technique in a real patient under the guidance of a professor.”

Warf’s avatar arrived via a new project launched by medical simulator and augmented reality (AR) company EDUCSIM. The company is part of the 2023 cohort of START.nano, MIT.nano’s deep-tech accelerator that offers early-stage startups discounted access to MIT.nano’s laboratories.

In March 2023, Giselle Coelho, EDUCSIM’s scientific director and a pediatric neurosurgeon at Santa Casa de São Paulo and Sabará Children’s Hospital, began working with technical staff in the MIT.nano Immersion Lab to create Warf’s avatar. By November, the avatar was training future surgeons like Vasconcelos.

“I had this idea to create the avatar of Dr. Warf as a proof of concept, and asked, ‘What would be the place in the world where they are working on technologies like that?’” Coelho says. “Then I found MIT.nano.”

Capturing a Surgeon

As a neurosurgery resident, Coelho was so frustrated by the lack of practical training options for complex surgeries that she built her own model of a baby brain. The physical model contains all the structures of the brain and can even bleed, “simulating all the steps of a surgery, from incision to skin closure,” she says.

She soon found that simulators and virtual reality (VR) demonstrations reduced the learning curve for her own residents. Coelho launched EDUCSIM in 2017 to expand the variety and reach of the training for residents and experts looking to learn new techniques.

Those techniques include a procedure to treat infant hydrocephalus that was pioneered by Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital. Coelho had learned the technique directly from Warf and thought his avatar might be the way for surgeons who couldn’t travel to Boston to benefit from his expertise.

To create the avatar, Coelho worked with Talis Reks, the AR/VR/gaming/big data IT technologist in the Immersion Lab.

“A lot of technology and hardware can be very expensive for startups to access as they start their company journey,” Reks explains. “START.nano is one way of enabling them to utilize and afford the tools and technologies we have at MIT.nano’s Immersion Lab.”

Coelho and her colleagues needed high-fidelity and high-resolution motion-capture technology, volumetric video capture, and a range of other VR/AR technologies to capture Warf’s dexterous finger motions and facial expressions. Warf visited MIT.nano on several occasions to be digitally “captured,” including performing an operation on the physical baby model while wearing special gloves and clothing embedded with sensors.

“These technologies have mostly been used for entertainment or VFX [visual effects] or CGI [computer-generated imagery],” says Reks, “But this is a unique project, because we’re applying it now for real medical practice and real learning.”

One of the biggest challenges, Reks says, was helping to develop what Coelho calls “holoportation”— transmitting the 3D, volumetric video capture of Warf in real-time over the internet so that his avatar can appear in transcontinental medical training.

The Warf avatar has synchronous and asynchronous modes. The training that Vasconcelos received was in the asynchronous mode, where residents can observe the avatar’s demonstrations and ask it questions. The answers, delivered in a variety of languages, come from AI algorithms that draw from previous research and an extensive bank of questions and answers provided by Warf.

In the synchronous mode, Warf operates his avatar from a distance in real time, Coelho says. “He could walk around the room, he could talk to me, he could orient me. It’s amazing.”

Coelho, Warf, Reks, and other team members demonstrated a combination of the modes in a second session in late December. This demo consisted of volumetric live video capture between the Immersion Lab and Brazil, spatialized and visible in real-time through AR headsets. It significantly expanded upon the previous demo, which had only streamed volumetric data in one direction through a two-dimensional display.

Powerful impacts

Warf has a long history of training desperately needed pediatric neurosurgeons around the world, most recently through his nonprofit Neurokids. Remote and simulated instruction has played an increasingly large role in training since the pandemic, he says, although he doesn’t feel it will ever completely replace in-person, hands-on instruction and collaboration.

“But if in fact one day we could have avatars, like this one from Giselle, in remote places showing people how to do things and answering questions for them, without the cost of travel, without the time cost and so forth, I think it could be really powerful,” Warf says.

The avatar project is especially important for surgeons serving remote and underserved areas like the Amazon region of Brazil, Coelho says. “This is a way to give them the same level of education that they would get in other places, and the same opportunity to be in touch with Dr. Warf.”

One baby treated for hydrocephalus at a recent Amazon clinic had traveled 30 hours by boat for the surgery, according to Coelho.

Training surgeons with the avatar, she says, “can change reality for this baby and can change the future.”

© Photo courtesy of the MIT.nano Immersion Lab.

Benjamin Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital, uses a virtual reality environment to demonstrate a procedure that he pioneered to treat infant hydrocephalus. As Warf operates his avatar from a distance in real time, medical residents in Brazil watch, interact, and learn in a 3D environment.
MIT.nano Family Day invites those at home to come to work

Every day, researchers come to MIT.nano to investigate at the nanoscale, but what’s it like to work there? On Aug. 21, MIT.nano staff invited their family members to come see what it takes to support discovery, education, and innovation in this cutting-edge research facility.

More than 50 people attended — spouses and partners, parents and children, nephews and nieces, in-laws, and others. The event was so much fun that staff and families alike were asking to do it again; by the end of the day, many were calling it the “first annual” MIT.nano Family Day.

After a welcome from Vladimir Bulović, faculty director of MIT.nano and Fariborz Maseeh Professor of Emerging Technologies, technical staff introduced the families to the building and its various facilities. Following a behind-the-scenes tour of some of the infrastructure spaces, they enjoyed lunch in MIT.nano’s East Lobby, then split into groups for hands-on experiences throughout the building.

In MIT.nano’s characterization facility, visitors gained a firsthand look at the powerful microscopes positioned inside the basement imaging suites and learned how to minimize vibrational and electromagnetic interference in order to make videos of atoms. In one suite, the guests viewed individual columns of atoms using an aberration-corrected scanning transmission electron microscope. In another, staff demonstrated how to use micro-computed tomography (microCT) to obtain three-dimensional imagery of the interior of electronic devices, biological samples, and other objects.

The next stop was the MIT.nano Immersion Lab for demonstrations of sensing technology and immersive experiences. Family members put on a mixed-reality headset and were transported — virtually — into the cockpit of an airplane preparing for takeoff. Those not interested in flying stepped inside a virtual art studio complete with balloons on the ceiling and snow falling outside.

Family members also donned full-body protective clothing called “bunny suits” and headed into MIT.nano’s cleanroom. As they toured the nanofabrication facility, the visitors observed researchers operating equipment and tested a particle counter that illustrated just how much of a wrecking ball dust can be at the nanoscale. A smaller group of volunteers joined MIT.nano staff in using the cleanroom processing tools to expose, develop, and etch a Family Day group photo onto a 50-nanometer-thick layer of aluminum on a silicon wafer, now displayed in MIT.nano’s first-floor cleanroom window.

The day concluded with an ice cream social and swag grab in MIT.nano’s courtyard, where staff and their visitors mingled with one another as a new, extended MIT and MIT.nano family.

© Photo collage: Tom Gearty

MIT.nano Family Day brought over 50 family members of MIT.nano staff to MIT for a fun-filled day of nanoscale exploration.
  • ✇MIT News - Nanoscience and nanotechnology | MIT.nano
  • Brain surgery training from an avatarBecky Ham | MIT.nano
    Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain. With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin. “I
     

Brain surgery training from an avatar

Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain.

With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself and while asking questions of Warf’s digital twin.

“It’s an almost out-of-body experience,” Warf says of watching his avatar interact with the residents. “Maybe it’s how it feels to have an identical twin?”

And that’s the goal: Warf’s digital twin bridged the distance, allowing him to be functionally in two places at once. “It was my first training using this model, and it had excellent performance,” says Vasconcelos, a neurosurgery resident at Santa Casa de São Paulo School of Medical Sciences in São Paulo, Brazil. “As a resident, I now feel more confident and comfortable applying the technique in a real patient under the guidance of a professor.”

Warf’s avatar arrived via a new project launched by medical simulator and augmented reality (AR) company EDUCSIM. The company is part of the 2023 cohort of START.nano, MIT.nano’s deep-tech accelerator that offers early-stage startups discounted access to MIT.nano’s laboratories.

In March 2023, Giselle Coelho, EDUCSIM’s scientific director and a pediatric neurosurgeon at Santa Casa de São Paulo and Sabará Children’s Hospital, began working with technical staff in the MIT.nano Immersion Lab to create Warf’s avatar. By November, the avatar was training future surgeons like Vasconcelos.

“I had this idea to create the avatar of Dr. Warf as a proof of concept, and asked, ‘What would be the place in the world where they are working on technologies like that?’” Coelho says. “Then I found MIT.nano.”

Capturing a Surgeon

As a neurosurgery resident, Coelho was so frustrated by the lack of practical training options for complex surgeries that she built her own model of a baby brain. The physical model contains all the structures of the brain and can even bleed, “simulating all the steps of a surgery, from incision to skin closure,” she says.

She soon found that simulators and virtual reality (VR) demonstrations reduced the learning curve for her own residents. Coelho launched EDUCSIM in 2017 to expand the variety and reach of the training for residents and experts looking to learn new techniques.

Those techniques include a procedure to treat infant hydrocephalus that was pioneered by Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital. Coelho had learned the technique directly from Warf and thought his avatar might be the way for surgeons who couldn’t travel to Boston to benefit from his expertise.

To create the avatar, Coelho worked with Talis Reks, the AR/VR/gaming/big data IT technologist in the Immersion Lab.

“A lot of technology and hardware can be very expensive for startups to access as they start their company journey,” Reks explains. “START.nano is one way of enabling them to utilize and afford the tools and technologies we have at MIT.nano’s Immersion Lab.”

Coelho and her colleagues needed high-fidelity and high-resolution motion-capture technology, volumetric video capture, and a range of other VR/AR technologies to capture Warf’s dexterous finger motions and facial expressions. Warf visited MIT.nano on several occasions to be digitally “captured,” including performing an operation on the physical baby model while wearing special gloves and clothing embedded with sensors.

“These technologies have mostly been used for entertainment or VFX [visual effects] or CGI [computer-generated imagery],” says Reks, “But this is a unique project, because we’re applying it now for real medical practice and real learning.”

One of the biggest challenges, Reks says, was helping to develop what Coelho calls “holoportation”— transmitting the 3D, volumetric video capture of Warf in real-time over the internet so that his avatar can appear in transcontinental medical training.

The Warf avatar has synchronous and asynchronous modes. The training that Vasconcelos received was in the asynchronous mode, where residents can observe the avatar’s demonstrations and ask it questions. The answers, delivered in a variety of languages, come from AI algorithms that draw from previous research and an extensive bank of questions and answers provided by Warf.

In the synchronous mode, Warf operates his avatar from a distance in real time, Coelho says. “He could walk around the room, he could talk to me, he could orient me. It’s amazing.”

Coelho, Warf, Reks, and other team members demonstrated a combination of the modes in a second session in late December. This demo consisted of volumetric live video capture between the Immersion Lab and Brazil, spatialized and visible in real-time through AR headsets. It significantly expanded upon the previous demo, which had only streamed volumetric data in one direction through a two-dimensional display.

Powerful impacts

Warf has a long history of training desperately needed pediatric neurosurgeons around the world, most recently through his nonprofit Neurokids. Remote and simulated training has been an increasingly large part of training since the pandemic, he says, although he doesn’t feel it will ever completely replace personal hands-on instruction and collaboration.

“But if in fact one day we could have avatars, like this one from Giselle, in remote places showing people how to do things and answering questions for them, without the cost of travel, without the time cost and so forth, I think it could be really powerful,” Warf says.

The avatar project is especially important for surgeons serving remote and underserved areas like the Amazon region of Brazil, Coelho says. “This is a way to give them the same level of education that they would get in other places, and the same opportunity to be in touch with Dr. Warf.”

One baby treated for hydrocephalus at a recent Amazon clinic had traveled by boat 30 hours for the surgery, according to Coelho.

Training surgeons with the avatar, she says, “can change reality for this baby and can change the future.”

© Photo courtesy of the MIT.nano Immersion Lab.

Benjamin Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital, uses a virtual reality environment to demonstrate a procedure that he pioneered to treat infant hydrocephalus. As Warf operates his avatar from a distance in real-time, medical residents in Brazil watch, interact, and learn in a 3D environment.
  • ✇MIT News - Nanoscience and nanotechnology | MIT.nano
  • MIT.nano Family Day invites those at home to come to workMIT.nano
    Every day, researchers come to MIT.nano to investigate at the nanoscale, but what’s it like to work there? On Aug. 21, MIT.nano staff invited their family members to come see what it takes to support discovery, education, and innovation in this cutting-edge research facility. More than 50 people attended — spouses and partners, parents and children, nephews and nieces, in-laws, and others. The event was so much fun that staff and families alike were asking to do it again; by the end of the day
     

MIT.nano Family Day invites those at home to come to work

Every day, researchers come to MIT.nano to investigate at the nanoscale, but what’s it like to work there? On Aug. 21, MIT.nano staff invited their family members to come see what it takes to support discovery, education, and innovation in this cutting-edge research facility.

More than 50 people attended — spouses and partners, parents and children, nephews and nieces, in-laws, and others. The event was so much fun that staff and families alike were asking to do it again; by the end of the day calling it the “first annual” MIT.nano Family Day.

After a welcome from Vladimir Bulović, faculty director of MIT.nano and Fariborz Maseeh Professor of Emerging Technologies, technical staff introduced the families to the building and its various facilities. Following a behind-the-scenes tour of some of the infrastructure spaces, they enjoyed lunch in MIT.nano’s East Lobby, then split into groups for hands-on experiences throughout the building.

In MIT.nano’s characterization facility, visitors gained a firsthand look at the powerful microscopes positioned inside the basement imaging suites and learned how to minimize vibrational and electromagnetic interference in order to make videos of atoms. In one suite, the guests viewed individual columns of atoms using an aberration-corrected scanning transmission electron microscope. In another, staff demonstrated how to use micro-computed tomography (microCT) to obtain three-dimensional imagery of the interior of electronic devices, biological samples, and other objects.

The next stop was the MIT.nano Immersion Lab for demonstrations of sensing technology and immersive experiences. Family members put on a mixed-reality headset and were transported — virtually — into the cockpit of an airplane preparing for takeoff. Those not interested in flying stepped inside a virtual art studio complete with balloons on the ceiling and snow falling outside.

Family members also donned full-body protective clothing called “bunny suits” and headed into MIT.nano’s cleanroom. As they toured the nanofabrication facility, the visitors observed researchers operating equipment and tested a particle counter that illustrated just how much of a wrecking ball dust can be at the nanoscale. A smaller group of volunteers joined MIT.nano staff in using the cleanroom processing tools to expose, develop, and etch a Family Day group photo onto a 50-nanometer-thick layer of aluminum on a silicon wafer, now displayed in MIT.nano’s first floor cleanroom window.

The day concluded with an ice cream social and swag grab in MIT.nano’s courtyard, where staff and their visitors mingled with one another as a new, extended MIT and MIT.nano family.

© Photo collage: Tom Gearty

MIT.nano Family Day brought over 50 family members of MIT.nano staff to MIT for a fun-filled day of nanoscale exploration.
  • ✇MIT News - Nanoscience and nanotechnology | MIT.nano
  • MIT.nano Family Day invites those at home to come to workMIT.nano
    Every day, researchers come to MIT.nano to investigate at the nanoscale, but what’s it like to work there? On Aug. 21, MIT.nano staff invited their family members to come see what it takes to support discovery, education, and innovation in this cutting-edge research facility. More than 50 people attended — spouses and partners, parents and children, nephews and nieces, in-laws, and others. The event was so much fun that staff and families alike were asking to do it again; by the end of the day
     

MIT.nano Family Day invites those at home to come to work

Every day, researchers come to MIT.nano to investigate at the nanoscale, but what’s it like to work there? On Aug. 21, MIT.nano staff invited their family members to come see what it takes to support discovery, education, and innovation in this cutting-edge research facility.

More than 50 people attended — spouses and partners, parents and children, nephews and nieces, in-laws, and others. The event was so much fun that staff and families alike were asking to do it again; by the end of the day, they were calling it the “first annual” MIT.nano Family Day.

After a welcome from Vladimir Bulović, faculty director of MIT.nano and Fariborz Maseeh Professor of Emerging Technologies, technical staff introduced the families to the building and its various facilities. Following a behind-the-scenes tour of some of the infrastructure spaces, they enjoyed lunch in MIT.nano’s East Lobby, then split into groups for hands-on experiences throughout the building.

In MIT.nano’s characterization facility, visitors gained a firsthand look at the powerful microscopes positioned inside the basement imaging suites and learned how to minimize vibrational and electromagnetic interference in order to make videos of atoms. In one suite, the guests viewed individual columns of atoms using an aberration-corrected scanning transmission electron microscope. In another, staff demonstrated how to use micro-computed tomography (microCT) to obtain three-dimensional imagery of the interior of electronic devices, biological samples, and other objects.

The next stop was the MIT.nano Immersion Lab for demonstrations of sensing technology and immersive experiences. Family members put on a mixed-reality headset and were transported — virtually — into the cockpit of an airplane preparing for takeoff. Those not interested in flying stepped inside a virtual art studio complete with balloons on the ceiling and snow falling outside.

Family members also donned full-body protective clothing called “bunny suits” and headed into MIT.nano’s cleanroom. As they toured the nanofabrication facility, the visitors observed researchers operating equipment and tested a particle counter that illustrated just how much of a wrecking ball dust can be at the nanoscale. A smaller group of volunteers joined MIT.nano staff in using the cleanroom processing tools to expose, develop, and etch a Family Day group photo onto a 50-nanometer-thick layer of aluminum on a silicon wafer, now displayed in MIT.nano’s first-floor cleanroom window.

The day concluded with an ice cream social and swag grab in MIT.nano’s courtyard, where staff and their visitors mingled with one another as a new, extended MIT and MIT.nano family.

© Photo collage: Tom Gearty

MIT.nano Family Day brought over 50 family members of MIT.nano staff to MIT for a fun-filled day of nanoscale exploration.

Arrays of quantum rods could enhance TVs or virtual reality devices

Flat-screen TVs that incorporate quantum dots are now commercially available, but it has been more difficult to create arrays of their elongated cousins, quantum rods, for commercial devices. Quantum rods can control both the polarization and color of light, making them useful for generating 3D images for virtual reality devices.

Using scaffolds made of folded DNA, MIT engineers have come up with a new way to precisely assemble arrays of quantum rods. By depositing quantum rods onto a DNA scaffold in a highly controlled way, the researchers can regulate their orientation, which is a key factor in determining the polarization of light emitted by the array. This makes it easier to add depth and dimensionality to a virtual scene.

“One of the challenges with quantum rods is: How do you align them all at the nanoscale so they’re all pointing in the same direction?” says Mark Bathe, an MIT professor of biological engineering and the senior author of the new study. “When they’re all pointing in the same direction on a 2D surface, then they all have the same properties of how they interact with light and control its polarization.”

MIT postdocs Chi Chen and Xin Luo are the lead authors of the paper, which appears today in Science Advances. Robert Macfarlane, an associate professor of materials science and engineering; Alexander Kaplan PhD ’23; and Moungi Bawendi, the Lester Wolfe Professor of Chemistry, are also authors of the study.

Nanoscale structures

Over the past 15 years, Bathe and others have led the design and fabrication of nanoscale structures made of DNA, also known as DNA origami. DNA, a highly stable and programmable molecule, is an ideal building material for tiny structures that could be used for a variety of applications, including delivering drugs, acting as biosensors, or forming scaffolds for light-harvesting materials.

Bathe’s lab has developed computational methods that allow researchers to simply enter a target nanoscale shape they want to create, and the program will calculate the sequences of DNA that will self-assemble into the right shape. They also developed scalable fabrication methods that incorporate quantum dots into these DNA-based materials.

In a 2022 paper, Bathe and Chen showed that they could use DNA to scaffold quantum dots in precise positions using scalable biological fabrication. Building on that work, they teamed up with Macfarlane’s lab to tackle the challenge of arranging quantum rods into 2D arrays, which is more difficult because the rods need to be aligned in the same direction.

Existing approaches to creating aligned arrays of quantum rods, such as mechanically rubbing them with a fabric or sweeping them in one direction with an electric field, have had only limited success. This is because high-efficiency light emission requires the rods to be kept at least 10 nanometers from each other, so that they won’t “quench,” or suppress, their neighbors’ light-emitting activity.

To achieve that, the researchers devised a way to attach quantum rods to diamond-shaped DNA origami structures, which can be built at the right size to maintain that distance. These DNA structures are then attached to a surface, where they fit together like puzzle pieces.

“The quantum rods sit on the origami in the same direction, so now you have patterned all these quantum rods through self-assembly on 2D surfaces, and you can do that over the micron scale needed for different applications like microLEDs,” Bathe says. “You can orient them in specific directions that are controllable and keep them well-separated because the origamis are packed and naturally fit together, as puzzle pieces would.”

Assembling the puzzle

As the first step in getting this approach to work, the researchers had to come up with a way to attach DNA strands to the quantum rods. To do that, Chen developed a process that involves emulsifying DNA into a mixture with the quantum rods, then rapidly dehydrating the mixture, which allows the DNA molecules to form a dense layer on the surface of the rods.

This process takes only a few minutes, much faster than any existing method for attaching DNA to nanoscale particles, which may be key to enabling commercial applications.

“The unique aspect of this method lies in its near-universal applicability to any water-loving ligand with affinity to the nanoparticle surface, allowing them to be instantly pushed onto the surface of the nanoscale particles. By harnessing this method, we achieved a significant reduction in manufacturing time from several days to just a few minutes,” Chen says.

These DNA strands then act like Velcro, helping the quantum rods stick to a DNA origami template, which forms a thin film that coats a silicate surface. This thin film of DNA is first formed via self-assembly by joining neighboring DNA templates together via overhanging strands of DNA along their edges.

The researchers now hope to create wafer-scale surfaces with etched patterns, which could allow them to scale their design to device-scale arrangements of quantum rods for numerous applications, beyond only microLEDs or augmented reality/virtual reality.

“The method that we describe in this paper is great because it provides good spatial and orientational control of how the quantum rods are positioned. The next steps are going to be making arrays that are more hierarchical, with programmed structure at many different length scales. The ability to control the sizes, shapes, and placement of these quantum rod arrays is a gateway to all sorts of different electronics applications,” Macfarlane says.

“DNA is particularly attractive as a manufacturing material because it can be biologically produced, which is both scalable and sustainable, in line with the emerging U.S. bioeconomy. Translating this work toward commercial devices by solving several remaining bottlenecks, including switching to environmentally safe quantum rods, is what we’re focused on next,” Bathe adds.

The research was funded by the Office of Naval Research, the National Science Foundation, the Army Research Office, the Department of Energy, and the National Institute of Environmental Health Sciences.

© Image: Dr. Xin Luo, Bathe BioNanoLab

MIT engineers have used DNA origami scaffolds to create precisely structured arrays of quantum rods, which could be incorporated into LEDs for televisions or virtual reality devices.

Q&A: A high-tech take on Wagner’s “Parsifal” opera

The world-famous Bayreuth Festival in Germany, annually centered around the works of composer Richard Wagner, launched this summer on July 25 with a production that has been making headlines. Director Jay Scheib, an MIT faculty member, has created a version of Wagner’s celebrated opera “Parsifal” that is set in an apocalyptic future (rather than the original medieval past) and uses augmented reality headset technology for a portion of the audience, among other visual effects. People using the headsets see hundreds of additional visuals, from fast-moving clouds to arrows being shot at them. The AR portion of the production was developed by a team led by designer and MIT Technical Instructor Joshua Higgason.

The new “Parsifal” has engendered extensive media attention and discussion among opera followers and the viewing public. Five years in the making, it was developed with the encouragement of Bayreuth Festival general manager Katharina Wagner, Richard Wagner’s great-granddaughter. The production runs until Aug. 27, and can also be streamed on Stage+. Scheib, the Class of 1949 Professor in MIT’s Music and Theater Arts program, recently talked to MIT News about the project from Bayreuth.

Q: Your production of “Parsifal” led off this year’s entire Bayreuth festival. How’s it going?

A: From my point of view it’s going quite swimmingly. The leading German opera critics and the audiences have been super-supportive and Bayreuth makes it possible for a work to evolve … Given the complexity of the technical challenge of making an AR project function in an opera house, the bar was so high, it was a difficult challenge, and we’re really happy we found a way forward, a way to make it work, and a way to make it fit into an artistic process. I feel great.

Q: You offer a new interpretation of “Parsifal,” and a new setting for it. What is it, and why did you choose to interpret it this way?

A: One of the main themes in “Parsifal” is that the long-time king of this holy grail cult is wounded, and his wound will not heal. [With that in mind], we looked at what the world was like when the opera premiered in the late 19th century, around the time of what was known as the Great African Scramble, when Europe re-drew the map of Africa, largely based on resources, including mineral resources.

Cobalt remains [the focus of] dirty mining practices in the Democratic Republic of Congo, and is a requirement for a lot of our electronic objects, in particular batteries. There are also these massive copper deposits discovered under a Buddhist temple in Afghanistan, and lithium under a sacred site in Nevada. We face an intense challenge in climate change, and the predictions are not good. Some of our solutions like electric cars require these materials, so they’re only solutions for some people, while others suffer [where minerals are being mined]. We started thinking about how wounds never heal, and when the prospect of creating a better world opens new wounds in other communities. … That became a theme. It also comes out of the time when we were making it, when Covid happened and George Floyd was murdered, which created an opportunity in the U.S. to start speaking very openly about wounds that have not healed.

We set it in a largely post-human environment, where we didn’t succeed, and everything has collapsed. In the third act, there’s derelict mining equipment, and the holy water is this energy-giving force, but in fact it’s this lithium-ion pool, which gives us energy and then poisons us. That’s the theme we created.

Q: What were your goals about integrating the AR technology into the opera, and how did you achieve that?

A: First, I was working with my collaborator Joshua Higgason. No one had ever really done this before, so we just started researching whether it was possible. And most of the people we talked to said, “Don’t do it. It’s just not going to work.” Having always been a daredevil at heart, I was like, “Oh, come on, we can figure this out.”

We were diligent in exploring the possibilities. We made multiple trips to Bayreuth and made these millimeter-accurate laser scans of the auditorium and the stage. We built a variety of models to see how to make AR work in a large environment, where 2,000 headsets could respond simultaneously. We built a team of animators and developers and programmers and designers, from Portugal to Cambridge to New York to Hungary, the UK, and a group in Germany. Josh led this team, and they got after it, but it took us the better part of two years to make it possible for an audience, some of whom don’t really use smartphones, to put on an AR headset and have it just work.

I can’t even believe we did this. But it’s working.

Q: In opera there’s hopefully a productive tension between tradition and innovation. How do you think about that when it comes to Wagner at Bayreuth?

A: Innovation is the tradition at Bayreuth. Musically and scenographically. “Parsifal” was composed for this particular opera house, and I’m incredibly respectful of what this event is made for. We are trying to create a balanced and unified experience, between the scenic design and the AR and the lighting and the costume design, and create perfect moments of convergence where you really lose yourself in the environment. I believe wholly in the production and the performers are extraordinary. Truly, truly, truly extraordinary.

Q: People have been focused on the issue of bringing AR to Bayreuth, but what has Bayreuth brought to you as a director?

A: Working in Bayreuth has been an incredible experience. The level of intellectual integrity among the technicians is extraordinary. The amount of care and patience and curiosity and expertise in Bayreuth is off the charts. This community of artists is the greatest. … People come here because it’s an incredible meeting of the minds, and for that I’m immensely filled with gratitude every day I come into the rehearsal room. The conductor, Pablo Heras-Casado, and I have been working on this for several years. And the music is still first. We’re setting up technology not to overtake the music, but to support it, and visually amplify it.

It must be said that Katharina Wagner has been one of the most powerfully supportive artistic directors I have ever worked with. I find it inspiring to witness her tenacity and vision in seeing all of this through, despite the hurdles. It’s been a great collaboration. That’s the essence: great collaboration.

This work was supported, in part, by an MIT.nano Immersion Lab Gaming Program seed grant, and was developed using capabilities in the Immersion Lab. The project was also funded, in part, by a grant from the MIT Center for Art, Science, and Technology.

© Image: Enrico Nawrath. Courtesy of the Bayreuther Festival

Director and MIT Professor Jay Scheib speaks about his widely heralded production of Wagner’s “Parsifal” opera at the Bayreuth Festival, which features an apocalyptic theme and augmented reality headsets for the audience.