IEEE Spectrum

Nasir Ahmed: An Unsung Hero of Digital Media

By Willie D. Jones | 19 August 2024, 14:00


Stop for a second and think about the Internet without digital images or video. There would be no faces on Facebook. Instagram and TikTok probably wouldn’t exist. Those Zoom meetings that took the place of in-person gatherings for school or work during the height of the COVID-19 pandemic? Not an option.

Digital audio’s place in our Internet-connected world is just as important as still images and video. It has changed the music business—from production to distribution to the way fans buy, collect, and store their favorite songs.

What do those millions of profiles on LinkedIn, dating apps, and social media platforms (and the inexhaustible selection of music available for download online) have in common? They rely on a compression algorithm called the discrete cosine transform, or DCT, which played a major role in allowing digital files to be transmitted across computer networks.

“DCT has been one of the key components of many past image- and video-coding algorithms for more than three decades,” says Touradj Ebrahimi, a professor at Ecole Polytechnique Fédérale de Lausanne, in Switzerland, who currently serves as chairman of the JPEG standardization committee. “Only a few image-compression standards not using DCT exist today,” he adds.

The Internet applications people use every day but largely take for granted were made possible by scientists and engineers who, for the most part, toiled in anonymity. One such “hidden figure” is Nasir Ahmed, the Indian-American engineer who figured out an elegant way to cut down the size of digital image files without sacrificing their most critical visual details.

Ahmed published his seminal paper about the discrete cosine transform compression algorithm he invented in 1974, a time when the fledgling Internet was exclusively dial-up and text-based. There were no pictures accompanying the words, nor could there have been, because Internet data was transmitted over standard copper telephone landlines, which was a major limitation on speed and bandwidth.

“Only a few image-compression standards not using DCT exist today.” –Touradj Ebrahimi, EPFL

These days, with the benefit of superfast chips and optical-fiber networks, data download speeds for a laptop with a fiber connection reach 1 gigabit per second. So, a music lover can download a 4-minute song to their laptop (or more likely a smartphone) in a second or two. In the dial-up era, when Internet users’ download speeds topped out at 56 kilobits per second (and were usually only half that fast), pulling down the same song from a server would have taken nearly all day. Getting a picture to appear on a computer’s screen was a process akin to watching grass grow.

Ahmed was convinced there had to be a way to cut down the size of digital files and speed up the process. He set off on a quest to represent with ones and zeros what is critical to an image being legible, while tossing aside the bits that are less important. The answer, which built on the earlier work of mathematician and information-theory pioneer Claude Shannon, took a while to come into focus. But because of Ahmed’s determination and unwavering belief in the value of what he was doing, he persevered even after others told him that it was not worth the effort.

Raised to Love Technology

It seemed almost preordained that Ahmed would have a career in one of the STEM fields. Nasir, who was born in Bengaluru, India, in 1940, was raised by his maternal grandparents. His grandfather, an electrical engineer, had been sent to the United States in 1919 to work at General Electric’s facility in Schenectady, N.Y. He shared tales of his time in the United States with his grandson and encouraged young Nasir to emigrate there. In 1961, after earning a bachelor’s degree in electrical engineering at the University of Visvesvaraya College of Engineering, in Bengaluru, Ahmed did just that, leaving India that fall for graduate school at the University of New Mexico, in Albuquerque. Ahmed earned a master’s degree and a Ph.D. in electrical engineering in 1963 and 1966, respectively.

During his first year in Albuquerque, he met Esther Parente, a graduate student from Argentina. They soon became inseparable and were married while he was working toward his doctorate. Sixty years later, they are still together.

The Seed of an Idea

In 1966, Ahmed, fresh out of grad school with his Ph.D., was hired as a principal research engineer at Honeywell’s newly created computer division. While there, Ahmed was first exposed to Walsh functions, a family of rectangular waveforms used to analyze digital representations of analog signals. The fast algorithms that could be created based on Walsh functions had many potential applications. Ahmed focused on using these signal-processing and analysis techniques to reduce the file size of a digital image without losing too much of the visual detail in the uncompressed version.

That research focus remained his primary interest when he returned to academia, taking a job as a professor in the electrical and computer engineering department at Kansas State University, in 1968.

Ahmed, like dozens of other researchers around the globe, was obsessed with finding the answer to a single question: How do you create a mathematical formula for deciphering which of the ones and zeros that represent a digital image need to be kept and which can be thrown away? The things he’d learned at Honeywell gave him a framework for understanding the elements of the problem and how to attack it. But the majority of the credit for the eventual breakthrough has to go to Ahmed’s steely determination and willingness to take a gamble on himself.

In 1972, he sought grant funding that would let him afford to spend the months between Kansas State’s spring and fall semesters furthering his ideas. He applied for a U.S. National Science Foundation grant, but was denied. Ahmed recalls the moment: “I had a strong intuition that I could find an efficient way to compress digital signal data. But to my surprise, the reviewers said the idea was too simple, so they rejected the proposal.”

Undaunted, Ahmed and his wife worked to make the salary he earned during the nine-month school year last through the summer so he could focus on his research. Money was tight, the couple recalls, but that moment of financial belt-tightening only seemed to heighten Ahmed’s industriousness. They persevered, and Ahmed’s long days and late nights in the lab eventually yielded the desired result.

DCT Compression Comes Together

Ahmed took a technique for turning the array of image-processing data representing an image’s pixels into a waveform, effectively rendering it as a series of waves with oscillating frequencies, and combined it with cosine functions that were already being used to model phenomena such as light waves, sound waves, and electric current. The result was a long string of numbers with values bounded by 1 and –1.

Ahmed realized that by quantizing this string of values and performing a Fourier transformation to break the function into its constituent frequencies, each pixel’s data could be represented in a way that was helpful for deciding what data points must be kept and what could be omitted. Ahmed observed that the lower-frequency waves corresponded to the necessary or “high information” regions of the image, while the higher-frequency waves represented the bits that were less important and could therefore be approximated.

The compressed-image files he and his team produced were one-tenth the size of the originals. What’s more, the process could be reversed, and a shrunken data file would yield an image that was sufficiently similar to the original.
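To make the idea concrete, here is a minimal numerical sketch of an 8-point type-II DCT, a one-dimensional analog of the 8-by-8 block transform that JPEG later adopted. It is an illustration of the energy-compaction principle described above, not Ahmed’s original code; the sample block and the number of coefficients kept are arbitrary choices made for this example.

```python
import numpy as np

def dct_ii(x):
    """Unnormalized type-II DCT: X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k)."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return np.cos(np.pi / N * (n + 0.5) * k) @ x

def idct_ii(X):
    """Inverse of dct_ii: x[n] = X[0]/N + (2/N) * sum_{k>=1} X[k] * cos(pi/N * (n + 0.5) * k)."""
    N = len(X)
    n = np.arange(N)
    k = np.arange(1, N).reshape(-1, 1)
    return X[0] / N + (2.0 / N) * (np.cos(np.pi / N * (n + 0.5) * k).T @ X[1:])

# A smooth 8-sample block, like one row of a low-detail image patch.
block = np.array([52, 54, 57, 60, 64, 67, 70, 72], dtype=float)

coeffs = dct_ii(block)

# "Compress" by keeping only the 3 lowest-frequency coefficients.
compressed = coeffs.copy()
compressed[3:] = 0.0

reconstructed = idct_ii(compressed)
print(np.round(reconstructed, 1))             # reconstruction from just 3 of the 8 coefficients
print(np.max(np.abs(reconstructed - block)))  # maximum absolute error of the approximation
```

Because most of the block’s information lands in the first few coefficients, the rest can be discarded or coarsely quantized with only a modest loss in fidelity, which is the essence of DCT-based compression.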

After another two years of laborious testing, with him and his two collaborators running computer programs written on decks of punch cards, the trio published a paper in IEEE Transactions on Computers titled “Discrete Cosine Transform” in January 1974. Though the paper’s publication did not make it immediately clear, the worldwide search for a reliable method of doing the lossy compression that Claude Shannon had postulated in the 1940s was over.

JPEGs, MPEGs, and More

It wasn’t until 1983 that the International Organization for Standardization (ISO) began working on the technology that would allow photo-quality images to accompany text on the screens of computer terminals. To that end, ISO established the Joint Photographic Experts Group, better known by the ubiquitous acronym JPEG. By the time the first JPEG standard was published in 1992, DCT and advances made by a cadre of other researchers had come to be recognized by the group as basic elements of their method for the digital compression and coding of still images. “This is the beauty of standardization, where several dozen bright minds are behind the success of advances such as JPEG,” says Ebrahimi.

And because video can be described as a succession of still images, Ahmed’s technique was also well suited to making video files smaller. DCT was the compression technique of choice when ISO and the International Electrotechnical Commission (IEC) established the Moving Picture Experts Group, or MPEG, in 1988 for the compression and coding of audio, video, graphics, and genomic data. When the first MPEG standard was published in 1993, the World Wide Web that now includes Google Maps, dating apps, and e-commerce businesses was just four years old.

The ramping up of computer speeds and network bandwidth during that decade—along with the ability to transmit pictures and video via much smaller files—quickly transformed the Internet before anyone knew that Amazon would eventually let readers judge millions of books by their covers.

Having solved the problem that had monopolized his time and attention for several years, Ahmed resumed his career in academia, leaving Kansas State and returning to the University of New Mexico. There he was a presidential professor of electrical and computer engineering until 1989, when he was promoted to chair of the ECE department. Five years after that, he became dean of UNM’s school of engineering. Ahmed held that post for two years until he was named associate provost for research and dean of graduate studies. He stayed in that job until he retired from the university in 2001 and was named professor emeritus.

IEEE Spectrum

Two Companies Plan to Fuel Cargo Ships With Ammonia

By Willie D. Jones | 3 August 2024, 15:00


In July, two companies announced a collaboration aimed at helping to decarbonize maritime fuel technology. The companies, Brooklyn-based Amogy and Osaka-based Yanmar, say they plan to combine their respective areas of expertise to develop power plants for ships that use Amogy’s advanced technology for cracking ammonia to produce hydrogen fuel for Yanmar’s hydrogen internal combustion engines.

This partnership responds directly to the maritime industry’s ambitious goals to significantly reduce greenhouse gas emissions. The International Maritime Organization (IMO) has set stringent targets, calling for a 40 percent reduction in shipping’s carbon emissions from 2008 levels by 2030. But will the companies have a commercially available reformer-engine unit ready in time for shipping fleet owners to launch vessels featuring this technology by the IMO’s deadline? The urgency is there, but so are the technical hurdles that come with new technologies.

Shipping accounts for less than 3 percent of global human-caused CO2 emissions, but decarbonizing the industry would still have a profound impact on global efforts to combat climate change. According to the IMO’s 2020 Fourth Greenhouse Gas Study, shipping produced 1,056 million tonnes of carbon dioxide in 2018.

Amogy and Yanmar did not respond to IEEE Spectrum’s requests for comment about the specifics of how they plan to synergize their areas of focus. But John Prousalidis, a professor at the National Technical University of Athens’s School of Naval Architecture and Marine Engineering, spoke with Spectrum to help put the announcement in context.

“We have a long way to go. I don’t mean to sound like a pessimist, but we have to be very cautious.” —John Prousalidis, National Technical University of Athens

Prousalidis is among a group of researchers pushing for electrification of seaport activities as a means of cutting greenhouse gas emissions and reducing the amount of pollutants such as nitrogen oxides and sulfur oxides spewed into the air by ships at berth and by the cranes, forklifts, and trucks that handle shipping containers in ports. He acknowledged that he hasn’t seen any information specific to Amogy and Yanmar’s technical ideas for using ammonia as ships’ primary fuel source for propulsion, but he has studied maritime sector trends long enough—and helped create standards for the IEEE, the International Electrotechnical Commission (IEC), and the International Organization for Standardization (ISO)—to have a strong sense of how things will likely play out.

“We have a long way to go,” Prousalidis says. “I don’t mean to sound like a pessimist, but we have to be very cautious.” He points to NASA’s Artemis project, which is using hydrogen as its primary fuel for its rockets.

“The planned missile launch for a flight to the moon was repeatedly postponed because of a hydrogen leak that could not be well traced,” Prousalidis says. “If such a problem took place with one spaceship that is the singular focus of dozens of people who are paying attention to the most minor detail, imagine what could happen on any of the 100,000 ships sailing across the world?”

What’s more, he says, bold but ultimately unsubstantiated announcements from companies are fairly common. Amogy and Yanmar aren’t the first companies to suggest tapping into ammonia for cargo ships—the industry is no stranger to plans to adopt the fuel to move massive ships across the world’s oceans.

“A couple of big pioneering companies have announced that they’re going to have ammonia-fueled ship propulsion pretty soon,” Prousalidis says. “Originally, they announced that it would be available at the end of 2022. Then they said the end of 2023. Now they’re saying something about 2025.”

Shipping produced 1,056 million tonnes of carbon dioxide in 2018.

Prousalidis adds, “Everybody keeps claiming that ‘in a couple of years’ we’ll have [these alternatives to diesel for marine propulsion] ready. We periodically get these announcements about engines that will be hydrogen-ready or ammonia-ready. But I’m not sure what will happen during real operation. I’m sure that they performed several running tests in their industrial units. But in most cases, according to Murphy’s Law, failures will take place at the worst moment that we can imagine.”

All that notwithstanding, Prousalidis says he believes these technical hurdles will someday be solved, and engines running on alternative fuels will replace their diesel-fueled counterparts eventually. But he says he sees the rollout likely mirroring the introduction of natural gas. At the point when a few machines capable of running on that type of fuel were ready, the rest of the logistics chain was not. “We need to have all these brand-new pieces of equipment, including piping, that must be able to withstand the toxicity and combustibility of these new fuels. This is a big challenge, but it means that all engineers have work to do.”

Spectrum also reached out to researchers at the U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy with several questions about what Amogy and Yanmar say they are looking to pull off. The DOE’s e-mail response: “Theoretically possible, but we don’t have enough technical details (temperature of coupling engine to cracker, difficulty of manifolding, startup dynamics, controls, etc.) to say for certain and if it is a good idea or not.”

This article was updated on 5 August 2024 to correct global shipping emission data.

IEEE Spectrum

Gladys West: The Hidden Figure Behind GPS

By Willie D. Jones | 30 July 2024, 20:00


Schoolchildren around the world are told that they have the potential to be great, often with the cheery phrase: “The sky’s the limit!”

Gladys West took those words literally.

While working for four decades as a mathematician and computer programmer at the U.S. Naval Proving Ground (now the Naval Surface Warfare Center) in Dahlgren, Va., she prepared the way for a satellite constellation in the sky that became an indispensable part of modern life: the Global Positioning System, or GPS.

The second Black woman to ever work at the proving ground, West led a group of analysts who used satellite sensor data to calculate the shape of the Earth and the orbital routes around it. Her meticulous calculations and programming work established the flight paths now used by GPS satellites, setting the stage for navigation and positioning systems on which the world has come to rely.

For decades, West’s contributions went unacknowledged. But she has begun receiving overdue recognition. In 2018 she was inducted into the U.S. Air Force Space and Missile Pioneers Hall of Fame. In 2021 the International Academy of Digital Arts and Sciences presented her its Webby Lifetime Achievement Award, while the U.K. Royal Academy of Engineering gave her the Prince Philip Medal, the organization’s highest individual honor.

West was presented the 2024 IEEE President’s Award for “mathematical modeling and development of satellite geodesy models that played a pivotal role in the development of the Global Positioning System.” The award is sponsored by IEEE.

How the “hidden figure” overcame barriers

West’s path to becoming a technology professional and an IEEE honoree was an unlikely one. Born in 1930 in Sutherland, Va., she grew up working on her family’s farm. To supplement the family’s income, her mother worked at a tobacco factory and her father was employed by a railroad company.

Physical toil in the hot sun from daybreak until sundown with paltry financial returns, West says, made her determined to do something other than farming.

Every day when she ventured into the fields to sow or harvest crops with her family, her thoughts were on the little red schoolhouse beyond the edge of the farm. She recalls gladly making the nearly 5-kilometer trek from her house, through the woods and over streams, to reach the one-room school.

She knew that postsecondary education was her ticket out of farm life, so throughout her school years she made sure she was a standout student and a model of focus and perseverance.

Her parents couldn’t afford to pay for her college education, but as valedictorian of her high school class, she earned a full-tuition scholarship from the state of Virginia. Money she earned as a babysitter paid for her room and board.

West decided to pursue a degree in mathematics at Virginia State College (now Virginia State University), a historically Black school in Petersburg.

At the time, the field was dominated by men. She earned a bachelor’s degree in the subject in 1952 and became a schoolteacher in Waverly, Va. After two years in the classroom, she returned to Virginia State to pursue a master’s degree in mathematics, which she earned in 1955.

Gladys West at her desk, meticulously crunching numbers manually in the era before computers took over such tasks. Photo: Gladys West

Setting the groundwork for GPS

West began her career at the Naval Proving Ground in early 1956. She was hired as a mathematician, joining a cadre of workers who used linear algebra, calculus, and other methods to manually solve complex problems such as differential equations. Their mathematical wizardry was used to handle trajectory analysis for ships and aircraft as well as other applications.

She was one of four Black employees at the facility, she says, adding that her determination to prove the capability of Black professionals drove her to excel.

As computers were introduced into the Navy’s operations in the 1960s, West became proficient in Fortran IV. The programming language enabled her to use the IBM 7030—the world’s fastest supercomputer at the time—to process data at an unprecedented rate.

Because of her expertise in mathematics and computer science, she was appointed director of projects that extracted valuable insights from satellite data gathered during NASA missions. West and her colleagues used the data to create ever more accurate models of the geoid—the shape of the Earth—factoring in gravitational fields and the planet’s rotation.

One such mission was Seasat, which lasted from June to October 1978. Seasat was launched into orbit to test oceanographic sensors and gain a better understanding of Earth’s seas using the first space-based synthetic aperture radar (SAR) system, which enabled the first remote sensing of the Earth’s oceans.

SAR can acquire high-resolution images at night and can penetrate clouds and rain. Seasat captured many valuable 2D and 3D images before a malfunction ended the mission.

Enough data was collected from Seasat for West’s team to refine existing geodetic models to better account for gravity and magnetic forces. The models were important for precisely mapping the Earth’s topography, determining the orbital routes that would later be used by GPS satellites, as well as documenting the spatial relationships that now let GPS determine exactly where a receiver is.

In 1986 she published the “Data Processing System Specifications for the GEOSAT Satellite Radar Altimeter” technical report. It contained new calculations that could make her geodetic models more accurate. The calculations were made possible by data from the radar altimeter aboard GEOSAT, a Navy satellite that went into orbit in March 1985.

West’s career at Dahlgren lasted 42 years. By the time she retired in 1998, all 24 satellites in the GPS constellation had been launched to help the world keep time and handle navigation. But her role was largely unknown.

A model of perseverance

Neither an early bout of imposter syndrome nor the racial tensions that were an everyday element of her work life during the height of the Civil Rights Movement were able to knock her off course, West says.

In the early 1970s, she decided that her career advancement was not proceeding as smoothly as she thought it should, so she decided to go to graduate school part time for another degree. She considered pursuing a doctorate in mathematics but realized, “I already had all the technical credentials I would ever need for my work for the Navy.” Instead, to solidify her skills as a manager, she earned a master’s degree in 1973 in public administration from the University of Oklahoma in Norman.

After retiring from the Navy, she earned a doctorate in public administration in 2000 from Virginia Tech. Although she was recovering from a stroke at the time that affected her physical abilities, she still had the same drive to pursue an education that had once kept her focused on a little red schoolhouse.

A formidable legacy

West’s contributions have had a lasting impact on the fields of mathematics, geodesy, and computer science. Her pioneering efforts in a predominantly male and racially segregated environment set a precedent for future generations of female and minority scientists.

West says her life and career are testaments to the power of perseverance, skill, and dedication—or “stick-to-it-iveness,” to use her parlance. Her story continues to inspire people who strive to push boundaries. She has shown that the sky is indeed not the limit but just the beginning.

IEEE Spectrum

Tsunenobu Kimoto Leads the Charge in Power Devices

By Willie D. Jones | 23 June 2024, 20:00


Tsunenobu Kimoto, a professor of electronic science and engineering at Kyoto University, literally wrote the book on silicon carbide technology. Fundamentals of Silicon Carbide Technology, published in 2014, covers properties of SiC materials, processing technology, theory, and analysis of practical devices.

Kimoto, whose silicon carbide research has led to better fabrication techniques, improved the quality of wafers and reduced their defects. His innovations, which made silicon carbide semiconductor devices more efficient and more reliable and thus helped make them commercially viable, have had a significant impact on modern technology.

Tsunenobu Kimoto

Employer: Kyoto University
Title: Professor of electronic science and engineering
Member grade: Fellow
Alma mater: Kyoto University

For his contributions to silicon carbide material and power devices, the IEEE Fellow was honored with this year’s IEEE Andrew S. Grove Award, sponsored by the IEEE Electron Devices Society.

Silicon carbide’s humble beginnings

Decades before a Tesla Model 3 rolled off the assembly line with an SiC inverter, a small cadre of researchers, including Kimoto, foresaw the promise of silicon carbide technology. In obscurity they studied it and refined the techniques for fabricating power transistors with characteristics superior to those of the silicon devices then in mainstream use.

Today MOSFETs and other silicon carbide transistors greatly reduce on-state and switching losses in power-conversion systems, such as the inverters in an electric vehicle used to convert the battery’s direct current to the alternating current that drives the motor. Lower switching losses make the vehicles more efficient, reducing the size and weight of their power electronics and improving power-train performance. Silicon carbide–based chargers, which convert alternating current to direct current, provide similar improvements in efficiency.

But those tools didn’t just appear. “We had to first develop basic techniques such as how to dope the material to make n-type and p-type semiconductor crystals,” Kimoto says. N-type crystals’ atomic structures are arranged so that electrons, with their negative charges, move freely through the material’s lattice. Conversely, the atomic arrangement of p-type crystals contains positively charged holes.

Kimoto’s interest in silicon carbide began when he was working on his Ph.D. at Kyoto University in 1990.

“At that time, few people were working on silicon carbide devices,” he says. “And for those who were, the main target for silicon carbide was blue LED.

“There was hardly any interest in silicon carbide power devices, like MOSFETs and Schottky barrier diodes.”

Kimoto began by studying how SiC might be used as the basis of a blue LED. But then he read B. Jayant Baliga’s 1989 paper “Power Semiconductor Device Figure of Merit for High-Frequency Applications” in IEEE Electron Device Letters, and he attended a presentation by Baliga, the 2014 IEEE Medal of Honor recipient, on the topic.

“I was convinced that silicon carbide was very promising for power devices,” Kimoto says. “The problem was that we had no wafers and no substrate material,” without which it was impossible to fabricate the devices commercially.

In order to get silicon carbide power devices, “researchers like myself had to develop basic technology such as how to dope the material to make p-type and n-type crystals,” he says. “There was also the matter of forming high-quality oxides on silicon carbide.” Silicon dioxide is used in a MOSFET to isolate the gate and prevent electrons from flowing into it.

The first challenge Kimoto tackled was producing pure silicon carbide crystals. He decided to start with carborundum, a form of silicon carbide commonly used as an abrasive. Kimoto took some factory waste materials—small crystals of silicon carbide measuring roughly 5 millimeters by 8 mm—and polished them.

He found he had highly doped n-type crystals. But he realized having only highly doped n-type SiC would be of little use in power applications unless he also could produce lightly doped (high purity) n-type and p-type SiC.

Connecting the two material types creates a depletion region straddling the junction where the n-type and p-type sides meet. In this region, the free, mobile charges are lost because of diffusion and recombination with their opposite charges, and an electric field is established that can be exploited to control the flow of charges across the boundary.
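To put a rough number on that junction behavior, the short sketch below estimates the built-in potential of an abrupt p-n junction using the standard textbook relation. The doping levels and the intrinsic carrier concentrations are illustrative, assumed values, not figures from the article.

```python
import math

THERMAL_VOLTAGE_V = 0.02585  # kT/q at roughly 300 K
N_I_4H_SIC_CM3 = 1e-8        # approximate intrinsic carrier concentration of 4H-SiC
N_I_SILICON_CM3 = 1e10       # approximate intrinsic carrier concentration of silicon

def built_in_potential(n_acceptors_cm3, n_donors_cm3, n_intrinsic_cm3):
    """Built-in potential of an abrupt p-n junction: V_bi = (kT/q) * ln(N_A * N_D / n_i^2)."""
    return THERMAL_VOLTAGE_V * math.log(n_acceptors_cm3 * n_donors_cm3 / n_intrinsic_cm3 ** 2)

# Assumed doping: heavily doped p-side, lightly doped n-side drift region.
N_A, N_D = 1e18, 1e16
print(f"4H-SiC junction:  ~{built_in_potential(N_A, N_D, N_I_4H_SIC_CM3):.1f} V")
print(f"Silicon junction: ~{built_in_potential(N_A, N_D, N_I_SILICON_CM3):.1f} V")
```

With these assumed numbers the silicon carbide junction’s built-in potential comes out near 3 volts, against well under 1 volt for silicon, a direct consequence of the material’s wide bandgap.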

“Silicon carbide is a family with many, many brothers.”

By using an established technique, chemical vapor deposition, Kimoto was able to grow high-purity silicon carbide. The technique grows SiC as a layer on a substrate by introducing gases into a reaction chamber.

At the time, silicon carbide, gallium nitride, and zinc selenide were all contenders in the race to produce a practical blue LED. Silicon carbide, Kimoto says, had only one advantage: It was relatively easy to make a silicon carbide p-n junction. Creating p-n junctions was still difficult to do with the other two options.

By the early 1990s, it was starting to become clear that SiC wasn’t going to win the blue-LED sweepstakes, however. The inescapable reality of the laws of physics trumped the SiC researchers’ belief that they could somehow overcome the material’s inherent properties. SiC has what is known as an indirect band gap structure, so when charge carriers are injected, the probability of the charges recombining and emitting photons is low, leading to poor efficiency as a light source.

While the blue-LED quest was making headlines, many low-profile advances were being made using SiC for power devices. By 1993, a team led by Kimoto and Hiroyuki Matsunami demonstrated the first 1,100-volt silicon carbide Schottky diodes, which they described in a paper in IEEE Electron Device Letters. The diodes produced by the team and others yielded fast switching that was not possible with silicon diodes.

“With silicon p-n diodes,” Kimoto says, “we need about a half microsecond for switching. But with a silicon carbide, it takes only 10 nanoseconds.”

The ability to switch devices on and off rapidly makes power supplies and inverters more efficient because they waste less energy as heat. Higher efficiency and less heat also permit designs that are smaller and lighter. That’s a big deal for electric vehicles, where less weight means less energy consumption.
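A rough estimate shows why those switching times matter so much. The sketch below uses the common triangular voltage-current overlap approximation (energy lost per transition ≈ ½·V·I·t) at an assumed 400-volt, 100-ampere, 50-kilohertz operating point; none of those operating figures come from the article, and real losses depend on the specific device and gate drive.

```python
# Rough switching-loss comparison for an assumed hard-switched operating point.
BUS_VOLTAGE_V = 400.0
LOAD_CURRENT_A = 100.0
SWITCHING_FREQ_HZ = 50e3

def switching_loss_watts(transition_time_s: float) -> float:
    # Triangular overlap approximation: E ~ 0.5 * V * I * t per transition,
    # with one turn-on and one turn-off transition per switching cycle.
    energy_per_cycle_j = 2 * 0.5 * BUS_VOLTAGE_V * LOAD_CURRENT_A * transition_time_s
    return energy_per_cycle_j * SWITCHING_FREQ_HZ

print(f"~0.5 us transitions (silicon figure quoted above): {switching_loss_watts(0.5e-6):.0f} W")
print(f"~10 ns transitions (SiC figure quoted above):      {switching_loss_watts(10e-9):.0f} W")
```

Even as a toy calculation, the roughly fifty-fold gap in wasted power illustrates why faster edges allow smaller heat sinks, higher switching frequencies, and lighter converters.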

Kimoto’s second breakthrough was identifying which form of the silicon carbide material would be most useful for electronics applications.

“Silicon carbide is a family with many, many brothers,” Kimoto says, noting that more than 100 variants with different silicon-carbon atomic structures exist.

The 6H-type silicon carbide was the default standard phase used by researchers targeting blue LEDs, but Kimoto discovered that the 4H-type has much better properties for power devices, including high electron mobility. Now all silicon carbide power devices and wafer products are made with the 4H-type.

Silicon carbide power devices in electric vehicles can improve energy efficiency by about 10 percent compared with silicon, Kimoto says. In electric trains, he says, the power required to propel the cars can be cut by 30 percent compared with those using silicon-based power devices.

Challenges remain, he acknowledges. Although silicon carbide power transistors are used in Teslas, other EVs, and electric trains, their performance is still far from ideal because of defects present at the silicon dioxide–SiC interface, he says. The interface defects lower the performance and reliability of MOS-based transistors, so Kimoto and others are working to reduce the defects.

A career sparked by semiconductors

Kimoto grew up an only child in Wakayama, Japan, near Osaka. His parents insisted he study medicine, and they expected him to live with them as an adult. His father was a garment factory worker; his mother was a homemaker. His move to Kyoto to study engineering “disappointed them on both counts,” he says.

His interest in engineering was sparked, he recalls, when he was in junior high school, and Japan and the United States were competing for semiconductor industry supremacy.

At Kyoto University, he earned bachelor’s and master’s degrees in electrical engineering, in 1986 and 1988. After graduating, he took a job at Sumitomo Electric Industries’ R&D center in Itami. He worked with silicon-based materials there but wasn’t satisfied with the center’s research opportunities.

He returned to Kyoto University in 1990 to pursue his doctorate. While studying power electronics and high-temperature devices, he also gained an understanding of material defects, breakdown, mobility, and luminescence.

“My experience working at the company was very valuable, but I didn’t want to go back to industry again,” he says. By the time he earned his doctorate in 1996, the university had hired him as a research associate.

He has been there ever since, turning out innovations that have helped make silicon carbide an indispensable part of modern life.

Growing the silicon carbide community at IEEE

Kimoto joined IEEE in the late 1990s. An active volunteer, he has helped grow the worldwide silicon carbide community.

He is an editor of IEEE Transactions on Electron Devices, and he has served on program committees for conferences including the International Symposium on Power Semiconductor Devices and ICs and the IEEE Workshop on Wide Bandgap Power Devices and Applications.

“Now when we hold a silicon carbide conference, more than 1,000 people gather,” he says. “At IEEE conferences like the International Electron Devices Meeting or ISPSD, we always see several well-attended sessions on silicon carbide power devices because more IEEE members pay attention to this field now.”

IEEE Spectrum

For EVs, Semi-Solid-State Batteries Offer a Step Forward

By Willie D. Jones | 19 June 2024, 18:00


Earlier this month, China announced that it is pouring 6 billion yuan (about US $826 million) into a fund meant to spur the development of solid-state batteries by the nation’s leading battery manufacturers. Solid-state batteries use electrolytes of either glass, ceramic, or solid polymer material instead of the liquid lithium salts that are in the vast majority of today’s electric vehicle (EV) batteries. They’re greatly anticipated because they will have three or four times the energy density of batteries with liquid electrolytes, offer more charge-discharge cycles over their lifetimes, and be far less susceptible to the thermal runaway reaction that occasionally causes lithium batteries to catch fire.

But China’s investment in the future of batteries won’t likely speed up the timetable for mass production and use in production vehicles. As IEEE Spectrum pointed out in January, it’s not realistic to look for solid-state batteries in production vehicles anytime soon. Experts Spectrum consulted at the time “noted a pointed skepticism toward the technical merits of these announcements. None could isolate anything on the horizon indicating that solid-state technology can escape the engineering and ‘production hell’ that lies ahead.”

“To state at this point that any one battery and any one country’s investments in battery R&D will dominate in the future is simply incorrect.” —Steve W. Martin, Iowa State University

Reaching scale production of solid-state batteries for EVs will first require validating existing solid-state battery technologies—now being used for other, less demanding applications—in terms of performance, life-span, and relative cost for vehicle propulsion. Researchers must still determine how those batteries take and hold a charge and deliver power as they age. They’ll also need to provide proof that a glass or ceramic battery can stand up to the jarring that comes with driving on bumpy roads and certify that it can withstand the occasional fender bender.

Here Come Semi-Solid-State Batteries

Meanwhile, as the world waits for solid electrolytes to shove liquids aside, Chinese EV manufacturer Nio and battery maker WeLion New Energy Technology Co. have partnered to stake a claim on the market for a third option that splits the difference: semi-solid-state batteries, with gel electrolytes.

CarNewsChina.com reported in April that the WeLion cells have an energy density of 360 watt-hours per kilogram. Fully packaged, the battery’s density rating is 260 Wh/kg. That’s still a significant improvement over lithium iron phosphate batteries, whose density tops out at 160 Wh/kg. In tests conducted last month with Nio’s EVs in Shanghai, Chengdu, and several other cities, the WeLion battery packs delivered more than 1,000 kilometers of driving range on a single charge. Nio says it plans to roll out the new battery type across its vehicle lineup beginning this month.
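To see what those energy-density figures mean at the vehicle level, here is a quick back-of-the-envelope comparison. The pack mass and the energy consumption per kilometer are assumptions chosen for illustration; only the watt-hours-per-kilogram figures come from the reporting above.

```python
# Back-of-the-envelope range comparison for a fixed pack mass.
ASSUMED_PACK_MASS_KG = 500.0            # assumption, not from the article
ASSUMED_CONSUMPTION_KWH_PER_KM = 0.13   # assumption, roughly an efficient sedan

packs = {
    "Lithium iron phosphate (~160 Wh/kg)": 160,
    "WeLion semi-solid-state (260 Wh/kg)": 260,
}

for name, wh_per_kg in packs.items():
    energy_kwh = ASSUMED_PACK_MASS_KG * wh_per_kg / 1000.0
    range_km = energy_kwh / ASSUMED_CONSUMPTION_KWH_PER_KM
    print(f"{name}: {energy_kwh:.0f} kWh, roughly {range_km:.0f} km")
```

Under these assumptions the same 500-kilogram pack goes from about 80 kilowatt-hours to about 130, which is the kind of jump that makes the 1,000-kilometer test results plausible.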

But the Beijing government’s largesse and the Nio-WeLion partnership’s attempt to be first to get semi-solid-state batteries into production vehicles shouldn’t be a temptation to call the EV propulsion game prematurely in China’s favor.

So says Steve W. Martin, a professor of materials science and engineering at Iowa State University, in Ames. Martin, whose research areas include glassy solid electrolytes for solid-state lithium batteries and high-capacity reversible anodes for lithium batteries, believes that solid-state batteries are the future and that hybrid semi-solid batteries will likely be a transition between liquid and solid-state batteries. However, he says, “to state at this point that any one battery and any one country’s investments in battery R&D will dominate in the future is simply incorrect.” Martin explains that “there are too many different kinds of solid-state batteries being developed right now and no one of these has a clear technological lead.”

The Advantages of Semi-Solid-State Batteries

The main innovation that gives semi-solid-state batteries an advantage over conventional batteries is the semisolid electrolyte from which they get their name. The gel electrolyte contains ionic conductors such as lithium salts just as liquid electrolytes do, but the way they are suspended in the gel matrix supports much more efficient ion conductivity. Enhanced transport of ions from one electrode to the other supports a correspondingly larger current through the external circuit that completes the loop. This matters most during charging, which can proceed more rapidly than in a battery with a liquid electrolyte. The gel’s structure also resists the formation of dendrites, the needlelike structures that can form on the anode during charging and cause short circuits. Additionally, gels are less volatile than liquid electrolytes and are therefore less prone to catching fire.

Though semi-solid-state batteries won’t reach the energy densities and life-spans that are expected from those with solid electrolytes, they’re at an advantage in the short term because they can be made on conventional lithium-ion battery production lines. Just as important, they have been tested and are available now rather than at some as yet unknown date.

Semi-solid-state batteries can be made on conventional lithium-ion battery production lines.

Several companies besides WeLion are actively developing semi-solid-state batteries. China’s prominent battery manufacturers, including CATL, BYD, and the state-owned automakers FAW Group and SAIC Group are, like WeLion, beneficiaries of Beijing’s plans to advance next-generation battery technology domestically. Separately, the startup Farasis Energy, founded in Ganzhou, China, in 2009, is collaborating with Mercedes-Benz to commercialize advanced batteries.

The Road Forward to Solid-State Batteries

U.S. startup QuantumScape says the solid-state lithium metal batteries it’s developing will offer energy density of around 400 Wh/kg. The company notes that its cells eliminate the charging bottleneck that occurs in conventional lithium-ion cells, where lithium must diffuse into the carbon particles. QuantumScape’s advanced batteries will therefore allow fast charging from 10 to 80 percent in 15 minutes. That’s a ways off, but the Silicon Valley–based company announced in March that it had begun shipping its prototype Alpha-2 semi-solid-state cells to manufacturers for testing.

Toyota is among a group of companies not looking to hedge their bets. The automaker, ignoring naysayers, aims to commercialize solid-state batteries by 2027 that it says will give an EV a range of 1,200 km on a single charge and allow 10-minute fast charging. It attributes its optimism to breakthroughs addressing durability issues. And for companies like Solid Power, it’s also solid-state or bust. Solid Power, which aims to commercialize a lithium battery with a proprietary sulfide-based solid electrolyte, has partnered with major automakers Ford and BMW. ProLogium Technology, which is also forging ahead with preparations for a solid-state battery rollout, claims that it will start delivering batteries this year that combine a ceramic oxide electrolyte with a lithium-free soft cathode (for energy density exceeding 500 Wh/kg). The company, which has teamed up with Mercedes-Benz, demonstrated confidence in its timetable by opening the world’s first giga-level solid-state lithium ceramic battery factory earlier this year in Taoyuan, Taiwan.

IEEE Spectrum

High-Speed Rail Finally Coming to the U.S.

By Willie D. Jones | 16 May 2024, 15:11


In late April, the Miami-based rail company Brightline Trains broke ground on a project that the company promises will give the United States its first dedicated, high-speed passenger rail service. The 350-kilometer (218-mile) corridor, which the company calls Brightline West, will connect Las Vegas to the suburbs of Los Angeles. Brightline says it hopes to complete the project in time for the 2028 Summer Olympic Games, which will take place in Los Angeles.

Brightline has chosen Siemens American Pioneer 220 engines that will run at speeds averaging 165 kilometers per hour, with an advertised top speed of 320 km/h. That average speed still falls short of the Eurostar network connecting London, Paris, Brussels, and Amsterdam (300 km/h), Germany’s Intercity-Express 3 service (330 km/h), and the world’s fastest train service, China’s Beijing-to-Shanghai regional G trains (350 km/h).

There are currently only two rail lines in the U.S. that ever reach the 200 km/h mark, which is the unofficial minimum speed at which a train can be considered high-speed rail. One is operated by Brightline, the company that is about to construct the L.A.-to-Las Vegas Brightline West line: its Miami-Orlando service averages just 111 km/h. The other is Amtrak’s Acela line between Boston and Washington, D.C., which qualifies as high-speed rail for just 80 km of its 735-km route. That’s a consequence of the rail status quo in the United States, in which slower freight trains typically have right of way on shared rail infrastructure.

As Vaclav Smil, professor emeritus at the University of Manitoba, noted in IEEE Spectrum in 2018, there has long been hope that the United States would catch up with Europe, China, and Japan, where high-speed regional rail travel has long been a regular fixture. “In a rational world, one that valued convenience, time, low energy intensity and low carbon conversions, the high-speed electric train would always be the first choice for [intercity travel],” Smil wrote at the time. And yet, in the United States, funding and regulatory approval for such projects have been in short supply.

Now, Brightline West, as well as a few preexisting rail projects that are at some stage of development, such as the California High-Speed Rail Network and the Texas Central Line, could be a bellwether for an attitude shift that could—belatedly—put trains closer to equal footing with cars and planes for travelers in the continental United States.

The U.S. government, like many national governments, has pledged to reduce greenhouse gas emissions. Because that generally requires decarbonizing transportation and improving energy efficiency, trains, which can run on electricity generated from fossil-fuel as well as non-fossil-fuel sources, are getting a big push. As Smil noted in 2018, trains use a fraction of a megajoule of energy per passenger-kilometer, while a lone driver in even one of the most efficient gasoline-powered cars will use roughly an order of magnitude more energy per passenger-kilometer.

Brightline and Siemens did not respond to inquiries by Spectrum seeking to find out what innovations they plan to introduce that would make the L.A.-to-Las Vegas passenger line run faster or perhaps use less energy than its Asian and European counterparts. But Karen E. Philbrick, executive director of the Mineta Transportation Institute at San Jose State University, in California, says that’s beside the point. She notes that the United States, having focused on cars for the better part of the past century, already missed the period when major innovations were being made in high-speed rail. “What’s important about Brightline West and, say, the California High-speed Rail project, is not how innovative they are, but the fact that they’re happening at all. I am thrilled to see the U.S. catching up.”

Maybe Brightline or other groups seeking to get Americans off the roadways and onto railways will be able to seize the moment and create high-speed rail lines connecting other intraregional population centers in the United States. With enough of those pieces in place, it might someday be possible to ride the rails from California to New York in a single day, in the same way train passengers in China can get from Beijing to Shanghai between breakfast and lunch.

IEEE Spectrum

Why Haven’t Hoverbikes Taken Off?

By Willie D. Jones | 25 April 2024, 17:10


Ever since Return of the Jedi premiered in 1983, people have been imagining the day when they, like the film’s protagonist Luke Skywalker, would get to ride speeder bikes that zip across the landscape while hovering just a few meters above the ground. In the intervening years, there have been numerous claims made by companies that they’ve figured out how to make a real-world product that mimics movie magic. Their demos and PR campaigns have continued to whet the public’s appetite for hoverbikes, but there are some solid reasons why the nascent hoverbike industry has yet to get airborne.

“It’s gonna happen, but I don’t know if it’ll happen in your lifetime and mine,” says Ronald Barrett-Gonzalez, an aerospace-engineering professor at the University of Kansas. “With the current approaches, I think it’s just going to be a while.”

Barrett-Gonzalez was the advisor for a group of University of Kansas aerospace-engineering grad students who participated in the GoFly competition, sponsored by Boeing. The challenge—in which 3,800 teams from 100 countries participated—was to “design and build a safe, quiet, ultra-compact, near-VTOL personal flying device capable of flying 20 miles [32 kilometers] while carrying a single person.”

The eventual grand prize winner was to have been awarded US $1 million, and $250,000 prizes were supposed to have gone to the quietest and the smallest compliant GoFly entries when the challenge concluded in September 2023. But the scheduled final fly-off between the teams whose personal aircraft were selected as the best-built entries in the second of the competition’s three phases was canceled because windy conditions made it unsafe for the machines to take to the skies.

Solving the Physics of Hoverbikes

“Helicopters, for a long time, have been built with relatively small engines for their size,” says Barrett-Gonzalez. “The way that such large vehicles can be lifted by small engines is that they have large rotor diameters. The volume of the column of air that they stand on is great. If you look at hoverbikes, the diameters of their rotors are much smaller. And physics says that you need more power per unit weight to lift an aircraft like that.”

To get the efficiency that comes along with a helicopter’s extensive rotor sweep, hoverbike designers will likely have to give up the thought of these machines touching down in parking spots meant for cars. Alternatively, they will have to wait for new generations of engines and electric motors with greater power density to appear, along with batteries capable of delivering a lot more power and storing a lot more energy than those available today.
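Basic momentum (actuator-disk) theory makes the point concrete: the ideal power needed to hover scales as thrust to the 3/2 power divided by the square root of total rotor disk area, so shrinking the rotors drives the power per kilogram up sharply. The masses and rotor sizes in the sketch below are assumed, illustrative figures, not numbers from any of the GoFly teams.

```python
import math

AIR_DENSITY = 1.225  # kg/m^3 at sea level
G = 9.81             # m/s^2

def ideal_hover_power_kw(mass_kg: float, rotor_diameter_m: float, n_rotors: int = 1) -> float:
    """Momentum-theory lower bound on hover power: P = T^1.5 / sqrt(2 * rho * A).

    Real rotors need considerably more power than this ideal figure; the point
    here is how the requirement scales with total disk area.
    """
    thrust_n = mass_kg * G
    disk_area_m2 = n_rotors * math.pi * (rotor_diameter_m / 2) ** 2
    return thrust_n ** 1.5 / math.sqrt(2 * AIR_DENSITY * disk_area_m2) / 1000.0

# Assumed, illustrative vehicles:
heli = ideal_hover_power_kw(mass_kg=1200, rotor_diameter_m=10.0)            # light helicopter
bike = ideal_hover_power_kw(mass_kg=300, rotor_diameter_m=1.2, n_rotors=4)  # hoverbike-sized multirotor

print(f"Light helicopter: {heli:.0f} kW total, {heli / 1200 * 1000:.0f} W/kg")
print(f"Hoverbike:        {bike:.0f} kW total, {bike / 300 * 1000:.0f} W/kg")
```

Despite weighing a quarter as much in this example, the small-rotor machine needs roughly twice the power per kilogram just to hover, which is the scaling penalty Barrett-Gonzalez describes.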

Assessing Hoverbikes’ Risks

Safety concerns are just as big a hurdle to making hoverbikes available for sale. The University of Kansas team’s GoFly entry, called Mamba, had been one of 10 Phase I winners for best design. The six-rotored hexcopter, which emphasized safety, certifiability, and performance, featured shrouded rotors and a tilting stabilizer surface.

The University of Kansas’ GoFly entry is a red-and-black hexcopter with two large and two small horizontal rotors, and two vertically placed rotors in the back. Image: University of Kansas

But Mamba didn’t make it through Phase II, the build stage. Barrett-Gonzalez explains that “the kinds of safety criteria that we hold to are enforced by rules and regulations, such as the U.S. government’s FAR 23 airworthiness standards that govern small airplanes and FAR 27 standards for smaller helicopters.” That standard of safety, he says, is meant to ensure that the probability of a fatal event is no greater than one in 1 million flight hours. “For larger aircraft like the big Boeing commercial jets and larger helicopters, the standard is even more stringent. It’s one in 1 billion flight hours.”

That focus on safety doesn’t come without a cost, Barrett-Gonzalez adds. “The current thing that is keeping an aircraft like the Mamba from going from the drawing board to reality is that it’s costly. We could do what a Star Wars podracer can do, but that’s a $3.2 million machine. And then, only maybe Elon Musk and half a dozen other people could afford it.”

Several would-be hoverbike manufacturers have enticed potential buyers with price points more than an order of magnitude lower than Barrett-Gonzalez’s estimate. But Barrett-Gonzalez points out that they don’t include the combination of safety features built into the Mamba design. The Mamba has a roll cage, and the motors are cross-shafted. “So if you lose one motor you don’t come spiraling out of the sky,” he says. What’s more, the Mamba’s rotors are arranged according to a patented design that the team says makes it impossible for a rider or bystander to come in contact with the machine’s spinning blades.

For anyone who might argue that the Mamba project imploded because of overdesign, Barrett-Gonzalez recalls the Mamba team having extensive briefings with the director of the U.S. Federal Aviation Administration’s small airplanes directorate. “And he put it plainly: ‘The FAA will not certify a human eggbeater,’” says Barrett-Gonzalez.

“We could do what a Star Wars podracer can do, but that’s a $3.2 million machine. And then, only maybe Elon Musk and half a dozen other people could afford it.” —Ronald Barrett-Gonzalez, University of Kansas

Hover (a hoverbike hopeful formerly known as Hoversurf) recently moved its headquarters from California back to Russia, and Joby Aviation decided to start its electric vertical-takeoff-and-landing (eVTOL) air taxi business in Dubai. These moves might not necessarily indicate their need to generate revenue before refinements to their designs will give them the ability to meet U.S. safety standards. But that explanation is as plausible as any. “Neither Russia nor Dubai have mature airborne safety standards that cover vehicles of this type,” says Barrett-Gonzalez.

Where Are They Now?

In 2014, IEEE Spectrum reported on Aerofex’s pledge to deliver a commercially available hoverbike, the Aero-X, by 2017. Spoiler alert: It didn’t happen. Though Aerofex is still in business, the company retired the Aero-X before the aircraft’s anticipated go-to-market date. The company proudly recalls the progress it made during Aero-X’s development, including kinesthetic control, which lets the pilot stabilize and control a personal aircraft by shifting their weight pretty much the same way one does when riding a bike. But 16 years after its 2008 maiden flight, the $85,000 Aero-X is still not available for sale.

Seven years after its initial go-to-market date, Aerofex’s Aero-X is still not available for sale. Photo: Aerofex

Meanwhile, Hover’s series of Scorpion hoverbike designs have gotten plenty of press attention. But as of this writing, the company’s flying motorcycles are still in the preorder stage, with no indication regarding when models like the $150,000 S-3 will be delivered to people who put down deposits.

And Tetra Aviation, the Tokyo startup whose Mk-5 single-seat eVTOL vehicle won the $100,000 Pratt & Whitney Disruptor Award from the GoFly judges, is also stuck in the development phase. Tetra said it planned to offer the Mk-5, with its 32 vertical lift rotors distributed across long, thin, aluminum-and-carbon-fiber wings and a single pusher prop at the rear, for $320,000, beginning in 2022. But the 8.5-meter-wide, 6.1-meter-long machine, which is supposed to travel 160 kilometers on a single charge at speeds up to 160 kilometers per hour, is still in the preorder stage.

According to statements made by the companies seeking to market hoverbikes, the vehicles have been two or three years away for more than a decade. Their market predictions are starting to sound a lot like the old saw about nuclear fusion, which has been “just 20 years away” for nearly 50 years.


50 Years Later, This Apollo-Era Antenna Still Talks to Voyager 2

18 April 2024 at 20:00


For more than 50 years, Deep Space Station 43 has been an invaluable tool for space probes as they explore our solar system and push into the beyond. The DSS-43 radio antenna, located at the Canberra Deep Space Communication Complex, near Canberra, Australia, keeps open the line of communication between humans and probes during NASA missions.

Today more than 40 percent of all data retrieved by celestial explorers, including Voyagers, New Horizons, and the Mars Curiosity rover, comes through DSS-43.

“As Australia’s largest antenna, DSS-43 has provided two-way communication with dozens of robotic spacecraft,” IEEE President-Elect Kathleen Kramer said during a ceremony where the antenna was recognized as an IEEE Milestone. It has supported missions, Kramer noted, “from the Apollo program and NASA’s Mars exploration rovers such as Spirit and Opportunity to the Voyagers’ grand tour of the solar system.

“In fact,” she said, “it is the only antenna remaining on Earth capable of communicating with Voyager 2.”

Why NASA needed DSS-43

Maintaining two-way contact with spacecraft hurtling billions of kilometers away across the solar system is no mean feat. Researchers at NASA’s Jet Propulsion Laboratory, in Pasadena, Calif., knew that communication with distant space probes would require a dish antenna with unprecedented accuracy. In 1964 they built DSS-42—DSS-43’s predecessor—to support NASA’s Mariner 4 spacecraft as it performed the first-ever successful flyby of Mars in July 1965. The antenna had a 26-meter-diameter dish. Along with two other antennas at JPL and in Spain, DSS-42 obtained the first close-up images of Mars. DSS-42 was retired in 2000.

NASA engineers predicted that to carry out missions beyond Mars, the space agency needed more sensitive antennas. So in 1969 they began work on DSS-43, which has a 64-meter-diameter dish.

DSS-43 was brought online in December 1972—just in time to receive video and audio transmissions sent by Apollo 17 from the surface of the moon. It had greater reach and sensitivity than DSS-42 even after 42’s dish was upgraded in the early 1980s.

The gap between the two antennas’ capabilities widened in 1987, when DSS-43 was equipped with a 70-meter dish in anticipation of Voyager 2’s 1989 encounter with the planet Neptune.

DSS-43 has been indispensable in maintaining contact with the deep-space probe ever since.

The dish’s size isn’t its only remarkable feature. Its manufacturer took great pains to ensure that the surface had no bumps or rough spots: the smoother the dish, the better it focuses incident waves onto the signal detector, which yields a higher signal-to-noise ratio.

DSS-43 boasts a pointing accuracy of 0.005 degrees (18 arc seconds)—which is important for ensuring that it is pointed directly at the receiver on a distant spacecraft. Voyager 2 broadcasts using a 23-watt radio. But by the time the signals traverse the multibillion-kilometer distance from the heliopause to Earth, their power has faded to a level 20 billion times weaker than what is needed to run a digital watch. Capturing every bit of the incident signals is crucial to gathering useful information from the transmissions.

The antenna has a transmitter capable of 400 kilowatts, with a beam width of 0.0038 degrees. Without the 1987 upgrade, signals sent from DSS-43 to a spacecraft venturing outside the solar system would likely never reach their target.
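
To get a sense of just how faint that two-way link is, here is a rough back-of-the-envelope sketch in Python. The 23-watt transmitter, the 70-meter dish, the 0.0038-degree uplink beam, and the roughly 20-billion-kilometer distance are the figures quoted above; the X-band frequency, Voyager 2’s 3.7-meter high-gain antenna, and the aperture efficiencies are assumptions added purely for illustration.

import math

D_VOYAGER_M = 20e9 * 1e3            # ~20 billion km, in meters
TX_POWER_W = 23.0                   # Voyager 2's transmitter power
FREQ_HZ = 8.4e9                     # assumed X-band downlink frequency
WAVELENGTH_M = 3.0e8 / FREQ_HZ
# Assumed spacecraft dish: 3.7 m diameter, 60 percent aperture efficiency.
voyager_gain = 0.6 * (math.pi * 3.7 / WAVELENGTH_M) ** 2
# Power flux density arriving at Earth, in watts per square meter.
flux = TX_POWER_W * voyager_gain / (4 * math.pi * D_VOYAGER_M ** 2)
# Power the 70 m dish actually collects (70 percent efficiency assumed).
collecting_area = 0.7 * math.pi * (70 / 2) ** 2
print(f"Flux at Earth:   {flux:.2e} W/m^2")
print(f"Power collected: {flux * collecting_area:.2e} W")   # under an attowatt
# Width of DSS-43's 0.0038-degree uplink beam by the time it reaches Voyager 2.
footprint_km = math.radians(0.0038) * D_VOYAGER_M / 1e3
print(f"Uplink beam spans roughly {footprint_km:,.0f} km at the spacecraft")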

NASA’s Deep Space Network

The Canberra Deep Space Communication Complex, where DSS-43 resides, is one of three such tracking stations operated by JPL. The other two are the Goldstone Deep Space Communications Complex near Barstow, Calif., home to the comparable DSS-14 antenna, and the Madrid Deep Space Communications Complex in Robledo de Chavela, Spain, home to DSS-63. Together, the facilities make up the Deep Space Network, which is the most sensitive scientific telecommunications system on the planet, according to NASA. At any given time, the network is tracking dozens of spacecraft carrying out scientific missions. The three facilities are spaced about 120 degrees of longitude apart, ensuring that as Earth rotates, at least one of the antennas has a line of sight to an object being tracked, provided the object lies close to the plane of the solar system.

But DSS-43 is the only member of the trio that can maintain contact with Voyager 2. Ever since its flyby of Neptune’s moon Triton in 1989, Voyager 2 has been on a trajectory below the plane of the planets, so that it no longer has a line of sight with any radio antennas in the Earth’s Northern Hemisphere.

To ensure that DSS-43 can still place the longest of long-distance calls, the antenna underwent a round of updates in 2020. A new X-band cone was installed. DSS-43 transmits radio signals in the X (8 to 12 gigahertz) and S (2 to 4 GHz) bands; it can receive signals in the X, S, L (1 to 2 GHz), and K (12 to 40 GHz) bands. The dish’s pointing accuracy also was tested and recertified.

Once the updates were completed, test commands were sent to Voyager 2. After about 37 hours, DSS-43 received a response from the space probe confirming that it had received the call and executed the test commands without issue.

DSS-43 is still relaying signals between Earth and Voyager 2, which passed the heliopause in 2018 and is now some 20 billion km from Earth.
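
That distance also explains the 37-hour wait for Voyager 2’s reply: the commands and the confirmation each had to cross the gap at the speed of light. A quick check using the approximate 20-billion-kilometer figure above:

C_KM_PER_S = 299_792.458   # speed of light, km/s
distance_km = 20e9         # "some 20 billion km"
one_way_hours = distance_km / C_KM_PER_S / 3600
print(f"One-way light time:    {one_way_hours:.1f} hours")      # about 18.5
print(f"Round-trip light time: {2 * one_way_hours:.1f} hours")  # about 37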

[From left] IEEE Region 10 director Lance Fung, Kevin Furguson, IEEE President-Elect Kathleen Kramer, and Ambarish Natu, past chair of the IEEE Australian Capital Territory Section, at the IEEE Milestone dedication ceremony held at the Canberra Deep Space Communication Complex in Australia. Furguson is the director of the complex. Image: Ambarish Natu

Other important missions

DSS-43 has played a vital role in missions closer to Earth as well, including NASA’s Mars Science Laboratory mission. When the space agency sent Curiosity, a car-size rover, to explore Gale Crater and Mount Sharp on Mars in 2011, DSS-43 tracked Curiosity as it made its nail-biting seven-minute descent through Mars’s atmosphere. It took roughly 20 minutes for radio signals to traverse the 320-million-kilometer distance between Mars and Earth, and then DSS-43 delivered the good news: The rover had landed safely and was operational.

“NASA plans to send future generations of astronauts from the Moon to Mars, and DSS-43 will play an important role as part of NASA’s Deep Space Network,” says Ambarish Natu, an IEEE senior member who is a past chair of the IEEE Australian Capital Territory (ACT) Section.

DSS-43 was honored with an IEEE Milestone in March during a ceremony held at the Canberra Deep Space Communication Complex.

“This is the second IEEE Milestone recognition given in Australia, and the first for ACT,” Lance Fung, IEEE Region 10 director, said during the ceremony. A plaque recognizing the technology is now displayed at the complex. It reads:

First operational in 1972 and later upgraded in 1987, Deep Space Station 43 (DSS-43) is a steerable parabolic antenna that supported the Apollo 17 lunar mission, Viking Mars landers, Pioneer and Mariner planetary probes, and Voyager’s encounters with Jupiter, Saturn, Uranus, and Neptune. Planning for many robotic and human missions to explore the solar system and beyond has included DSS-43 for critical communications and tracking in NASA’s Deep Space Network.

Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world. The IEEE Australian Capital Territory Section sponsored the nomination.


50 by 20: Wireless EV Charging Hits Key Benchmark

18 April 2024 at 14:00


Researchers at Oak Ridge National Laboratory in Tennessee recently announced that they have set a record for wireless EV charging. Their system’s magnetic coils have reached a 100-kilowatt power level. In tests in their lab, the researchers reported their system’s transmitter supplied enough energy to a receiver mounted on the underside of a Hyundai Kona EV to boost the state of charge in the car’s battery by 50 percent (enough for about 150 kilometers of range) in less than 20 minutes.
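
Those numbers hang together arithmetically. Here is a minimal sketch, assuming the long-range Hyundai Kona Electric’s roughly 64-kilowatt-hour pack and a high coil-to-battery transfer efficiency; both values are assumptions for illustration, not figures reported by Oak Ridge.

PAD_POWER_KW = 100.0         # power level reached by the ORNL coils
SESSION_MINUTES = 20.0       # "less than 20 minutes"
PACK_KWH = 64.0              # assumed Kona Electric long-range pack
EFFICIENCY = 0.96            # assumed end-to-end transfer efficiency
energy_kwh = PAD_POWER_KW * (SESSION_MINUTES / 60) * EFFICIENCY
print(f"Energy into the pack: {energy_kwh:.1f} kWh")
print(f"State-of-charge gain: {energy_kwh / PACK_KWH:.0%}")          # roughly 50 percent
# Implied consumption if that half-charge really covers ~150 km.
print(f"Implied consumption:  {energy_kwh / 150 * 100:.1f} kWh per 100 km")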

“Impressive,” says Duc Minh Nguyen, a research associate in the Communication Theory Lab at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia. Nguyen is the lead author of several papers on dynamic wireless charging, including some published when he was working toward his PhD at KAUST.

In 15 minutes, “the batteries could take on enough energy to drive for another two-and-a-half or three hours—just in time for another pit stop.”
–Omer Onar, Oak Ridge National Laboratory

The Oak Ridge announcement marks the latest milestone in work on wireless charging that stretches back more than a decade. As IEEE Spectrum reported in 2018, WiTricity, headquartered in Watertown, Mass., had announced a partnership with an unspecified automaker to install wireless charging receivers on its EVs. Then in 2021, the company revealed that it was working with Hyundai to outfit some of its Genesis GV60 EVs with wireless charging. (In early 2023, Car Buzz reported that it had sniffed out paperwork pointing to Hyundai’s plans to equip its Ioniq 5 EV with wireless charging capability.)

The plan, said WiTricity, was to equip EVs with magnetic resonance charging capability so that if such a vehicle were parked over a static charging pad installed in, say, the driver’s garage, the battery would reach full charge overnight. By 2020, we noted, a partnership had been worked out among Jaguar, Momentum Dynamics, Nordic taxi operator Cabonline, and charging company Fortum Recharge. That group set out to outfit 25 Jaguar I-Pace electric SUVs with Momentum Dynamics’ inductive charging receivers. The receivers and transmitters, rated at 50 to 75 kilowatts, were designed so that any of the specially equipped taxis would receive enough energy for 80 kilometers of range by spending 15 minutes above energized coils embedded in the pavement as the vehicle worked its way through a taxi queue. Now, according to Oak Ridge, roughly the same amount of charging time will yield about 1.5 times that range.

The Oak Ridge research team admits that installing wireless charging pads is expensive, but they say dynamic and static wireless charging can play an important role in expanding the EV charging infrastructure.

This magnetic resonance transmitter pad can wirelessly charge an EV outfitted with a corresponding receiver. Image: Oak Ridge National Laboratory

Omer Onar, an R&D staff member in the Power Electronics and Electric Machinery Group at Oak Ridge and a member of the team that developed the newest version of the wireless charging system, envisions static versions of these wireless charging systems being useful even for extended drives on highways. He imagines them being placed under specially marked parking spaces that let drivers pull up and start charging without plugging in. “The usual routine—fueling up, using the restroom, and grabbing coffee or a snack—usually takes about 15 minutes or more. In that amount of time, the batteries could take on enough energy to drive for another two-and-a-half or three hours—just in time for another pit stop.” What’s more, says Onar, he and his colleagues are still working to refine the system so it will transfer energy more efficiently than the one-off prototype they built in their lab.

Meanwhile, Israeli company Electreon has already installed electrified roads for pilot projects in Sweden, Norway, Italy, and other European countries, and has plans for similar projects in the United States. The company found that by installing a stationary wireless charging spot at one terminal end of a bus route near Tel Aviv University (its first real-world project), electric buses operating on that route were able to ferry passengers back and forth using batteries with one-tenth the storage capacity that was previously deemed necessary. Smaller batteries mean cheaper vehicles. What’s more, says Nguyen, charging a battery in short bursts throughout the day, instead of depleting it and then refilling it with, say, an hour-long charge at a supercharging station, extends the battery’s life.
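
As a rough illustration of why a terminal top-up allows such a drastic downsizing, consider the sketch below; every route parameter in it is made up for the example and does not come from Electreon.

ROUTE_KM = 20.0        # hypothetical round-trip length of the bus route
LOOPS_PER_DAY = 10     # hypothetical number of loops per day
KWH_PER_KM = 1.2       # hypothetical bus energy consumption
RESERVE = 1.5          # 50 percent margin on whatever must be stored
full_day_pack = ROUTE_KM * LOOPS_PER_DAY * KWH_PER_KM * RESERVE   # no top-ups
per_loop_pack = ROUTE_KM * KWH_PER_KM * RESERVE                   # top-up after every loop
print(f"Pack without top-ups: {full_day_pack:.0f} kWh")
print(f"Pack with top-ups:    {per_loop_pack:.0f} kWh")
print(f"Ratio: {per_loop_pack / full_day_pack:.2f}")   # 0.10, i.e. one-tenth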


Hydrogen Is Coming to the Rescue

16 April 2024 at 17:43


A consortium of U.S. federal agencies has pooled their funds and wide array of expertise to reinvent the emergency vehicle. The hybrid electric box truck they’ve come up with is carbon neutral. And in the aftermath of a natural disaster like a tornado or wildfire, the vehicle, called H2Rescue, can supply electric power and potable water to survivors while acting as a temperature-controlled command center for rescue personnel.

The agencies that funded and developed it from an idea on paper to a functional Class 7 emergency vehicle prototype say they are pleased with the outcome of the project, which is now being used for further research and development.

“Any time the fuel cell is producing energy to move the vehicle or to export power, it’s generating water.” –Nicholas Josefik, U.S. Army Corps of Engineers Construction Research Lab

Commercial truck and locomotive engine maker Cummins, which has pledged to make all its heavy-duty road and rail vehicles zero-emission by 2050, won a $1 million competitive award to build the H2Rescue, which gets its power from a hydrogen fuel cell that charges its lithium-ion batteries. In demonstrations, including one last summer at National Renewable Energy Laboratory facilities in Colorado, the truck proved capable of driving 290 kilometers, then taking on the roles of power plant, mobile command center, and (courtesy of the truck’s “exhaust”) supplier of clean drinking water.

A hydrogen tank system located behind the 15,000-kilogram truck’s cab holds 175 kg of fuel at 70 megapascals (700 bars) of pressure. Civilian anthropology researcher Lance Larkin at the U.S. Army Corps of Engineers’ Construction Engineering Research Laboratory (CERL) in Champaign, Ill., told IEEE Spectrum that that’s enough fuel for the fuel cell to generate 1,800 kilowatt-hours of energy. Or enough, he says, to keep the lights on in 15 to 20 average U.S. homes for about three days.

The fuel cell can provide energy directly to the truck’s powertrain. However, it mainly charges two battery packs with a total capacity of 155 kilowatt-hours, because batteries are better than fuel cells at handling the variable power demands that come with vehicle propulsion. When the truck is at a disaster site, the fuel cell can automatically turn itself on and off to keep the batteries charged while they export electric power to buildings that would otherwise be in the dark. “If it’s called upon to export, say, 3 kilowatts to keep a few computers running, the fuel in its tanks could keep them powered for weeks,” says Nicholas Josefik, an industrial engineer at CERL.
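
Those figures check out with simple arithmetic. In the sketch below, the 30-kilowatt-hours-per-day figure for an average U.S. household is an assumption, not a number from CERL.

TOTAL_KWH = 1_800.0          # energy from 175 kg of hydrogen, per Larkin
HOME_KWH_PER_DAY = 30.0      # assumed average U.S. household consumption
homes_for_three_days = TOTAL_KWH / (HOME_KWH_PER_DAY * 3)
print(f"Homes powered for three days: about {homes_for_three_days:.0f}")   # ~20
hours_at_3_kw = TOTAL_KWH / 3.0
print(f"A 3 kW load runs for {hours_at_3_kw:.0f} hours, about {hours_at_3_kw / 24:.0f} days")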

As if that weren’t enough, an onboard storage tank captures the water that is the byproduct of the electrochemical reactions in the fuel cell. “Any time the fuel cell is producing energy to move the vehicle or to export power, it’s generating water,” says Josefik. The result: roughly 1,500 liters of clean water available any place where municipal or well water supplies are unavailable or unsafe.
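
The water supply follows directly from the fuel cell’s chemistry: each kilogram of hydrogen that reacts yields about nine kilograms of water (2 H2 + O2 -> 2 H2O). A minimal check, assuming essentially all the product water is captured:

H2_KG = 175.0                      # hydrogen capacity of the tank system
M_H2, M_H2O = 2.016, 18.015        # molar masses, g/mol
water_kg = H2_KG * (M_H2O / M_H2)  # ~9 kg of water per kg of hydrogen
water_liters = water_kg            # liquid water is roughly 1 kg per liter
print(f"Water produced: about {water_liters:,.0f} liters")   # ~1,560 L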

“When the H2Rescue drives to a location, you won’t need to pull that generator behind you, because the truck itself is a generator.” —Nicholas Josefik, U.S. Army Corps of Engineers Construction Research Lab

Just as important as what it can do, Josefik notes, is what it won’t do: “In a traditional emergency situation, you send in a diesel truck and that diesel truck is pulling a diesel-powered generator, so you can provide power to the site,” he says. “And another diesel truck is pulling in a fuel tank to fuel that diesel generator. A third truck might pull a trailer with a water tank on it.

“But when the H2Rescue drives to a location,” he continues, “You won’t need to pull that generator behind you, because the truck itself is a generator. You don’t have to drag a trailer full of water, because you know that while you’re on site, H2Rescue will be your water source.” He adds that H2Rescue will not only allow first responders to eliminate a few pieces of equipment but will also eliminate the air pollution and noise that come standard with diesel-powered vehicles and generators.

Larkin recalls that the impetus for developing the zero-emission emergency vehicle came in 2019, when a series of natural disasters across the United States, including wildfires and hurricanes, spurred action. “The organizations that funded this project were observing this and saw a need for an alternative emergency support,” he says. They asked themselves, Larkin notes, “‘What can we do to help our first responders take on these natural disasters?’ The rest, as they say, is history.”

Asked when we’ll see the Federal Emergency Management Agency, which is typically in charge of disaster response anywhere in the 50 U.S. states, dispatch the H2Rescue truck to the aftermath of, say, a hurricane, Josefik says, “This is still a research unit. We’re working on trying to build a version 2.0 that could go and support responders to an emergency.” That next version, he says, would be the result of some optimizations suggested by Cummins as it was putting the H2Rescue together. “Because this was a one-off build, [Cummins] identified a number of areas for improvement, like how they would do the wiring and the piping differently, so it’s more compact in the unit.” The aim for the second iteration, Larkin says, is “a turnkey unit, ready to operate without all the extra gauges and monitoring equipment that you wouldn’t want in a vehicle that you would turn over to somebody.”

There is no timetable for when the new and improved H2Rescue will go into production. The agencies that allocated the funds for the prototype have not yet put up the money to create its successor.
