The Saga of AD-X2, the Battery Additive That Roiled the NBS

By: Allison Marsh
1 August 2024 at 16:00


Senate hearings, a post office ban, the resignation of the director of the National Bureau of Standards, and his reinstatement after more than 400 scientists threatened to resign. Who knew a little box of salt could stir up such drama?

What was AD-X2?

It all started in 1947 when a bulldozer operator with a 6th grade education, Jess M. Ritchie, teamed up with UC Berkeley chemistry professor Merle Randall to promote AD-X2, an additive to extend the life of lead-acid batteries. The problem of these rechargeable batteries’ dwindling capacity was well known. If AD-X2 worked as advertised, millions of car owners would save money.

Jess M. Ritchie demonstrates his AD-X2 battery additive before the Senate Select Committee on Small Business. National Institute of Standards and Technology Digital Collections

A basic lead-acid battery has two electrodes, one of lead and the other of lead dioxide, immersed in dilute sulfuric acid. When power is drawn from the battery, the chemical reaction splits the acid molecules, and lead sulfate is deposited in the solution. When the battery is charged, the chemical process reverses, returning the electrodes to their original state—almost. Each time the cell is discharged, the lead sulfate “hardens” and less of it can dissolve in the sulfuric acid. Over time, it flakes off, and the battery loses capacity until it’s dead.
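
For readers who want the chemistry spelled out, the textbook overall cell reaction summarizes that cycle. Discharge runs left to right, charging runs right to left (this is a simplification of the two electrode half-reactions):

\[ \mathrm{Pb} + \mathrm{PbO_2} + 2\,\mathrm{H_2SO_4} \;\rightleftharpoons\; 2\,\mathrm{PbSO_4} + 2\,\mathrm{H_2O} \]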

By the 1930s, so many companies had come up with battery additives that the U.S. National Bureau of Standards stepped in. Its lab tests revealed that most were variations of salt mixtures, such as sodium and magnesium sulfates. Although the additives might help the battery charge faster, they didn’t extend battery life. In May 1931, NBS (now the National Institute of Standards and Technology, or NIST) summarized its findings in Letter Circular No. 302: “No case has been found in which this fundamental reaction is materially altered by the use of these battery compounds and solutions.”

Of course, innovation never stops. Entrepreneurs kept bringing new battery additives to market, and the NBS kept testing them and finding them ineffective.

Do battery additives work?

After World War II, the National Better Business Bureau decided to update its own publication on battery additives, “Battery Compounds and Solutions.” The publication included a March 1949 letter from NBS director Edward Condon, reiterating the NBS position on additives. Prior to heading NBS, Condon, a physicist, had been associate director of research at Westinghouse Electric in Pittsburgh and a consultant to the National Defense Research Committee. He helped set up MIT’s Radiation Laboratory, and he was also briefly part of the Manhattan Project. Needless to say, Condon was familiar with standard practices for research and testing.

Meanwhile, Ritchie claimed that AD-X2’s secret formula set it apart from the hundreds of other additives on the market. He convinced his senator, William Knowland, a Republican from Oakland, Calif., to write to NBS and request that AD-X2 be tested. NBS declined, not out of any prejudice or ill will, but because it tested products only at the request of other government agencies. The bureau also had a longstanding policy of not naming the brands it tested and not allowing its findings to be used in advertisements.

AD-X2 consisted mainly of Epsom salt and Glauber’s salt. National Institute of Standards and Technology Digital Collections

Ritchie cried foul, claiming that NBS was keeping new businesses from entering the marketplace. Merle Randall launched an aggressive correspondence with Condon and George W. Vinal, chief of NBS’s electrochemistry section, extolling AD-X2 and the testimonials of many users. In its responses, NBS patiently pointed out the difference between anecdotal evidence and rigorous lab testing.

Enter the Federal Trade Commission. The FTC had received a complaint from the National Better Business Bureau, which suspected that Pioneers, Inc.—Randall and Ritchie’s distribution company—was making false advertising claims. On 22 March 1950, the FTC formally asked NBS to test AD-X2.

By then, NBS had already extensively tested the additive. A chemical analysis revealed that it was 46.6 percent magnesium sulfate (Epsom salt) and 49.2 percent sodium sulfate (Glauber’s salt, a horse laxative), with the remainder being water of hydration (water chemically bound within the salts’ crystal structure). That is, AD-X2 was similar in composition to every other additive on the market. But, because of its policy of not disclosing which brands it tested, NBS didn’t immediately announce what it had learned.

The David and Goliath of battery additives

NBS then did something unusual: It agreed to ignore its own policy and let the National Better Business Bureau include the results of its AD-X2 tests in a public statement, which was published in August 1950. The NBBB allowed Pioneers to include a dissenting comment: “These tests were not run in accordance with our specification and therefore did not indicate the value to be derived from our product.”

Far from being cowed by the NBBB’s statement, Ritchie was energized, and his story was taken up by the mainstream media. Newsweek’s coverage pitted an up-from-your-bootstraps David against an overreaching governmental Goliath. Trade publications, such as Western Construction News and Batteryman, also published flattering stories about Pioneers. AD-X2 sales soared.

Then, in January 1951, NBS released its updated pamphlet on battery additives, Circular 504. Once again, tests by the NBS found no difference in performance between batteries treated with additives and the untreated control group. The Government Printing Office sold the circular for 15 cents, and it was one of NBS’s most popular publications. AD-X2 sales plummeted.

Ritchie needed a new arena in which to challenge NBS. He turned to politics. He called on all of his distributors to write to their senators. Between July and December 1951, 28 U.S. senators and one U.S. representative wrote to NBS on behalf of Pioneers.

Condon was losing his ability to effectively represent the bureau. Although the Senate had confirmed his nomination as director without opposition in 1945, he had been under investigation by the House Committee on Un-American Activities for several years. FBI Director J. Edgar Hoover suspected Condon of being a Soviet spy. (To be fair, Hoover suspected the same of many people.) Condon was repeatedly cleared and had the public backing of many prominent scientists.

But Condon felt the investigations were becoming too much of a distraction, and so he resigned on 10 August 1951. Allen V. Astin became acting director, and then permanent director the following year. And he inherited the AD-X2 mess.

Astin had been with NBS since 1930. Originally working in the electronics division, he developed radio telemetry techniques, and he designed instruments to study dielectric materials and measurements. During World War II, he shifted to military R&D, most notably development of the proximity fuse, which detonates an explosive device as it approaches a target. I don’t think that work prepared him for the political bombs that Ritchie and his supporters kept lobbing at him.

Mr. Ritchie almost goes to Washington

On 6 September 1951, another government agency entered the fray. C.C. Garner, chief inspector of the U.S. Post Office Department, wrote to Astin requesting yet another test of AD-X2. NBS dutifully submitted a report that the additive had “no beneficial effects on the performance of lead acid batteries.” The post office then charged Pioneers with mail fraud, and Ritchie was ordered to appear at a hearing in Washington, D.C., on 6 April 1952. More tests were ordered, and the hearing was delayed for months.

Back in March 1950, Ritchie had lost his biggest champion when Merle Randall died. In preparation for the hearing, Ritchie hired another scientist: Keith J. Laidler, an assistant professor of chemistry at the Catholic University of America. Laidler wrote a critique of Circular 504, questioning NBS’s objectivity and testing protocols.

Ritchie also got Harold Weber, a professor of chemical engineering at MIT, to agree to test AD-X2 and to work as an unpaid consultant to the Senate Select Committee on Small Business.

Life was about to get more complicated for Astin and NBS.

Why did the NBS Director resign?

Trying to put an end to the Pioneers affair, Astin agreed in the spring of 1952 that NBS would conduct a public test of AD-X2 according to terms set by Ritchie. Once again, the bureau concluded that the battery additive had no beneficial effect.

However, NBS deviated slightly from the agreed-upon parameters for the test. Although the bureau had a good scientific reason for the minor change, Ritchie had a predictably overblown reaction—NBS cheated!

Then, on 18 December 1952, the Senate Select Committee on Small Business—for which Ritchie’s ally Harold Weber was consulting—issued a press release summarizing the results from the MIT tests: AD-X2 worked! The results “demonstrate beyond a reasonable doubt that this material is in fact valuable, and give complete support to the claims of the manufacturer.” NBS was “simply psychologically incapable of giving Battery AD-X2 a fair trial.”

The National Bureau of Standards’ regular tests of battery additives found that the products did not work as claimed. National Institute of Standards and Technology Digital Collections

But the press release distorted the MIT results. The MIT tests had focused on diluted solutions and slow charging rates, not the normal use conditions for automobiles, and even then AD-X2’s impact was marginal. Once NBS scientists got their hands on the report, they identified the flaws in the testing.

How did the AD-X2 controversy end?

The post office finally got around to holding its mail fraud hearing in the fall of 1952. Ritchie failed to attend in person and didn’t realize his reports would not be read into the record without him, which meant the hearing was decidedly one-sided in favor of NBS. On 27 February 1953, the Post Office Department issued a mail fraud alert. All of Pioneers’ mail would be stopped and returned to sender stamped “fraudulent.” If this charge stuck, Ritchie’s business would crumble.

But something else happened during the fall of 1952: Dwight D. Eisenhower, running on a pro-business platform, was elected U.S. president in a landslide.

Ritchie found a sympathetic ear in Eisenhower’s newly appointed Secretary of Commerce Sinclair Weeks, who acted decisively. The mail fraud alert had been issued on a Friday. Over the weekend, Weeks had a letter hand-delivered to Postmaster General Arthur Summerfield, another Eisenhower appointee. By Monday, the fraud alert had been suspended.

What’s more, Weeks found that Astin was “not sufficiently objective” and lacked a “business point of view,” and so he asked for Astin’s resignation on 24 March 1953. Astin complied. Perhaps Weeks thought this would be a mundane dismissal, just one of the thousands of political appointments that change hands with every new administration. That was not the case.

More than 400 NBS scientists—over 10 percent of the bureau’s technical staff—threatened to resign in protest. The American Association for the Advancement of Science also backed Astin and NBS. In an editorial published in Science, the AAAS called the battery additive controversy itself “minor.” “The important issue is the fact that the independence of the scientist in his findings has been challenged, that a gross injustice has been done, and that scientific work in the government has been placed in jeopardy,” the editorial stated.

National Bureau of Standards director Edward Condon [left] resigned in 1951 because investigations into his political beliefs were impeding his ability to represent the bureau. Incoming director Allen V. Astin [right] inherited the AD-X2 controversy, which eventually led to Astin’s dismissal and then his reinstatement after a large-scale protest by NBS researchers and others. National Institute of Standards and Technology Digital Collections

Clearly, AD-X2’s effectiveness was no longer the central issue. The controversy was a stand-in for a larger debate concerning the role of government in supporting small business, the use of science in making policy decisions, and the independence of researchers. Over the previous few years, highly respected scientists, including Edward Condon and J. Robert Oppenheimer, had been repeatedly investigated for their political beliefs. The request for Astin’s resignation was yet another government incursion into scientific freedom.

Weeks, realizing his mistake, temporarily reinstated Astin on 17 April 1953, the day the resignation was supposed to take effect. He also asked the National Academy of Sciences to test AD-X2 in both the lab and the field. By the time the academy’s report came out in October 1953, Weeks had permanently reinstated Astin. The report, unsurprisingly, concluded that NBS was correct: AD-X2 had no merit. Science had won.

NIST makes a movie

On 9 December 2023, NIST released the 20-minute docudrama The AD-X2 Controversy. The film won the Best True Story Narrative and Best of Festival at the 2023 NewsFest Film Festival. I recommend taking the time to watch it.


Many of the actors are NIST staff and scientists, and they really get into their roles. Much of the dialogue comes verbatim from primary sources, including congressional hearings and contemporary newspaper accounts.

Despite being an in-house production, NIST’s film has a Hollywood connection. The film features brief interviews with actors John and Sean Astin (of The Lord of the Rings and Stranger Things fame)—NBS director Astin’s son and grandson.

The AD-X2 controversy is just as relevant today as it was 70 years ago. Scientific research, business interests, and politics remain deeply entangled. If the public is to have faith in science, it must have faith in the integrity of scientists and the scientific method. I have no objection to science being challenged—that’s how science moves forward—but we have to make sure that neither profit nor politics is tipping the scales.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the August 2024 print issue as “The AD-X2 Affair.”

References


I first heard about AD-X2 after my IEEE Spectrum editor sent me a notice about NIST’s short docudrama The AD-X2 Controversy, which you can, and should, stream online. NIST held a colloquium on 31 July 2018 with John Astin and his brother Alexander (Sandy), where they recalled what it was like to be college students when their father’s reputation was on the line. The agency has also compiled a wonderful list of resources, including many of the primary source government documents.

The AD-X2 controversy played out in the popular media, and I read dozens of articles following the almost daily twists and turns in the case in the New York Times, Washington Post, and Science.

I found Elio Passaglia’s A Unique Institution: The National Bureau of Standards 1950-1969 to be particularly helpful. The AD-X2 controversy is covered in detail in Chapter 2: Testing Can Be Troublesome.

A number of graduate theses have been written about AD-X2. One I consulted was Samuel Lawrence’s 1958 thesis “The Battery AD-X2 Controversy: A Study of Federal Regulation of Deceptive Business Practices.” Lawrence also published the 1962 book The Battery Additive Controversy.



Build a Radar Cat Detector

By: Stephen Cass
29 July 2024 at 16:00


You have a closed box. There may be a live cat inside, but you won’t know until you open the box. For most people, this situation is a theoretical conundrum that probes the foundations of quantum mechanics. For me, however, it’s a pressing practical problem, not least because physics completely skates over the vital issue of how annoyed the cat will be when the box is opened. But fortunately, engineering comes to the rescue, in the form of a new US $50 maker-friendly pulsed coherent radar sensor from SparkFun.

Perhaps I should back up a little bit. Working from home during the pandemic, my wife and I discovered a colony of feral cats living in the backyards of our block in New York City. We reversed the colony’s growth by doing trap-neuter-return (TNR) on as many of its members as we could, and we purchased three Feralvilla outdoor shelters to see our furry neighbors through the harsh New York winters. These roughly cube-shaped insulated shelters allow the cats to enter via an opening in a raised floor. A removable lid on top allows us to replace straw bedding every few months. It’s impossible to see inside the shelter without removing the lid, meaning you run the risk of surprising a clawed predator that, just moments before, had been enjoying a quiet snooze.

The enclosure for the radar [left column] is made of basswood (adding cat ears on top is optional). A microcontroller [top row, middle column] processes the results from the radar module [top row, right column] and illuminates the LEDs [right column, second from top] accordingly. A battery and on/off switch [bottom row, left to right] make up the power supply. James Provost

Feral cats respond to humans differently than socialized pet cats do. They see us as threats rather than bumbling servants. Even after years of daily feeding, most of the cats in our block’s colony will not let us approach closer than a meter or two, let alone suffer being touched. They have claws that have never seen a clipper. And they don’t like being surprised or feeling hemmed in. So I wanted a way to find out if a shelter was occupied before I popped open its lid for maintenance. And that’s where radar comes in.

SparkFun’s pulsed coherent radar module is based on Acconeer’s low-cost A121 sensor. Smaller than a fingernail, the sensor operates at 60 gigahertz, which means its signal can penetrate many common materials. As the signal passes through a material, some of it is reflected back to the sensor, allowing you to determine distances to multiple surfaces with millimeter-level precision. The radar can be put into a “presence detector” mode—intended to flag whether or not a human is present—in which it looks for changes in the distance of reflections to identify motion.
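
To make that change-detection idea concrete, here is a toy sketch in Python. It runs on simulated distance readings only; the sensor’s built-in presence algorithm is, of course, far more sophisticated, and the threshold here is an assumption chosen purely for illustration.

def presence_from_distances(readings_mm, threshold_mm=2.0):
    """Flag motion if successive reflection distances change by more than a threshold."""
    return any(abs(later - earlier) > threshold_mm
               for earlier, later in zip(readings_mm, readings_mm[1:]))

# A sleeping cat's breathing shifts its chest wall by a few millimeters.
print(presence_from_distances([412.0, 412.1, 415.3, 411.8]))  # True: something is moving
print(presence_from_distances([412.0, 412.1, 412.0, 411.9]))  # False: an empty shelter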

As soon as I saw the announcement for SparkFun’s module, the wheels began turning. If the radar could detect a human, why not a feline? Sure, I could have solved my is-there-a-cat-in-the-box problem with less sophisticated technology, by, say, putting a pressure sensor inside the shelter. But that would have required a permanent setup complete with weatherproofing, power, and some way of getting data out. Plus I’d have to perform three installations, one for each shelter. For information I needed only once every few months, that seemed a bit much. So I ordered the radar module, along with a $30 IoT RedBoard microcontroller. The RedBoard operates at the same 3.3 volts as the radar and can configure the module and parse its output.

If the radar could detect a human, why not a feline?

Connecting the radar to the RedBoard was a breeze, as they both have Qwiic 4-wire interfaces, which provide power along with an I2C serial connection to peripherals. SparkFun’s Arduino libraries and example code let me quickly test the idea’s feasibility by connecting the microcontroller to a host computer via USB, and I could view the results from the radar via a serial monitor. Experiments with our indoor cats (two defections from the colony) showed that the motion of their breathing was enough to trigger the presence detector, even when they were sound asleep. Further testing showed the radar could penetrate the wooden walls of the shelters and the insulated lining.

The next step was to make the thing portable. I added a small $11 lithium battery and spliced an on/off switch into its power lead. I hooked up two gumdrop LEDs to the RedBoard’s input/output pins and modified SparkFun’s sample scripts to illuminate the LEDs based on the output of the presence detector: a green LED for “no cat” and red for “cat.” I built an enclosure out of basswood, mounted the circuit boards and battery, and cut a hole in the back as a window for the radar module. (Side note: Along with tending feral cats, another thing I tried during the pandemic was 3D-printing plastic enclosures for projects. But I discovered that cutting, drilling, and gluing wood was faster, sturdier, and much more forgiving when making one-offs or prototypes.)

The radar sensor sends out 60-gigahertz pulses through the walls and lining of the shelter. As the radar penetrates the layers, some radiation is reflected back to the sensor, which it detects to determine distances. Some materials will reflect the pulse more strongly than others, depending on their electrical permittivity. James Provost

I also modified the scripts to adjust the range over which the presence detector scans. When I hold the detector against the wall of a shelter, it looks only at reflections coming from the space inside that wall and the opposite side, a distance of about 50 centimeters. As all the cats in the colony are adults, they take up enough of a shelter’s volume to intersect any such radar beam, as long as I don’t place the detector near a corner.

I performed in-shelter tests of the portable detector with one of our indoor cats, bribed with treats to sit in the open box for several seconds at a time. The detector did successfully spot him whenever he was inside, although it is prone to false positives. I will be trying to reduce these errors by adjusting the plethora of available configuration settings for the radar. But in the meantime, false positives are much more desirable than false negatives: A “no cat” light means it’s definitely safe to open the shelter lid, and my nerves (and the cats’) are the better for it.


The Engineer Who Pins Down the Particles at the LHC

By: Edd Gent
26 July 2024 at 15:00


The Large Hadron Collider has transformed our understanding of physics since it began operating in 2008, enabling researchers to investigate the fundamental building blocks of the universe. Some 100 meters below the border between France and Switzerland, particles accelerate along the LHC’s 27-kilometer circumference, nearly reaching the speed of light before smashing together.

The LHC is often described as the biggest machine ever built. And while the physicists who carry out experiments at the facility tend to garner most of the attention, it takes hundreds of engineers and technicians to keep the LHC running. One such engineer is Irene Degl’Innocenti, who works in digital electronics at the European Organization for Nuclear Research (CERN), which operates the LHC. As a member of CERN’s beam instrumentation group, Degl’Innocenti creates custom electronics that measure the position of the particle beams as they travel.

Irene Degl’Innocenti

Employer: CERN

Occupation: Digital electronics engineer

Education: Bachelor’s and master’s degrees in electrical engineering; Ph.D. in electrical, electronics, and communications engineering, University of Pisa, in Italy

“It’s a huge machine that does very challenging things, so the amount of expertise needed is vast,” Degl’Innocenti says.

The electronics she works on make up only a tiny part of the overall operation, something Degl’Innocenti is keenly aware of when she descends into the LHC’s cavernous tunnels to install or test her equipment. But she gets great satisfaction from working on such an important endeavor.

“You’re part of something that is very huge,” she says. “You feel part of this big community trying to understand what is actually going on in the universe, and that is very fascinating.”

Opportunities to Work in High-Energy Physics

Growing up in Italy, Degl’Innocenti wanted to be a novelist. Throughout high school she leaned toward the humanities, but she had a natural affinity for math, thanks in part to her mother, who is a science teacher.

“I’m a very analytical person, and that has always been part of my mind-set, but I just didn’t find math charming when I was little,” Degl’Innocenti says. “It took a while to realize the opportunities it could open up.”

She started exploring electronics around age 17 because it seemed like the most direct way to translate her logical, mathematical way of thinking into a career. In 2011, she enrolled in the University of Pisa, in Italy, earning a bachelor’s degree in electrical engineering in 2014 and staying on to earn a master’s degree in the same subject.

At the time, Degl’Innocenti had no idea there were opportunities for engineers to work in high-energy physics. But she learned that a fellow student had attended a summer internship at Fermilab, the particle physics and accelerator laboratory in Batavia, Ill. So she applied for and won an internship there in 2015. Since Fermilab and CERN collaborate closely, she was able to help design a data-processing board for the LHC’s Compact Muon Solenoid experiment.

Next she looked for an internship closer to home and discovered CERN’s technical student program, which allows students to work on a project over the course of a year. Working in the beam-instrumentation group, Degl’Innocenti designed a digital-acquisition system that became the basis for her master’s thesis.

Measuring the Position of Particle Beams

After receiving her master’s in 2017, Degl’Innocenti went on to pursue a Ph.D., also at the University of Pisa. She conducted her research at CERN’s beam-position section, which builds equipment to measure the position of particle beams within CERN’s accelerator complex. The LHC has roughly 1,000 monitors spaced around the accelerator ring. Each monitor typically consists of two pairs of sensors positioned on opposite sides of the accelerator pipe, and it is possible to measure the beam’s horizontal and vertical positions by comparing the strength of the signal at each sensor.

The underlying concept is simple, Degl’Innocenti says, but these measurements must be precise. Bunches of particles pass through the monitors every 25 nanoseconds, and their position must be tracked to within 50 micrometers.
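
The arithmetic behind that comparison is the classic difference-over-sum normalization. The short Python sketch below is not CERN’s code, just the textbook relation, and the calibration factor is an assumed value standing in for a monitor-specific constant:

def beam_position(signal_a, signal_b, scale_mm=10.0):
    """Estimate the beam's offset from the pipe center along one axis.

    signal_a, signal_b: amplitudes from the two opposing pickups.
    scale_mm: assumed calibration factor; the real value depends on the monitor geometry.
    """
    return scale_mm * (signal_a - signal_b) / (signal_a + signal_b)

# A beam slightly closer to pickup A reads as a positive offset.
print(beam_position(1.05, 0.95))  # 0.5 (millimeters, under the assumed scale factor)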

“We start developing a system years in advance, and then it has to work for a couple of decades.”

Most of the signal processing is normally done in analog, but during her Ph.D., she focused on shifting as much of this work as possible to the digital domain because analog circuits are finicky, she says. They need to be precisely calibrated, and their accuracy tends to drift over time or when temperatures fluctuate.

“It’s complex to maintain,” she says. “It becomes particularly tricky when you have 1,000 monitors, and they are located in an accelerator 100 meters underground.”

Information is lost when analog is converted to digital, however, so Degl’Innocenti analyzed the performance of the latest analog-to-digital converters (ADCs) and investigated their effect on position measurements.

Designing Beam-Monitor Electronics

After completing her Ph.D. in electrical, electronics, and communications engineering in 2021, Degl’Innocenti joined CERN as a senior postdoctoral fellow. Two years later, she became a full-time employee there, applying the results of her research to developing new hardware. She’s currently designing a new beam-position monitor for the High-Luminosity upgrade to the LHC, expected to be completed in 2028. This new system will likely use a system-on-chip to house most of the electronics, including several ADCs and a field-programmable gate array (FPGA) that Degl’Innocenti will program to run a new digital signal-processing algorithm.

She’s part of a team of just 15 who handle design, implementation, and ongoing maintenance of CERN’s beam-position monitors. So she works closely with the engineers who design sensors and software for those instruments and the physicists who operate the accelerator and set the instruments’ requirements.

“We start developing a system years in advance, and then it has to work for a couple of decades,” Degl’Innocenti says.

Opportunities in High-Energy Physics

High-energy physics has a variety of interesting opportunities for engineers, Degl’Innocenti says, including high-precision electronics, vacuum systems, and cryogenics.

“The machines are very large and very complex, but we are looking at very small things,” she says. “There are a lot of big numbers involved both at the large scale and also when it comes to precision on the small scale.”

FPGA design skills are in high demand at all kinds of research facilities, and embedded systems are also becoming more important, Degl’Innocenti says. The key is keeping an open mind about where to apply your engineering knowledge, she says. She never thought there would be opportunities for people with her skill set at CERN.

“Always check what technologies are being used,” she advises. “Don’t limit yourself by assuming that working somewhere would not be possible.”

This article appears in the August 2024 print issue as “Irene Degl’Innocenti.”


A Bosch Engineer Speeds Hybrid Race Cars to the Finish Line

By: Edd Gent
24 June 2024 at 16:00


When it comes to motorsports, the need for speed isn’t only on the racetrack. Engineers who support race teams also need to work at a breakneck pace to fix problems, and that’s something Aakhilesh Singhania relishes.

Singhania is a senior applications engineer at Bosch Engineering, in Novi, Mich. He develops and supports electronic control systems for hybrid race cars, which feature combustion engines and battery-powered electric motors.

Aakhilesh Singhania

Employer: Bosch Engineering

Occupation: Senior applications engineer

Education: Bachelor’s degree in mechanical engineering, Manipal Institute of Technology, India; master’s degree in automotive engineering, University of Michigan, Ann Arbor

His vehicles compete in two iconic endurance races: the Rolex 24 at Daytona in Daytona Beach, Fla., and the 24 Hours of Le Mans in France. He splits his time between refining the underlying technology and providing trackside support on competition day. Given the relentless pace of the racing calendar and the intense time pressure when cars are on the track, the job is high octane. But Singhania says he wouldn’t have it any other way.

“I’ve done jobs where the work gets repetitive and mundane,” he says. “Here, I’m constantly challenged. Every second counts, and you have to be very quick at making decisions.”

An Early Interest in Motorsports

Growing up in Kolkata, India, Singhania picked up a fascination with automobiles from his father, a car enthusiast.

In 2010, when Singhania began his mechanical engineering studies at India’s Manipal Institute of Technology, he got involved in the Formula Student program, an international engineering competition that challenges teams of university students to design, build, and drive a small race car. The cars typically weigh less than 250 kilograms and can have an engine no larger than 710 cubic centimeters.

“It really hooked me,” he says. “I devoted a lot of my spare time to the program, and the experience really motivated me to dive further into motorsports.”

One incident in particular shaped Singhania’s career trajectory. In 2013, he was leading Manipal’s Formula Student team and was one of the drivers for a competition in Germany. When he tried to start the vehicle, smoke poured out of the battery, and the team had to pull out of the race.

“I asked myself what I could have done differently,” he says. “It was my lack of knowledge of the electrical system of the car that was the problem.” So, he decided to get more experience and education.

Learning About Automotive Electronics

After graduating in 2014, Singhania began working on engine development for Indian car manufacturer Tata Motors in Pune. In 2016, determined to fill the gaps in his knowledge about automotive electronics, he left India to begin a master’s degree program in automotive engineering at the University of Michigan in Ann Arbor.

He took courses in battery management, hybrid controls, and control-system theory, parlaying this background into an internship with Bosch in 2017. After graduation in 2018, he joined Bosch full-time as a calibration engineer, developing technology for hybrid and electric vehicles.

Transitioning into motorsports required perseverance, Singhania says. He became friendly with the Bosch team that worked on electronics for race cars. Then in 2020 he got his big break.

That year, the U.S.-based International Motor Sports Association and the France-based Automobile Club de l’Ouest created standardized rules to allow the same hybrid race cars to compete in both the Sportscar Championship in North America, host of the famous Daytona race, and the global World Endurance Championship, host of Le Mans.

The Bosch motorsports team began preparing a proposal to provide the standardized hybrid system. Singhania, whose job already included creating simulations of how vehicles could be electrified, volunteered to help.

“I’m constantly challenged. Every second counts, and you have to be very quick at making decisions.”

The competition organizers selected Bosch as lead developer of the hybrid system that would be provided to all teams. Bosch engineers would also be required to test the hardware they supplied to each team to ensure none had an advantage.

“The performance of all our parts in all the cars has to fall within 1 percent of each other,” Singhania says.

After Bosch won the contract, Singhania officially became a motorsports calibration engineer, responsible for tweaking the software to fit the idiosyncrasies of each vehicle.

In 2022 he stepped up to his current role: developing software for the hybrid control unit (HCU), which is essentially the brains of the vehicle. The HCU helps coordinate all of the different subsystems such as the engine, battery, and electric motor and is responsible for balancing power requirements among these different components to maximize performance and lifetime.

Bosch’s engineers also designed software known as an equity model, which runs on the HCU. It is based on historical data collected from the operation of the hybrid systems’ various components, and controls their performance in real time to ensure all the teams’ hardware operates at the same level.

In addition, Singhania creates simulations of the race cars, which are used to better understand how the different components interact and how altering their configuration would affect performance.

Troubleshooting Problems on Race Day

Technology development is only part of Singhania’s job. On race days, he works as a support engineer, helping troubleshoot problems with the hybrid system as they crop up. Singhania and his colleagues monitor each team’s hardware using computers on Bosch’s race-day trailer, a mobile nerve center hardwired to the organizers’ control center on the race track.

“We are continuously looking at all the telemetry data coming from the hybrid system and analyzing [the system’s] health and performance,” he says.

If the Bosch engineers spot an issue or a team notifies them of a problem, they rush to the pit stall to retrieve a USB stick from the vehicle, which contains detailed data to help them diagnose and fix the issue.

After the race, the Bosch engineers analyze the telemetry data to identify ways to boost the standardized hybrid system’s performance for all the teams. In motorsports, where the difference between winning and losing can come down to fractions of a second, that kind of continual improvement is crucial.

Customers “put lots of money into this program, and they are there to win,” Singhania says.

Breaking Into Motorsports Engineering

Many engineers dream about working in the fast-paced and exciting world of motorsports, but it’s not easy breaking in. The biggest lesson Singhania learned is that if you don’t ask, you don’t get invited.

“Keep pursuing them because nobody’s going to come to you with an offer,” he says. “You have to keep talking to people and be ready when the opportunity presents itself.”

Demonstrating that you have experience contributing to challenging projects is a big help. Many of the engineers Bosch hires have been involved in Formula Student or similar automotive-engineering programs, such as the EcoCAR EV Challenge, says Singhania.

The job isn’t for everyone, though, he says. It’s demanding and requires a lot of travel and working on weekends during race season. But if you thrive under pressure and have a knack for problem solving, there are few more exciting careers.


Lord Kelvin and His Analog Computer

By: Allison Marsh
2 June 2024 at 15:00


In 1870, William Thomson, mourning the death of his wife and flush with cash from various patents related to the laying of the first transatlantic telegraph cable, decided to buy a yacht. His schooner, the Lalla Rookh, became Thomson’s summer home and his base for hosting scientific parties. It also gave him firsthand experience with the challenge of accurately predicting tides.

Mariners have always been mindful of the tides lest they find themselves beached on low-lying shoals. Naval admirals guarded tide charts as top-secret information. Civilizations recognized a relationship between the tides and the moon early on, but it wasn’t until 1687 that Isaac Newton explained how the gravitational forces of the sun and the moon caused them. Nine decades later, the French astronomer and mathematician Pierre-Simon Laplace suggested that the tides could be represented as harmonic oscillations. And a century after that, Thomson used that concept to design the first machine for predicting them.

Lord Kelvin’s Rising Tide

William Thomson was born on 26 June 1824, which means this month marks his 200th birthday and a perfect time to reflect on his all-around genius. Thomson was a mathematician, physicist, engineer, and professor of natural philosophy. Queen Victoria knighted him in 1866 for his work on the transatlantic cable, then elevated him to the rank of baron in 1892 for his contributions to thermodynamics, and so he is often remembered as Lord Kelvin. He determined the correct value of absolute zero, for which he is honored by the SI unit of temperature—the kelvin. He dabbled in atmospheric electricity, was a proponent of the vortex theory of the atom, and in the absence of any knowledge of radioactivity made a rather poor estimation of the age of the Earth, which he gave as somewhere between 24 million and 400 million years.

William Thomson, also known as Lord Kelvin, is best known for establishing the value of absolute zero. He believed in the practical application of scientific knowledge and invented a wide array of useful, and beautiful, devices. Pictorial Press/Alamy

Thomson’s tide-predicting machine calculated the tide pattern for a given location based on 10 cyclic constituents associated with the periodic motions of the Earth, sun, and moon. (There are actually hundreds of periodic motions associated with these objects, but modern tidal analysis uses only the 37 of them that have the most significant effects.) The most notable one is the lunar semidiurnal, observable in areas that have two high tides and two low tides each day, due to the effects of the moon. The period of a lunar semidiurnal is 12 hours and 25 minutes—half of a lunar day, which lasts 24 hours and 50 minutes.

As Laplace had suggested in 1775, each tidal constituent can be represented as a repeating cosine curve, but those curves are specific to a location and can be calculated only through the collection of tidal data. Luckily for Thomson, many ports had been logging tides for decades. For places that did not have complete logs, Thomson designed both an improved tide gauge and a tidal harmonic analyzer.

On Thomson’s tide-predicting machine, each of 10 components was associated with a specific tidal constituent and had its own gearing to set the amplitude. The components were geared together so that their periods were proportional to the periods of the tidal constituents. A single crank turned all of the gears simultaneously, having the effect of summing each of the cosine curves. As the user turned the crank, an ink pen traced the resulting complex curve on a moving roll of paper. The device marked each hour with a small horizontal mark, making a deeper notch each day at noon. Turning the wheel rapidly allowed the user to run a year’s worth of tide readings in about 4 hours.
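
In software terms, the machine was mechanically evaluating a sum of cosines. A minimal numerical sketch of the same idea follows, in Python; the amplitudes and phases are invented stand-ins for the constants that a real port’s tidal records would supply.

import numpy as np

# Hypothetical harmonic constituents for one port: (amplitude in meters,
# period in hours, phase in radians). Real values come from local tide records.
constituents = [
    (1.20, 12.42, 0.3),   # M2, the lunar semidiurnal constituent
    (0.50, 12.00, 1.1),   # S2, the solar semidiurnal constituent
    (0.30, 25.82, 2.0),   # O1, a lunar diurnal constituent
]

def tide_height(t_hours, mean_level=0.0):
    """Sum the cosine curves, as Thomson's machine did with gears and a crank."""
    height = mean_level
    for amplitude, period, phase in constituents:
        height += amplitude * np.cos(2 * np.pi * t_hours / period + phase)
    return height

# Predict one day of tide heights at hourly intervals.
print([round(float(tide_height(t)), 2) for t in range(24)])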

Although Thomson is credited with designing the machine, in his paper “The Tide Gauge, Tidal Harmonic Analyser, and Tide Predicter” (published in Minutes of the Proceedings of the Institution of Civil Engineers), he acknowledges a number of people who helped him solve specific problems. Craftsman Alexander Légé drew up the plan for the screw gearing for the motions of the shafts and constructed the initial prototype machine and subsequent models. Edward Roberts of the Nautical Almanac Office completed the arithmetic to express the ratio of shaft speeds. Thomson’s older brother, James, a professor of civil engineering at Queen’s College Belfast, designed the disk-globe-and-cylinder integrator that was used for the tidal harmonic analyzer. Thomson’s generous acknowledgments are a reminder that the work of engineers is almost always a team effort.

Like Thomson’s tide-prediction machine, these two devices, developed at the U.S. Coast and Geodetic Survey, also looked at tidal harmonic oscillations. William Ferrel’s machine [left] used 19 tidal constituents, while the later machine by Rollin A. Harris and E.G. Fischer [right] relied on 37 constituents. U.S. Coast and Geodetic Survey/NOAA

As with many inventions, the tide predictor was simultaneously and independently developed elsewhere and continued to be improved by others, as did the science of tide prediction. In 1874 in the United States, William Ferrel, a mathematician with the Coast and Geodetic Survey, developed a similar harmonic analysis and prediction device that used 19 harmonic constituents. George Darwin, second son of the famous naturalist, modified and improved the harmonic analysis and published several articles on tides throughout the 1880s. Oceanographer Rollin A. Harris wrote several editions of the Manual of Tides for the Coast and Geodetic Survey from 1897 to 1907, and in 1910 he developed, with E.G. Fischer, a tide-predicting machine that used 37 constituents. In the 1920s, Arthur Doodson of the Tidal Institute of the University of Liverpool, in England, and Paul Schureman of the Coast and Geodetic Survey further refined techniques for harmonic analysis and prediction that served for decades. Because of the complexity of the math involved, many of these old brass machines remained in use into the 1950s, when electronic computers finally took over the work of predicting tides.

What Else Did Lord Kelvin Invent?

As regular readers of this column know, I always feature a museum object from the history of computer or electrical engineering and then spin out a story. When I started scouring museum collections for a suitable artifact for Thomson, I was almost paralyzed by the plethora of choices.

I considered Thomson’s double-curb transmitter, which was designed for use with the 1858 transatlantic cable to speed up telegraph signals. Thomson had sailed on the HMS Agamemnon in 1857 on its failed mission to lay a transatlantic cable and was instrumental to the team that finally succeeded.

Thomson invented the double-curb transmitter to speed up signals in transatlantic cables. Science Museum Group

I also thought about featuring one of his quadrant electrometers, which measured electrical charge. Indeed, Thomson introduced a number of instruments for measuring electricity, and a good part of his legacy is his work on the precise specifications of electrical units.

But I chose to highlight Thomson’s tide-predicting machine for a number of reasons: Thomson had a lifelong love of seafaring and made many contributions to marine technology that are sometimes overshadowed by his other work. And the tide-predicting machine is an example of an early analog computer that was much more useful than Babbage’s difference engine but not nearly as well known. Also, it is simply a beautiful machine. In fact, Thomson seems to have had a knack for designing stunningly gorgeous devices. (The tide-predicting machine at top and many other Kelvin inventions are in the collection of the Science Museum, in London.)

Thomson devised the quadrant electrometer to measure electric charge. Science Museum Group

The tide-predicting machine was not Thomson’s only contribution to maritime technology. He also patented a compass, an astronomical clock, a sounding machine, and a binnacle (a pedestal that houses nautical instruments). With respect to maritime science, Thomson thought and wrote much about the nature of waves. He mathematically explained the v-shaped wake patterns that ships and waterfowl make as they move across a body of water, which is aptly named the Kelvin wake pattern. He also described what is now known as a Kelvin wave, a type of wave that retains its shape as it moves along the shore due to the balancing of the Earth’s spin against a topographic boundary, such as a coastline.

Considering how much Thomson contributed to all things seafaring, it is amazing that these are some of his lesser known achievements. I guess if you have an insatiable curiosity, a robust grasp of mathematics and physics, and a strong desire to tinker with machinery and apply your scientific knowledge to solving practical problems that benefit humankind, you too have the means to come to great conclusions about the natural world. It can’t hurt to have a nice yacht to spend your summers on.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the June 2024 print issue as “Brass for Brains.”

References


Before the days of online databases for their collections, museums would periodically publish catalogs of their collections. In 1877, the South Kensington Museum (originator of the collections of the Science Museum, in London, and now known as the Victoria & Albert Museum) published the third edition of its Catalogue of the Special Loan Collection of Scientific Apparatus, which lists a description of Lord Kelvin’s tide-predicting machine on page 11. That description is much more detailed, albeit more confusing, than its current online one.

In 1881, William Thomson published “The Tide Gauge, Tidal Harmonic Analyser, and Tide Predicter” in the Minutes of the Proceedings of the Institution of Civil Engineers, where he gave detailed information on each of those three devices.

I also relied on a number of publications from the National Oceanic and Atmospheric Administration to help me understand tidal analysis and prediction.


Build Long-Range IoT Applications Fast With Meshtastic

By: Stephen Cass
29 May 2024 at 17:00


Oh me, oh mesh! Many journalists in this business have at least one pet technology that’s never taken off in the way they think it should. Hypersonic passenger planes, deep-sea thermal-energy power plants, chording keyboards—all have their adherents, eager to jump at the chance of covering their infatuation. For me, it’s mesh radio systems, which first captivated me while I was zipping around downtown Las Vegas back in 2004. In that pre-smartphone, practically pre-3G era, I was testing a mesh network deployed by a local startup, downloading files at what was then a mind-boggling rate of 1.5 megabits per second in a moving car. Clearly, mesh and its ad hoc decentralized digital architecture were the future of wireless comms!

Alas, in the two decades since, mesh networking has been slow to displace conventional radio systems. It’s popped up on a small scale in things like the Zigbee wireless protocol for the Internet of Things, and in recent years it’s become common to see Wi-Fi networks extended using mesh-based products such as the Eero. But it’s still a technology that I think has yet to fulfill its potential. So I’ve been excited to see the emergence of the open-source Meshtastic protocol, and the proliferation of maker-friendly hardware around it. I had to try it out myself.

Meshtastic is built on top of the increasingly popular LoRa (long-range) technology, which relies on spread-spectrum methods to send low-power, low-bandwidth signals over distances up to about 16 kilometers (in perfect conditions) using unlicensed radio bands. Precise frequencies vary by region, but they’re in the 863- to 928-megahertz range. You’re not going to use a Meshtastic network for 1.5-Mb/s downloads, or even voice communications. But you can use it to exchange text messages, location data, and the like in the absence of any other communications infrastructure.

The stand-alone communicator [bottom of illustration] can be ordered assembled, or you can build your own from open-source design files. The RAKwireless Meshtastic development board is based around plug-in modules, including the carrier board, an environmental sensor, I/O expander board, radio module, OLED screen, and LoRa and Bluetooth modules. James Provost

To test out text messaging, I bought three HelTXT handheld communicators for US $85 each on Tindie. These are essentially just a battery, keyboard, small screen, ESP32-based microcontroller, and a LoRa radio in a 3D-printed enclosure. My original plan was to coerce a couple of my fellow IEEE Spectrum editors to spread out around Manhattan to get a sense of the range of the handhelds in a dense urban environment. By turning an intermediate device on and off, we would demonstrate the relaying of signals between handhelds that would otherwise be out of range of each other.

This plan was rendered moot within a few minutes of turning the handhelds on. A test “hello” transmission was greeted by an unexpected “hey.” The handhelds’ default setting is to operate on a public channel, and my test message had been received by somebody with a Meshtastic setup about 4 kilometers away, across the East River. Then I noticed my handheld had detected a bunch of other Meshtastic nodes, including one 5 km away at the southern tip of Manhattan. Clearly, range was not going to be an issue, even with a forest of skyscrapers blocking the horizon. Indeed, given the evident popularity of Meshtastic, it was going to be impossible to test the communicators in isolation! (Two Spectrum editors live in Minnesota, so I hope to persuade them to try the range tests with fewer Meshtastic users per square kilometer.)

I turned to my next test idea—exchanging real-time data and commands via the network. I bought a $25 WisBlock Meshtastic starter kit from RAKwireless, which marries a LoRa radio/microcontroller and an expansion board. This board can accommodate a selection of cleverly designed and inexpensive plug-in hardware modules, including sensors and displays. The radio has both LoRa and Bluetooth antennas, and there’s a nice smartphone app that uses the Bluetooth connection to relay text messages through the radio and configure many settings. You can also configure the radios via a USB cable and a Python command-line-interface program.

In addition to basic things like establishing private encrypted channels, you can enable a number of software modules in the firmware. These modules are designed to accomplish common tasks, such as periodically reading and transmitting data from an attached environmental sensor plug-in. Probably the most useful software module is the serial module, which lets the Meshtastic hardware act as a gateway between the radio network and a second microcontroller running your own custom IoT application, communicating via a two- or three-wire connection.

The Meshtastic protocol has seen significant evolution. In the initial system, any node that heard a broadcast would rebroadcast it, leading to local congestion [top row]. But now, signal strength is used as a proxy for distance, with more-distant nodes broadcasting first. Nodes that hear a broadcast twice will not rebroadcast it, reducing congestion [bottom row].
James Provost

For my demo, I wired up a button and an LED to an Adafruit Grand Central board running CircuitPython. (I chose this board because its 3.3-volt levels are compatible with the RAKwireless hardware.) I programmed the Grand Central to send an ASCII-encoded message to the RAKwireless radio over a serial connection when I pressed the button, and to illuminate the LED if it received an ASCII string containing the word “btn.”
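
The CircuitPython program for that demo needs only a few lines. Here is a simplified sketch of the idea; the button and LED pins (D2 and D3 below) and the baud rate are placeholders that have to match your own wiring and the serial-module settings on the radio.

```python
# Simplified CircuitPython sketch for the button/LED demo. Pins and baud rate are placeholders.
import board
import busio
import digitalio

# UART wired to the radio's serial-module pins via the I/O expander.
uart = busio.UART(board.TX, board.RX, baudrate=38400, timeout=0.1)

button = digitalio.DigitalInOut(board.D2)
button.switch_to_input(pull=digitalio.Pull.UP)  # pressing the button pulls the pin low

led = digitalio.DigitalInOut(board.D3)
led.switch_to_output()

was_pressed = False
while True:
    pressed = not button.value
    if pressed and not was_pressed:
        uart.write(b"button down\n")  # shows up as a text message on the mesh
    was_pressed = pressed

    data = uart.read(64)  # returns None if nothing has arrived
    if data and b"btn" in data:
        led.value = True  # light the LED when a message containing "btn" arrives
```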

On the radio side, I used a plug-in I/O expander to connect the serial transmit and receive wires. The tricky part was mapping the pin names as labeled on the adapter to the corresponding microcontroller pins. You need to know the microcontroller pins when configuring the serial module’s receive and transmit pins, because the module doesn’t know how the adapter is wired. After some paging through the documentation, I eventually found the mapping.

I pressed the button connected to my Grand Central microcontroller, and “button down” instantly popped up on my handheld communicators. Then I sent “btn,” and the LED lit up. Success! With that proof of concept done, pretty much anything else is doable as well.

Will makers building applications on top of Meshtastic lead to the mesh renaissance I’ve been waiting for? With more hands on deck, I hope to see some surprising uses emerge that will make the case for mesh better than any starry-eyed argument from me.

How to EMP-Proof a Building

Od: Emily Waltz
25. Květen 2024 v 15:00


This year, the sun will reach solar maximum, a period of peak magnetic activity that occurs approximately once every 11 years. That means more sunspots and more frequent intense solar storms. Here on Earth, these result in beautiful auroral activity, but also geomagnetic storms and the threat of electromagnetic pulses (EMPs), which can bring widespread damage to electronic equipment and communications systems.

Yilu Liu


Yilu Liu is a Governor’s Chair/Professor at the University of Tennessee, in Knoxville, and Oak Ridge National Laboratory.

And the sun isn’t the only source of EMPs. Human-made EMP generators mounted on trucks or aircraft can be used as tactical weapons to knock out drones, satellites, and infrastructure. More seriously, a nuclear weapon detonated at a high altitude could, among its more catastrophic effects, generate a wide-ranging EMP blast. IEEE Spectrum spoke with Yilu Liu, who has been researching EMPs at Oak Ridge National Laboratory, in Tennessee, about the potential effects of the phenomenon on power grids and other electronics.

What are the differences between various kinds of EMPs?

Yilu Liu: A nuclear explosion at an altitude higher than 30 kilometers would generate an EMP with a much broader spectrum than one from a ground-level weapon or a geomagnetic storm, and it would arrive in three phases. First comes E1, a powerful pulse that brings very fast high-frequency waves. The second phase, E2, produces current similar to that of a lightning strike. The third phase, E3, brings a slow, varying waveform, kind of like direct current [DC], that can last several minutes. A ground-level electromagnetic weapon would probably be designed for emitting high-frequency waves similar to those produced by an E1. Solar magnetic disturbances produce a slow, varying waveform similar to that of E3.

How do EMPs damage power grids and electronic equipment?

Liu: Phase E1 induces current in conductors that travels to sensitive electronic circuits, destroying them or causing malfunctions. We don’t worry about E2 much because it’s like lightning, and grids are protected against that. Phase E3 and solar magnetic EMPs inject a foreign, DC-like current into transmission lines, which saturates transformers, causing a lot of high-frequency currents that have led to blackouts.

How do you study the effects of an EMP without generating one?

Liu: We measured the propagation into a building of low-level electromagnetic waves from broadcast radio. We wanted to know if physical structures, like buildings, could act as a filter, so we took measurements of radio signals both inside and outside a hydropower station and other buildings to figure out how much gets inside. Our computer models then amplified the measurements to simulate how an EMP would affect equipment.

What did you learn about protecting buildings from damage by EMPs?

Liu: When constructing buildings, definitely use rebar in your concrete. It’s very effective as a shield against electromagnetic waves. Large windows are entry points, so don’t put unshielded control circuits near them. And if there are cables coming into the building carrying power or communication, make sure they are well-shielded; otherwise, they will act like antennas.

Have solar EMPs caused damage in the past?

Liu: The most destructive recent occurrence was in Quebec in 1989, which resulted in a blackout. Once a transformer is saturated, the current flowing into the grid is no longer just 60 hertz but multiples of 60 Hz, and it trips the capacitors, and then the voltage collapses and the grid experiences an outage. The industry is better prepared now. But you never know if the next solar storm will surpass those of the past.

This article appears in the June 2024 issue as “5 Questions for Yilu Liu.”


Move Over, Tractor—the Farmer Wants a Crop-Spraying Drone

Od: Edd Gent
22. Květen 2024 v 17:00


Arthur Erickson discovered drones during his first year at college studying aerospace engineering. He immediately thought the sky was the limit for how the machines could be used, but it took years of hard work and some nimble decisions to turn that enthusiasm into a successful startup.

Today, Erickson is the CEO of Houston-based Hylio, a company that builds crop-spraying drones for farmers. Launched in 2015, the company has its own factory and employs more than 40 people.

Arthur Erickson


Occupation:

Aerospace engineer and founder, Hylio

Location:

Houston

Education:

Bachelor’s degree in aerospace, specializing in aeronautics, from the University of Texas at Austin

Erickson founded Hylio with classmates while they were attending the University of Texas at Austin. They were eager to quit college and launch their business, which he admits was a little presumptuous.

“We were like, ‘Screw all the school stuff—drones are the future,’” Erickson says. “I already thought I had all the requisite technical skills and had learned enough after six months of school, which obviously was arrogant.”

His parents convinced him to finish college, but Erickson and the other cofounders spent all their spare time building a multipurpose drone from off-the-shelf components and parts they made using their university’s 3D printers and laser cutters.

By the time he graduated in 2017 with a bachelor’s degree in aerospace, specializing in aeronautics, the group’s prototype was complete, and they began hunting for customers. The next three years were a wild ride of testing their drones in Costa Rica and other countries across Central America.

A grocery delivery service

A promotional video about the company that Erickson posted on Instagram led to the first customer, the now-defunct Costa Rican food and grocery delivery startup GoPato. The company wanted to use the drones to make deliveries in the capital, San José, but rather than purchase the machines, GoPato offered to pay for the founders’ meals and lodging and give them a percentage of delivery fees collected.

For the next nine months, Hylio’s team spent their days sending their drones on deliveries and their nights troubleshooting problems in a makeshift workshop in their shared living room.

“We had a lot of sleepless nights,” Erickson says. “It was a trial by fire, and we learned a lot.”

One lesson was the need to build in redundant pieces of key hardware, particularly the GPS unit. “When you have a drone crash in the middle of a Costa Rican suburb, the importance of redundancy really hits home,” Erickson says.

“Drones are great for just learning, iterating, crashing things, and then rebuilding them.”

The small cut of delivery fees Hylio received wasn’t covering costs, Erickson says, so eventually the founders parted ways with GoPato. Meanwhile, they had been looking for new business opportunities in Costa Rica. They learned from local farmers that the terrain was too rugged for tractors, so most sprayed crops by hand. This was both grueling and hazardous because it brought the farmers into close proximity to the pesticides.

The Hylio team realized its drones could do this type of work faster and more safely. They designed a spray system and made some software tweaks, and by 2018 the company began offering crop-spraying services, Erickson says. The company expanded its business to El Salvador, Guatemala, and Honduras, starting with just a pair of drones but eventually operating three spraying teams of four drones each.

The work was tough, Erickson says, but the experience helped the team refine their technology, working out which sensors operated best in the alternately dusty and moist conditions found on farms. Even more important, by the end of 2019 they were finally turning a profit.

Drones are cheaper than tractors

In hindsight, agriculture was an obvious market, Erickson says, even in the United States, where spraying with herbicides, pesticides, and fertilizers is typically done using large tractors. These tractors can cost up to half a million dollars to purchase and about US $7 a hectare to operate.

A pair of Hylio’s drones costs a fifth of that, Erickson says, and operating them costs about a quarter as much. The company’s drones also fly autonomously; an operator simply marks GPS waypoints on a map to tell the drone where to spray and then sits back and lets it do the job. In this way, one person can oversee multiple drones working at once, covering more fields than a single tractor could.

Arthur Erickson inspects the company’s largest spray drone, the AG-272. It can cover thousands of hectares per day.
Hylio

Convincing farmers to use drones instead of tractors was tough, Erickson says. Farmers tend to be conservative and are wary of technology companies that promise too much.

“Farmers are used to people coming around every few years with some newfangled idea, like a laser that’s going to kill all their weeds or some miracle chemical,” he says.

In 2020, Hylio opened a factory in Houston and started selling drones to American farmers. The first time Hylio exhibited its machines at an agricultural trade show, Erickson says, a customer purchased one on the spot.

“It was pretty exciting,” he says. “It was a really good feeling to find out that our product was polished enough, and the pitch was attractive enough, to immediately get customers.”

Today, selling farmers on the benefits of drones is a big part of Erickson’s job. But he’s still involved in product development, and his daily meetings with the sales team have become an invaluable source of customer feedback. “They inform a lot of the features that we add to the products,” he says.

He’s currently leading development of a new type of drone—a scout—designed to quickly inspect fields for pest infestations or poor growth or to assess crop yields. But these days his job is more about managing his team of engineers than about doing hands-on engineering himself. “I’m more of a translator between the engineers and the market needs,” he says.

Focus on users’ needs

Erickson advises other founders of startups not to get too caught up in the excitement of building cutting-edge technology, because you can lose sight of what the user actually needs.

“I’ve become a big proponent of not trying to outsmart the customers,” he says. “They tell us what their pain points are and what they want to see in the product. Don’t overengineer it. Always check with the end users that what you’re building is going to be useful.”

Working with drones forces you to become a generalist, Erickson says. You need a basic understanding of structural mechanics and aerodynamics to build something airworthy. But you also need to be comfortable working with sensors, communications systems, and power electronics, not to mention the software used to control and navigate the vehicles.

Erickson advises students who want to get into the field to take courses in mechatronics, which provide a good blend of mechanical and electrical engineering. Deep knowledge of the individual parts is generally not as important as understanding how to fit all the pieces together to create a system that works well as a whole.

And if you’re a tinkerer like he is, Erickson says, there are few better ways to hone your engineering skills than building a drone. “It’s a cheap, fast way to get something up in the air,” he says. “They’re great for just learning, iterating, crashing things, and then rebuilding them.”

This article appears in the June 2024 print issue as “Careers: Arthur Erickson.”

A Brief History of the World’s First Planetarium

Od: Allison Marsh
1. Květen 2024 v 17:00


In 1912, Oskar von Miller, an electrical engineer and founder of the Deutsches Museum, had an idea: Could you project an artificial starry sky onto a dome, as a way of demonstrating astronomical principles to the public?

It was such a novel concept that when von Miller approached the Carl Zeiss company in Jena, Germany, to manufacture such a projector, they initially rebuffed him. Eventually, they agreed, and under the guidance of lead engineer Walther Bauersfeld, Zeiss created something amazing.

The use of models to show the movements of the planets and stars goes back centuries, starting with mechanical orreries that used clockwork mechanisms to depict our solar system. A modern upgrade was Clair Omar Musser’s desktop electric orrery, which he designed for the Seattle World’s Fair in 1962.

The projector that Zeiss planned for the Deutsches Museum would be far more elaborate. For starters, there would be two planetariums. One would showcase the Copernican, or heliocentric, sky, displaying the stars and planets as they revolved around the sun. The other would show the Ptolemaic, or geocentric, sky, with the viewer fully immersed in the view, as if standing on the surface of the Earth, seemingly at the center of the universe.

The task of realizing those ideas fell to Bauersfeld, a mechanical engineer by training and a managing director at Zeiss.

Zeiss engineer Walther Bauersfeld worked out the electromechanical details of the planetarium. In this May 1920 entry from his lab notebook [right], he sketched the two-axis system for showing the daily and annual motions of the stars.
ZEISS Archive

At first, Bauersfeld focused on projecting just the sun, moon, and planets of our solar system. At the suggestion of his boss, Rudolf Straubel, he added stars. World War I interrupted the work, but by 1920 Bauersfeld was back at it. One entry in May 1920 in Bauersfeld’s meticulous lab notebook showed the earliest depiction of the two-axis design that allowed for the display of the daily as well as the annual motions of the stars. (The notebook is preserved in the Zeiss Archive.)

The planetarium projector was in fact a concatenation of many smaller projectors and a host of gears. According to the Zeiss Archive, a large sphere held all of the projectors for the fixed stars as well as a “planet cage” that held projectors for the sun, the moon, and the planets Mercury, Venus, Mars, Jupiter, and Saturn. The fixed-star sphere was positioned so that it projected outward from the exact center of the dome. The planetarium also had projectors for the Milky Way and the names of major constellations.

The projectors within the planet cage were organized in tiers with complex gearing that allowed a motorized drive to move them around one axis to simulate the annual rotations of these celestial objects against the backdrop of the stars. The entire projector could also rotate around a second axis, simulating the Earth’s polar axis, to show the rising and setting of the sun, moon, and planets over the horizon.

The Zeiss planetarium projected onto a spherical surface, which consisted of a geodesic steel lattice overlaid with concrete.
Zeiss Archive

Bauersfeld also contributed to the design of the surrounding projection dome, which achieved its exactly spherical surface by way of a geodesic network of steel rods covered by a thin layer of concrete.

Planetariums catch on worldwide

The first demonstration of what became known as the Zeiss Model I projector took place on 21 October 1923 before the Deutsches Museum committee in their not-yet-completed building, in Munich. “This planetarium is a marvel,” von Miller declared in an administrative report.

In 1924, public demonstrations of the Zeiss planetarium took place on the roof of the company’s factory in Jena, Germany.
ZEISS Archive

The projector then returned north to Jena for further adjustments and testing. The company also began offering demonstrations of the projector in a makeshift dome on the roof of its factory. From July to September 1924, more than 30,000 visitors experienced the Zeisshimmel (Zeiss sky) this way. These demonstrations became informal visitor-experience studies and allowed Zeiss and the museum to make refinements and improvements.

On 7 May 1925, the world’s first projection planetarium officially opened to the public at the Deutsches Museum. The Zeiss Model I displayed 4,500 stars, the band of the Milky Way, the sun, moon, Mercury, Venus, Mars, Jupiter, and Saturn. Gears and motors moved the projector to replicate the changes in the sky as Earth rotated on its axis and revolved around the sun. Visitors viewed this simulation of the night sky from the latitude of Munich and in the comfort of a climate-controlled building, although at first chairs were not provided. (I get a crick in the neck just thinking about it.) The projector was bolted to the floor, but later versions were mounted on rails to move them back and forth. A presenter operated the machine and lectured on astronomical topics, pointing out constellations and the orbits of the planets.

Word of the Zeiss planetarium spread quickly, through postcards and images.
ZEISS Archive

The planetarium’s influence quickly extended far beyond Germany, as museums and schools around the world incorporated the technology into immersive experiences for science education and public outreach. Each new planetarium was greeted with curiosity and excitement. Postcards and images of planetariums (both the distinctive domed buildings and the complicated machines) circulated widely.

In 1926, Zeiss opened its own planetarium in Jena based on Bauersfeld’s specifications. The first city outside of Germany to acquire a Zeiss planetarium was Vienna. It opened in a temporary structure on 7 May 1927 and in a permanent structure four years later, only to be destroyed during World War II.

The Zeiss planetarium in Rome, which opened in 1928, projected the stars onto the domed vault of the 3rd-century Aula Ottagona, part of the ancient Baths of Diocletian.

The first planetarium in the western hemisphere opened in Chicago in May 1930. Philanthropist Max Adler, a former executive at Sears, contributed funds to the building that now bears his name. He called it a “classroom under the heavens.”

Japan’s first planetarium, a Zeiss Model II, opened in Osaka in 1937 at the Osaka City Electricity Science Museum. As its name suggests, the museum showcased exhibits on electricity, funded by the municipal power company. The city council had to be convinced of the educational value of the planetarium. But the mayor and other enthusiasts supported it. The planetarium operated for 50 years.

Who doesn’t love a planetarium?

After World War II and the division of Germany, the Zeiss company also split in two, with operations continuing at Oberkochen in the west and Jena in the east. Both branches continued to develop the planetarium through the Zeiss Model VI before shifting the nomenclature to more exotic names, such as the Spacemaster, Skymaster, and Cosmorama.

The two large spheres of the Zeiss Model II, introduced in 1926, displayed the skies of the northern and southern hemispheres, respectively. Each sphere contained a number of smaller projectors.
ZEISS Archive

Over the years, refinements included increased precision, the addition of more stars, automatic controls that allowed the programming of complete shows, and a shift to fiber optics and LED lighting. Zeiss still produces planetariums in a variety of configurations for different size domes.

Today more than 4,000 planetariums are in operation globally. A planetarium is often the first place where children connect what they see in the night sky to a broader science and an understanding of the universe. My hometown of Richmond, Va., opened its first planetarium in April 1983 at the Science Museum of Virginia. That was a bit late in the big scheme of things, but just in time to wow me as a kid. I still remember the first show I saw, narrated by an animatronic Mark Twain with a focus on the 1986 visit of Halley’s Comet.

By then the museum also had a giant OmniMax screen that let me soar over the Grand Canyon, watch beavers transform the landscape, and swim with whale sharks, all from the comfort of my reclining seat. No wonder the museum is where I got my start as a public historian of science and technology. I began volunteering there at age 14 and have never looked back.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the May 2024 print issue as “A Planetarium Is Born.”

References


In 2023, the Deutsches Museum celebrated the centennial of its planetarium, with the exhibition 100 Years of the Planetarium, which included artifacts such as astrolabes and armillary spheres as well as a star show in a specially built planetarium dome.

I am always appreciative of corporations that recognize their own history and maintain robust archives. Zeiss has a wonderful collection of historic photos online with detailed descriptions.

I also consulted The Zeiss Works and the Carl Zeiss Foundation in Jena by Felix Auerbach, although I read an English translation that was in the Robert B. Ariail Collection of Historical Astronomy, part of the University of South Carolina’s special collections.



An Engineer Who Keeps Meta’s AI Infrastructure Humming

Od: Edd Gent
29. Duben 2024 v 17:00


Making breakthroughs in artificial intelligence these days requires huge amounts of computing power. In January, Meta CEO Mark Zuckerberg announced that by the end of this year, the company will have installed 350,000 Nvidia GPUs—the specialized computer chips used to train AI models—to power its AI research.

As a data-center network engineer with Meta’s network infrastructure team, Susana Contrera is playing a leading role in this unprecedented technology rollout. Her job is about “bringing designs to life,” she says. Contrera and her colleagues take high-level plans for the company’s AI infrastructure and turn those blueprints into reality by working out how to wire, power, cool, and house the GPUs in the company’s data centers.

Susana Contrera


Employer:

Meta

Occupation:

Data-center network engineer

Education:

Bachelor’s degree in telecommunications engineering, Andrés Bello Catholic University in Caracas, Venezuela

Contrera, who now works remotely from Florida, has been at Meta since 2013, spending most of that time helping to build the computer systems that support its social media networks, including Facebook and Instagram. But she says that AI infrastructure has become a growing priority, particularly in the past two years, and represents an entirely new challenge. Not only is Meta building some of the world’s first AI supercomputers, it is racing against other companies like Google and OpenAI to be the first to make breakthroughs.

“We are sitting right at the forefront of the technology,” Contrera says. “It’s super challenging, but it’s also super interesting, because you see all these people pushing the boundaries of what we thought we could do.”

Cisco Certification Opened Doors

Growing up in Caracas, Venezuela, Contrera says her first introduction to technology came from playing video games with her older brother. But she decided to pursue a career in engineering because of her parents, who were small-business owners.

“They were always telling me how technology was going to be a game changer in the future, and how a career in engineering could open many doors,” she says.

She enrolled at Andrés Bello Catholic University in Caracas in 2001 to study telecommunications engineering. In her final year, she signed up for the training and certification program to become a Cisco Certified Network Associate. The program covered topics such as the fundamentals of networking and security, IP services, and automation and programmability.

The certificate opened the door to her first job in 2006—managing the computer network of a business-process outsourcing company, Atento, in Caracas.

“Getting your hands dirty can give you a lot of perspective.”

“It was a very large enterprise network that had just the right amount of complexity for a very small team,” she says. “That gave me a lot of freedom to put my knowledge into practice.”

At the time, Venezuela was going through a period of political unrest. Contrera says she didn’t see a future for herself in the country, so she decided to leave for Europe.

She enrolled in a master’s degree program in project management in 2009 at Spain’s Pontifical University of Salamanca, continuing to collect additional certifications through Cisco in her free time. In 2010, partway through the program, she left for a job as a support engineer at the Madrid-based law firm Ecija, which provides legal advice to technology, media, and telecommunications companies. After a stint as a network engineer at Amazon’s facility in Dublin from 2011 to 2013, she joined Meta, and “the rest is history,” she says.

Starting From the Edge Network

Contrera first joined Meta as a network deployment engineer, helping build the company’s “edge” network. In this type of network design, user requests go out to small edge servers dotted around the world instead of to Meta’s main data centers. Edge systems can deal with requests faster and reduce the load on the company’s main computers.

After several years traveling around Europe setting up this infrastructure, she took a managerial position in 2016. But after a couple of years she decided to return to a hands-on role at the company.

“I missed the satisfaction that you get when you’re part of a project, and you can clearly see the impact of solving a complex technical problem,” she says.

Because of the rapid growth of Meta’s services, her work primarily involved scaling up the capacity of its data centers as quickly as possible and boosting the efficiency with which data flowed through the network. But the work she is doing today to build out Meta’s AI infrastructure presents very different challenges, she says.

Designing Data Centers for AI

Training Meta’s largest AI models involves coordinating computation over large numbers of GPUs split into clusters. These clusters are often housed in different facilities, often in distant cities. It’s crucial that messages passing back and forth have very low latency and are lossless—in other words, they move fast and don’t drop any information.

Building data centers that can meet these requirements first involves Meta’s network engineering team deciding what kind of hardware should be used and how it needs to be connected.

“They have to think about how those clusters look from a logical perspective,” Contrera says.

Then Contrera and other members of the network infrastructure team take this plan and figure out how to fit it into Meta’s existing data centers. They consider how much space the hardware needs, how much power and cooling it will require, and how to adapt the communications systems to support the additional data traffic it will generate. Crucially, this AI hardware sits in the same facilities as the rest of Meta’s computing hardware, so the engineers have to make sure it doesn’t take resources away from other important services.

“We help translate these ideas into the real world,” Contrera says. “And we have to make sure they fit not only today, but they also make sense for the long-term plans of how we are scaling our infrastructure.”

Working on a Transformative Technology

Planning for the future is particularly challenging when it comes to AI, Contrera says, because the field is moving so quickly.

“It’s not like there is a road map of how AI is going to look in the next five years,” she says. “So we sometimes have to adapt quickly to changes.”

With today’s heated competition among companies to be the first to make AI advances, there is a lot of pressure to get the AI computing infrastructure up and running. This makes the work much more demanding, she says, but it’s also energizing to see the entire company rallying around this goal.

While she sometimes gets lost in the day-to-day of the job, she loves working on a potentially transformative technology. “It’s pretty exciting to see the possibilities and to know that we are a tiny piece of that big puzzle,” she says.

Hands-on Data Center Experience

For those interested in becoming a network engineer, Contrera says the certification programs run by companies like Cisco are useful. But she says it’s also important not to focus on simply ticking boxes or rushing through courses just to earn credentials. “Take your time to understand the topics because that’s where the value is,” she says.

It’s good to get some experience working in data centers on infrastructure deployment, she says, because “getting your hands dirty can give you a lot of perspective.” And increasingly, coding can be another useful skill to develop to complement more traditional network engineering capabilities.

Mainly, she says, just “enjoy the ride” because networking can be a truly fascinating topic once you delve in. “There’s this orchestra of protocols and different technologies playing together and interacting,” she says. “I think that’s beautiful.”

Electronically Assisted Astronomy on the Cheap

Od: David Schneider
28. Duben 2024 v 17:00


I hate the eye strain that often comes with peering through a telescope at the night sky—I’d rather let a camera capture the scene. But I’m too frugal to sink thousands of dollars into high-quality astrophotography gear. The Goldilocks solution for me is something that goes by the name of electronically assisted astronomy, or EAA.

EAA occupies a middle ground in amateur astronomy: more involved than gazing through binoculars or a telescope, but not as complicated as using specialized cameras, expensive telescopes, and motorized tracking mounts. I set about exploring how far I could get doing EAA on a limited budget.

Electronically-assisted-astronomy photographs captured with my rig: the moon [top], the sun [middle], and the Orion Nebula [bottom].
David Schneider

First, I purchased a used Canon T6 DSLR on eBay. Because it had a damaged LCD viewscreen and came without a lens, it cost just US $100. Next, rather than trying to marry this camera to a telescope, I decided to get a telephoto lens: Back to eBay for a 40-year-old Nikon 500-mm F/8 “mirror” telephoto lens for $125. This lens combines mirrors and lenses to create a folded optical path. So even though the focal length of this telephoto is a whopping 50 centimeters, the lens itself is only about 15 cm long. A $20 adapter makes it work with the Canon.

The Nikon lens lacks a diaphragm to adjust its aperture and hence its depth of field. Its optical geometry makes things that are out of focus resemble doughnuts. And it can’t be autofocused. But these shortcomings aren’t drawbacks for astrophotography. And the lens has the big advantage that it can be focused beyond infinity. This allows you to adjust the focus on distant objects accurately, even if the lens expands and contracts with changing temperatures.

Getting the focus right is one of the bugaboos of using a telephoto lens for astrophotography, because the focus on such lenses is touchy and easily gets knocked off kilter. To avoid that, I built something (based on a design I found in an online astronomy forum) that clamps to the focus ring and allows precise adjustments using a small knob.

My next purchase was a modified gun sight to make it easier to aim the camera. The version I bought (for $30 on Amazon) included an adapter that let me mount it to my camera’s hot shoe. You’ll also need a tripod, but you can purchase an adequate one for less than $30.

Getting the focus right is one of the bugaboos of using a telephoto lens

The only other hardware you need is a laptop. On my Windows machine, I installed four free programs: Canon’s EOS Utility (which allows me to control the camera and download images directly), Canon’s Digital Photo Professional (for managing the camera’s RAW format image files), the GNU Image Manipulation Program (GIMP) photo editor, and a program called Deep Sky Stacker, which lets me combine short-exposure images to enhance the results without having Earth’s rotation ruin things.

It was time to get started. But focusing on astronomical objects is harder than you might think. The obvious strategy is to put the camera in “live view” mode, aim it at Jupiter or a bright star, and then adjust the focus until the object is as small as possible. But it can still be hard to know when you’ve hit the mark. I got a big assist from what’s known as a Bahtinov mask, a screen with angled slats you temporarily stick in front of the lens to create a diffraction pattern that guides focusing.

Stacking software takes a series of images of the sky, compensates for the motion of the stars, and combines the images to simulate long exposures without blurring.

After getting some good shots of the moon, I turned to another easy target: the sun. That required a solar filter, of course. I purchased one for $9, which I cut into a circle and glued to a candy tin from which I had cut out the bottom. My tin is of a size that slips perfectly over my lens. With this filter, I was able to take nice images of sunspots. The challenge again was focusing, which required trial and error, because strategies used for stars and planets don’t work for the sun.

With focusing down, the next hurdle was to image a deep-sky object, or DSO—star clusters, galaxies, and nebulae. To image these dim objects really well requires a tracking mount, which turns the camera so that you can take long exposures without blurring from the motion of the Earth. But I wanted to see what I could do without a tracker.

I first needed to figure out how long of an exposure was possible with my fixed camera. A common rule of thumb is to divide 500 by the focal length of your telescope in millimeters to get the maximum exposure duration in seconds. For my setup, that would be 1 second. A more sophisticated approach, called the NPF rule, factors in additional details regarding your imaging sensor. Using an online NPF-rule calculator gave me a slightly lower number: 0.8 seconds. To be even more conservative, I used 0.6-second exposures.
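
The arithmetic behind both rules is simple enough to check in a few lines of Python. In the sketch below, the NPF formula is the commonly quoted simplified version, and the roughly 4.3-micrometer pixel pitch for the Canon T6 is my own estimate, so treat the results as ballpark figures.

```python
# Rough maximum untracked exposure times for a fixed camera.
focal_length_mm = 500   # Nikon 500-mm mirror lens
f_number = 8            # fixed f/8 aperture
pixel_pitch_um = 4.3    # approximate pixel pitch of the Canon T6 sensor

# "500 rule": divide 500 by the focal length in millimeters.
rule_500 = 500 / focal_length_mm

# Simplified NPF rule: (35 * aperture + 30 * pixel pitch) / focal length.
npf = (35 * f_number + 30 * pixel_pitch_um) / focal_length_mm

print(f"500 rule: {rule_500:.1f} s")  # about 1.0 s
print(f"NPF rule: {npf:.1f} s")       # about 0.8 s
```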

My first DSO target was the Orion Nebula, of which I shot 100 images from my suburban driveway. No doubt, I would have done better from a darker spot. I was mindful, though, to acquire calibration frames—“flats” and “darks” and “bias images”—which are used to compensate for imperfections in the imaging system. Darks and bias images are easy enough to obtain by leaving the lens cap on. Taking flats, however, requires an even, diffuse light source. For that I used a $17 A5-size LED tracing pad placed on a white T-shirt covering the lens.
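
Stacking programs apply those calibration frames automatically, but the underlying arithmetic is straightforward. The sketch below shows how master dark, flat, and bias frames are conventionally combined with a single light frame; it is an illustration of the general technique, not a description of Deep Sky Stacker's internals.

```python
# Illustrative calibration of one light frame using master calibration frames.
# Assumes each master frame is the per-pixel average of its raw frames.
import numpy as np

def calibrate(light, master_dark, master_flat, master_bias):
    light = np.asarray(light, dtype=float)
    corrected = light - master_dark                    # remove thermal signal and readout offset
    flat = np.asarray(master_flat, dtype=float) - master_bias
    flat /= flat.mean()                                # normalize the flat field to a mean of 1.0
    return corrected / flat                            # divide out vignetting and dust shadows
```

Average (or sigma-clip) a stack of frames calibrated this way and the random noise drops, which is what lets a pile of 0.6-second exposures add up to a usable image.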

With all these images in hand, I fired up the Deep Sky Stacker program and put it to work. The resultant stack didn’t look promising, but postprocessing in GIMP turned it into a surprisingly detailed rendering of the Orion Nebula. It doesn’t compare, of course, with what somebody can do with better gear. But it does show the kinds of fascinating images you can generate with some free software, an ordinary DSLR, and a vintage telephoto lens pointed at the right spot.

This article appears in the May 2024 print issue as “Electronically Assisted Astronomy.”

Why One Man Spent 12 Years Fighting Robocalls

Od: Michael Koziol
24. Duben 2024 v 18:00


At some point, our phone habits changed. It used to be that if the phone rang, you answered it. With the advent of caller ID, you’d only pick up if it was someone you recognized. And now, with spoofing and robocalls, it can seem like a gamble to pick up the phone, period. The robocall-blocking service YouMail estimates that there were more than 55 billion robocalls in the United States in 2023. How did robocalls proliferate so much that now they seem to be dominating phone networks? And can any of this be undone? IEEE Spectrum spoke with David Frankel of ZipDX, who’s been fighting robocalls for over a decade, to find out.


David Frankel is the founder of ZipDX, a company that provides audioconferencing solutions. He also created the Rraptor automated robocall surveillance system.

How did you get involved in trying to stop robocalls?

David Frankel: Twelve years ago, I was working in telecommunications and a friend of mine called me about a contest that the Federal Trade Commission (FTC) was starting. They were seeking the public’s help to find solutions to the robocall problem. I spent time and energy putting together a contest entry. I didn’t win, but I became so engrossed in the problem, and like a dog with a bone, I just haven’t let go of it.

How can we successfully combat robocalls?

Frankel: Well, I don’t know the answer, because I don’t feel like we’ve succeeded yet. I’ve been very involved in something called traceback—in fact, it was my FTC contest entry. It’s a semiautomated process where, in fact, with the cooperation of individual phone companies, you go from telco A to B to C to D, until you ultimately get somebody that sent that call. And then you can find the customer who paid them to put this call on the network.

I’ve got a second tool—a robocall surveillance network. We’ve got tens of thousands of telephone numbers that just wait for robocalls. We can correlate that with other data and reveal where these calls are coming from. Ideally, we stop them at the source. It’s a sort of sewage that’s being pumped into the telephone network. We want to go upstream to find the source of the sewage and deal with it there.

Can more regulation help?

Frankel: Well, regulations are really, really tough for a couple of reasons. One is, it’s a bureaucratic, slow-moving process. It’s also a cat-and-mouse game, because, as quick as you start talking about new regulations, people start talking about how to circumvent them.

There’s also this notion of regulatory capture. At the Federal Communications Commission, the loudest voices come from the telecommunications operators. There’s an imbalance in the control that the consumer ultimately has over who gets to invade their telephone versus these other interests.

Is the robocall situation getting better or worse?

Frankel: It’s been fairly steady state. I’m just disappointed that it’s not substantially reduced from where it’s been. We made progress on explicit fraud calls, but we still have too many of these lead-generation calls. We need to get this whacked down by 80 percent. I always think that we’re on the cusp of doing that, that this year is going to be the year. There are people attacking this from a number of different angles. Everybody says there’s no silver bullet, and I believe that, but I hope that we’re about to crest the hill.

Is this a fight that’s ultimately winnable?

Frankel: I think we’ll be able to take back our phone network. I’d love to retire, having something to show for our efforts. I don’t think we’ll get it to zero. But I think that we’ll be able to push the genie a long way back into the bottle. The measure of success is that we all won’t be scared to answer our phone. It’ll be a surprise that it’s a robocall—instead of the expectation that it’s a robocall.

This article appears in the May 2024 issue as “5 Questions for David Frankel.”

Turn a Vintage Hi-Fi Into a Modern Entertainment Center

Od: Stephen Cass
28. Únor 2024 v 20:00


Sometimes extreme procrastination works in your favor. Procrastination certainly played a role in this month’s Hands On, which was 20 years in the making. So, too, did family, and place, and what meaning might be found in bringing silent circuits to life. This then is a story that ends with me watching Interstellar and listening to its soaring soundtrack in glorious high fidelity, but begins with my wife’s childhood in North Carolina.

Regular readers will know that I take particular delight in anything that combines old and new tech. So when my wife and I were newlyweds two decades ago, and my wife’s parents gifted us with an early 1960s General Electric wood-cabinet stereo hi-fi, the wheels started turning in my head. My wife grew up listening to records on this stereo, but now it was 2004, and vinyl was clearly dead and never coming back. Instead, I connected one of our new-fangled iPods via the set of RCA audio inputs at the back (fortunately, these had become standard just a few years before the hi-fi was made). We filled our small New York City apartment with the latest hits from the Black Eyed Peas and Arcade Fire—only to discover that the left-hand stereo channel didn’t work.

Identifying the problem was quick and easy. Peering into the gloom of the cabinet’s sprawling circuitry, I spotted the one vacuum tube not emitting a tell-tale orange glow. But fixing the problem was neither quick nor easy. The years ticked on, and we moved from apartment to apartment, and city to city, taking the hi-fi with us. But it sat silent, a convenient place to display photos and stash bottles of liquor. I made fitful attempts to find a replacement tube, searching eBay and scouring the formidable MIT Radio Society Swapfests.

Perfectionism was a big part of my procrastination, as I hoped to find a matched pair—that is, two tubes that came from the same production batch. Prized among vintage audio enthusiasts, a matched pair would ensure that manufacturing variations didn’t leave one stereo channel with a different frequency response than the other. But I never found a pair, at least not at a price I was willing to pay. About two years ago, I finally gave in and spent US $55 for a single replacement GE 7189A tube from KCA NOS Tubes.

Replacing a blown tube [left] was relatively straightforward, as the stereo was designed to permit their replacement, and many tubes are still easily obtainable online. However, the original wax-paper capacitor used to filter noise from the AC power supply had failed, so I had to splice in a custom-made substitute.
James Provost

I popped in the 7189A, tuned in a radio station, and music boomed from both speakers. Yay! Yay? No yay. There was sound, all right, but it was bad sound. A harsh hum bullied its way through the music, the dread drone of 60 hertz. I was hearing the AC power frequency.

This would be a much more involved job than simply pulling a blown tube out of a socket and pushing a new one in. Flashlight in hand, I surveyed the hi-fi’s circuits. It has two chassis, one for the radio tuner and controls and one for the stereo amplifier. As I pondered what it would take to extract them for diagnosis and repair, my flashlight fell on a mysterious behemoth: a tube about 10 centimeters long and 3 cm in diameter that was screwed to a side panel and connected to the amplifier board by four leads. What the heck was this?

It was a wax-paper multicapacitor, a very obsolete component combining a 70-microfarad capacitor rated for 400 volts, a 100-μF capacitor also rated at 400 V, and a 70-μF capacitor rated at 25 V, with a common negative terminal. Such a device was typically used to filter noise from AC supplies, and was prone to long-term failure. I’d found my culprit.

When it comes to dealing with voltages higher than 24 V, I am a complete wuss, but this is where my years of procrastination paid off. After a few more months of nervous delay, I began researching the problem and discovered that rather than having to cobble together a homebrew multicapacitor, I could turn to the pros!

Thanks to the stability of the analog RCA audio connector standard since the 1950s, it was possible to use modern entertainment technology with the vintage hi-fi’s warm-sounding speakers. The flatscreen accepts digital HDMI signals and outputs audio via an optical-fiber connection. A digital-to-analog converter then creates left- and right-channel audio signals to feed into the hi-fi. A selector on the hi-fi’s front panel (originally reserved for connecting a tape player) pipes the audio through the speakers.
James Provost

In the last few years, a number of outfits have cropped up to assist people in the repair of vintage radios, supplying original service manuals and providing drop-in substitutes for components you can’t buy any more. I sent off specs to Hayseed Hamfest, which specializes in replacement capacitors. For $43, I soon had a bespoke replacement in a metal can about half the length of my waxy original. I simply had to excise the old monster and wire in its successor.

And then things stalled again—until my father passed away last November. He had spent decades working as an engineer at the Irish national broadcaster, RTÉ, and in his youth he had helped my grandfather in their radio and television rental shop in Dublin. After his funeral, I glared at the stereo, as it awaited a repair that my father could have once done practically blindfolded.

I took off the back and cut the monster out. Splicing in its replacement without removing the amplifier chassis was tricky, but it turned out to be a perfect application for some Kuject connectors I had knocking around. These connectors are short lengths of transparent heat-shrink tubing with some solder inside. Rather than guddling around inside the confined space with a soldering iron, I was able to twist the ends of my leads together, slide the Kuject connector into place, and give it a short blast with a heat gun. Before I knew it, I was done.

And time had helped in other ways too: Instead of just hooking up an iPod, now I could use HDMI cables to connect an Apple TV and a Blu-ray player to a television—which, thanks to modern flat-screen technology, could now be perched on top of the cabinet—and then feed the television’s optical audio output into a converter box and then on into the stereo’s RCA inputs. I tested everything by listening to the elegiac organ rolls of Interstellar swelling out into our apartment.

And there the hi-fi stands now, more than just a photo stand and more than a way to watch “Stranger Things” in style. It’s a reminder of my wife’s upbringing and family, our recollections of the places we’ve lived our lives together, and there, in the currents and voltages being shunted around the circuitry, the echoes of my family, too, and the memory of the hands that taught me my first lessons in electronics.

What is CMOS 2.0?

Od: Samuel K. Moore
26. Únor 2024 v 17:00


CMOS, the silicon logic technology behind decades and decades of smaller transistors and faster computers, is entering a new phase. CMOS uses two types of transistors in pairs to limit a circuit’s power consumption. In this new phase, “CMOS 2.0,” that part’s not going to change, but how processors and other complex CMOS chips are made will. Julien Ryckaert, vice president of logic technologies at Imec, the Belgium-based nanotechnology research center, told IEEE Spectrum where things are headed.

Julien Ryckaert


Julien Ryckaert is vice president of logic technologies at Imec, in Belgium, where he’s been involved in exploring new technologies for 3D chips, among other topics.

Why is CMOS entering a new phase?

Julien Ryckaert: CMOS was the technology answer to build microprocessors in the 1960s. Making things smaller—transistors and interconnects—to make them better worked for 60, 70 years. But that has started to break down.

Why has CMOS scaling been breaking down?

Ryckaert: Over the years, people have made system-on-chips (SoCs)—such as CPUs and GPUs—more and more complex. That is, they have integrated more and more operations onto the same silicon die. That makes sense, because it is so much more efficient to move data on a silicon die than to move it from chip to chip in a computer.

For a long time, the scaling down of CMOS transistors and interconnects made all those operations work better. But now, it’s starting to be difficult to build the whole SoC, to make all of it better by just scaling the device and the interconnect. For example, SRAM [the system’s cache memory] no longer scales as well as logic.

What’s the solution?

Ryckaert: Seeing that something different needs to happen, we at Imec asked: Why do we scale? At the end of the day, Moore’s law is not about delivering smaller transistors and interconnects, it’s about achieving more functionality per unit area.

So what you are starting to see is breaking out certain functions, such as logic and SRAM, building them on separate chiplets using technologies that give each the best advantage, and then reintegrating them using advanced 3D packaging technologies. You can connect two functions that are built on the different substrates and achieve an efficiency in communication between those two functions that is competitive with how efficient they were when the two functions were on the same substrate. This is an evolution to what we call smart disintegration, or system technology co-optimization.

So is that CMOS 2.0?

Ryckaert: What we’re doing in CMOS 2.0 is pushing that idea further, with much finer-grained disintegration of functions and stacking of many more dies. A first sign of CMOS 2.0 is the imminent arrival of backside-power-delivery networks. On chips today, all interconnects—both those carrying data and those delivering power—are on the front side of the silicon [above the transistors]. Those two types of interconnect have different functions and different requirements, but they have had to exist in a compromise until now. Backside power moves the power-delivery interconnects to beneath the silicon, essentially turning the die into an active transistor layer which is sandwiched between two interconnect stacks, each stack having a different functionality.

Will transistors and interconnects still have to keep scaling in CMOS 2.0?

Ryckaert: Yes, because somewhere in that stack, you will still have a layer that needs more transistors per unit area. But now, because you have removed all the other constraints it once had, you are letting that layer scale nicely with the technology that is perfectly suited for it. I see fascinating times ahead.

This article appears in the March print issue as “5 Questions for Julien Ryckaert.”


The Scoop on Keeping an Ice Cream Factory Cool

From: Edd Gent
25 February 2024 at 17:00


Working in an ice cream factory is a dream for anyone who enjoys the frozen dessert. For control systems engineer Patryk Borkowski, a job at the biggest ice cream company in the world is also a great way to put his automation expertise to use.

Patryk Borkowski


Employer: Unilever, Colworth Science Park, in Sharnbrook, England

Occupation: Control systems engineer

Education: Bachelor’s degree in automation and robotics from the West Pomeranian University of Technology in Szczecin, Poland

Borkowski works at the Advanced Prototype and Engineering Centre of the multinational consumer goods company Unilever. Unilever’s corporate umbrella covers such ice cream brands as Ben & Jerry’s, Breyers, Good Humor, Magnum, and Walls.

Borkowski maintains and updates equipment at the innovation center’s pilot plant at Colworth Science Park in Sharnbrook, England. The company’s food scientists and engineers use this small-scale factory to experiment with new ice cream formulations and novel production methods.

The reality of the job might not exactly live up to an ice cream lover’s dream. For safety reasons, eating the product in the plant is prohibited.

“You can’t just put your mouth underneath the nozzle of an ice cream machine and fill your belly,” he says.

For an engineer, though, the complex chemistry and processing required to create ice cream products make for fascinating problem-solving. Much of Borkowski’s work involves improving the environmental impact of ice cream production by cutting waste and reducing the amount of energy needed to keep products frozen.

And he loves working on a product that puts a smile on the faces of customers. “Ice cream is a deeply indulgent and happy product,” he says. “We love working to deliver a superior taste and a superior way to experience ice cream.”

Ice Cream Innovation

Borkowski joined Unilever as a control systems engineer in 2021. While he’s not allowed to discuss many of the details of his research, he says one of the projects he has worked on is a modular manufacturing line that the company uses to develop new kinds of ice cream. The setup allows pieces of equipment such as sauce baths, nitrogen baths for quickly freezing layers, and chocolate deposition systems to be seamlessly switched in and out so that food scientists can experiment and create new products.

Ice cream is a fascinating product to work on for an engineer, Borkowski says, because it’s inherently unstable. “Ice cream doesn’t want to be frozen; it pretty much wants to be melted on the floor,” he says. “We’re trying to bend the chemistry to bind all the ingredients into a semistable mixture that gives you that great taste and feeling on the tongue.”

Making Production More Sustainable

Helping design new products is just one part of Borkowski’s job. Unilever is targeting sustainability across the company, so cutting waste and improving energy efficiency are key. He recently helped develop a testing rig to simulate freezer doors being repeatedly opened and closed in shops. This helped collect temperature data that was used to design new freezers that run at higher temperatures to save electricity.

In 2022, he was temporarily transferred to one of Unilever’s ice cream factories in Hellendoorn, Netherlands, to uncover inefficiencies in the production process. He built a system that collected and collated operational data from all the factory’s machines to identify the causes of stoppages and waste.

“There’s a deep pride in knowing the machines that we’ve programmed make something that people buy and enjoy.”

It wasn’t easy. Some of the machines were older and no longer supported by their manufacturers. Also, they ran legacy code written in Dutch—a language Borkowski doesn’t speak.

Borkowski ended up reverse-engineering the machines to figure out their operating systems, then reprogrammed them to communicate with the new data-collection system. Now the data-collection system can be easily adapted to work at any Unilever factory.

Discovering a Love for Technology

As a child growing up in Stargard, Poland, Borkowski says there was little to indicate that he would become an engineer. At school, he loved writing, drawing, and learning new languages. He imagined himself having a career in the creative industries.

But in the late 1990s, his parents got a second-hand computer and a modem. He quickly discovered online communities for technology enthusiasts and began learning about programming.

Because of his growing fascination with technology, at 16, Borkowski opted to attend a technical high school, pursuing a technical diploma in electronics and learning about components, soldering, and assembly language. In 2011, he enrolled at the West Pomeranian University of Technology in Szczecin, Poland, where he earned a bachelor’s degree in automation and robotics.

When he graduated in 2015, there were few opportunities in Poland to put his skills to use, so he moved to London. There, Borkowski initially worked odd jobs in warehouses and production facilities. After a brief stint as an electronic technician assembling ultrasonic scanners, he joined bakery company Brioche Pasquier in Milton Keynes, England, as an automation engineer.

This was an exciting move, Borkowski says, because he was finally doing control engineering, the discipline he’d always wanted to pursue. Part of his duties involved daily maintenance, but he also joined a team building new production lines from the ground up, linking together machinery such as mixers, industrial ovens, coolers, and packaging units. They programmed the machines so they all worked together seamlessly without human intervention.

When the COVID-19 pandemic struck, new projects went on hold and work slowed down, Borkowski says. There seemed to be little opportunity to advance his career at Brioche Pasquier, so he applied for the control systems job at Unilever.

“When I was briefed on the work, they told me it was all R&D and every project was different,” he says. “I thought that sounded like a challenge.”

The Importance of a Theoretical Foundation

Control engineers require a broad palette of skills in both electronics and programming, Borkowski says. Some of these can be learned on the job, he says, but a degree in subjects like automation or robotics provides an important theoretical foundation.

The biggest piece of advice he has for fledgling control engineers is to stay calm, which he admits can be difficult when a manager is pressuring you to quickly get a line back up to avoid production delays.

“Sometimes it’s better to step away and give yourself a few minutes to think before you do anything,” he says. Rushing can often result in mistakes that cause more problems in the long run.

While working in production can sometimes be stressful, “There’s a deep pride in knowing the machines that we’ve programmed make something that people buy and enjoy,” Borkowski says.


Build the Most Accurate DIY Quartz Clock Yet

From: Gavin Watkins
15 February 2024 at 16:00


Accurate timing is something that’s always been of interest to me. These days we rely heavily on time delivered to us over the Internet, through radio waves from GPS satellites, or broadcast stations. But I wanted a clock that would keep excellent time without relying on the outside world—certainly something better than the time provided by the quartz crystal oscillator used in your typical digital clock or microcontroller, which can drift by about 1.7 seconds per day, or over 10 minutes in the course of a year.

Of course, I could buy an atomic clock—that is, one with a rubidium oscillator inside, of the sort used onboard GPS satellites. (Not the kind that’s marketed as an “atomic clock” but one that actually relies on picking up radio time signals.) Rubidium clocks provide incredible accuracy, but cost thousands of U.S. dollars. I needed something in between, and salvation was found in the form of the oven-controlled crystal oscillator, invariably known as an OCXO for historical reasons. With one of these, I could build my own clock for around US $200—and one that’s about 200 times as accurate as a typical quartz clock.

Temperature changes are the biggest source of error in conventional crystal oscillators. They cause the quartz to expand or shrink, which alters its resonance frequency. One solution is to track the temperature and compensate for the changes in frequency. But it would be better not to have the frequency change in the first place, and this is where the OCXO comes in.

The printed circuit board can be cut into two pieces, with the timing-related components mounted on the lower section and the control and display components mounted on the upper section. James Provost

The OCXO keeps the crystal at a constant temperature. To avoid the complexity of having to both heat and cool a crystal in response to ambient fluctuations, the crystal is kept heated close to 80 °C or so, well above any environmental temperatures it’s likely to experience. In the past, OCXOs were power hungry and bulky or expensive, but in the last few years miniature versions have appeared that are much cheaper and draw way less power. The Raltron OCXO I chose for my clock costs $58, operates at 3.3 volts, and draws 400 milliamperes in steady-state operation.

The OCXO resonates at 10 megahertz. In my clock, this signal is fed into a 4-bit counter, which outputs a pulse every time it counts from 0000 to 1111 in binary, effectively dividing the 10-MHz signal by 16. This 625-kilohertz signal then drives a hardware timer in an Arduino Nano microcontroller, which triggers a program interrupt every tenth of a second to update the clock’s time base. (Full details on how the timing chain and software work are available in an accompanying post on IEEE Spectrum’s website, along with a bill of materials and printed circuit board files.) A rotary controller connected directly to the Nano lets you set the time.
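The published firmware in the accompanying post has the full details, but the counting scheme is easy to picture. As a rough illustration only, and assuming a classic ATmega328-based Nano (the pin and register choices below are assumptions, not the project’s actual code), the divided-down 625-kHz signal can be fed to Timer1’s external clock input on digital pin 5, with a compare-match interrupt firing every 62,500 counts, that is, 10 times a second:

```cpp
// Minimal illustration (not the published firmware): count an external
// 625 kHz clock on Timer1 and fire an interrupt every 62,500 counts (0.1 s).
// Assumes an ATmega328-based Nano with the divided OCXO signal on pin D5 (T1).
#include <Arduino.h>

volatile uint8_t tenths = 0;                      // tenths of a second, 0-9
volatile uint8_t seconds = 0, minutes = 0, hours = 0;

void setup() {
  pinMode(5, INPUT);                              // T1 external clock input
  noInterrupts();
  TCCR1A = 0;
  TCCR1B = _BV(WGM12)                             // CTC mode: reset on compare match
         | _BV(CS12) | _BV(CS11) | _BV(CS10);     // clock Timer1 from T1, rising edge
  OCR1A  = 62500 - 1;                             // 625,000 Hz / 62,500 = 10 interrupts per second
  TIMSK1 = _BV(OCIE1A);                           // enable the compare-match interrupt
  interrupts();
}

ISR(TIMER1_COMPA_vect) {                          // runs every tenth of a second
  if (++tenths < 10) return;
  tenths = 0;
  if (++seconds < 60) return;
  seconds = 0;
  if (++minutes < 60) return;
  minutes = 0;
  if (++hours >= 24) hours = 0;
}

void loop() {
  // Display updates and rotary-encoder handling would go here.
}
```

Because the time base is advanced entirely by the hardware-clocked interrupt, nothing the main loop does (display writes, encoder polling) can make the clock gain or lose time.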

The Nano keeps track of the time, advancing seconds, minutes, and hours, and it also drives the display. This display is created using six Adafruit “CharliePlex FeatherWings,” which are 15 by 7 LED matrices with controllable brightness that come in a variety of colors. Each one is controlled via the addressable I2C serial bus protocol. A problem arises because a CharliePlex is hardwired to have only one of two possible I2C addresses, making it impossible to address six clock digits individually on a single bus. My solution was to use an I2C multiplexer, which takes incoming I2C data and switches it between six separate buses.
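The multiplexer pattern is straightforward: before talking to a digit, you tell the mux which downstream bus to connect. The sketch below is an illustration of that pattern, not the project’s code; the TCA9548A part number, its 0x70 address, and the use of Adafruit’s IS31FL3731 CharlieWing driver library are assumptions here.

```cpp
// Illustrative sketch: drive six CharliePlex FeatherWing digits sitting
// behind an 8-channel I2C multiplexer (TCA9548A assumed, address 0x70).
#include <Wire.h>
#include <Adafruit_IS31FL3731.h>

const uint8_t MUX_ADDR = 0x70;            // common default for 8-channel I2C muxes
Adafruit_IS31FL3731_Wing digitDisplay;    // one driver object, reused for every channel

// Route the shared I2C bus to one downstream channel of the multiplexer.
void selectMuxChannel(uint8_t channel) {
  Wire.beginTransmission(MUX_ADDR);
  Wire.write(1 << channel);               // one bit per channel
  Wire.endTransmission();
}

// Show a single numeral on the digit connected to the given mux channel.
void showDigit(uint8_t channel, uint8_t value) {
  selectMuxChannel(channel);
  digitDisplay.clear();
  digitDisplay.setTextColor(64);          // on this display, "color" is brightness, 0-255
  digitDisplay.setCursor(0, 0);
  digitDisplay.print(value);
}

void setup() {
  Wire.begin();
  for (uint8_t ch = 0; ch < 6; ch++) {    // initialize all six digits
    selectMuxChannel(ch);
    digitDisplay.begin();                 // default CharlieWing I2C address
  }
}

void loop() {
  uint8_t hhmmss[6] = {1, 2, 3, 4, 5, 6}; // e.g. 12:34:56
  for (uint8_t ch = 0; ch < 6; ch++) {
    showDigit(ch, hhmmss[ch]);
  }
  delay(1000);
}
```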

The timing chain begins with the OCXO and its 10-megahertz signal and ends with the display being updated once every second. The timing signal synchronizes a hardware timer in the Nano microcontroller so that it triggers an interrupt handler in the Nano’s software 10 times a second. Consequently, you can make many modifications or add new features via software changes. James Provost

Using a microcontroller—rather than, say, discrete logic chips—simplified the design and allows for easy modification and expansion. It’s trivial to tweak the software to substitute your own font design for the numbers, for example, or adjust the brightness of the display. Connector blocks for serial interfaces are directly available on the Nano, meaning you could use the clock as a timer or trigger for some other device.

For such a purpose you could omit the display entirely, reducing the clock’s size considerably (although you’ll have to modify the software to override the startup verification of the display). The clock’s printed circuit board is designed so that it can be cut into two pieces, with the lower third holding the microcontroller, OCXO, and other supporting electronics. The upper two thirds hold the display and the rotary encoder. By adding four headers and running two cables between the pieces to connect them, you can arrange the boards to form a wide range of physical configurations, giving you a lot of freedom in designing the form factor of any enclosure you might choose to build for the clock. Indeed, creating the PCB so this was possible was probably the most challenging part of the whole process. But the resulting hardware and software flexibility of the final design was worth it.

The whole device is powered through the Nano’s USB-C port. USB-C was needed in order to provide enough current, as the clock, OCXO, and display all together need more than the 500-mA nominal maximum current of earlier USB ports. A battery backup connected to this port is needed to prevent resets due to power loss—using one of the popular coin-cell-based real-time backup clocks would be pointless due to their relative inaccuracy.

And as for that goal of creating an accurate clock with a great bang for the buck, I cross-checked my OCXO’s output in circuit with an HP 53150A frequency counter. The result is that the clock drifts no more than 0.00864 seconds per day, or less than 3.15 seconds in a year. In fact, its accuracy is probably better than that, but I’d reached the limit of what I could measure with my frequency counter! I hope you’ll build one of your own—it takes just a few hours of soldering, and I think you’ll agree it would be time well spent.
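For perspective, that daily figure works out to a fractional error of about one part in 10 million, or roughly a 1-hertz offset at the OCXO’s 10-MHz output. A quick back-of-the-envelope check (a standalone snippet for illustration only, not part of the clock’s firmware):

```cpp
// Back-of-the-envelope check of the drift figures quoted above.
#include <cstdio>

int main() {
  const double driftPerDay  = 0.00864;               // seconds of error per day
  const double fractional   = driftPerDay / 86400.0; // 86,400 seconds in a day
  const double hertzAt10MHz = fractional * 10e6;     // offset at the 10-MHz output
  const double driftPerYear = driftPerDay * 365.0;   // seconds of error per year
  std::printf("fractional error: %.1e (%.1f Hz at 10 MHz), %.2f s/year\n",
              fractional, hertzAt10MHz, driftPerYear);
  // Prints: fractional error: 1.0e-07 (1.0 Hz at 10 MHz), 3.15 s/year
  return 0;
}
```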
