
Someone combined a Raspberry Pi with an eInk display to make the perfect photo frame

By: Simon Batt (XDA)
21 August 2024 at 04:33

Why make do with a static photo frame when you can own one that changes automatically? While there are plenty of digital photo frames out there that can bring your snaps to life, nothing feels better than making your own. One person on Reddit has shown just how cool a DIY photo frame can be, displaying the power of combining a Raspberry Pi, a color eInk display, and Google Photos into one amazing project.


10 PRINT "NOSTALGIA", 20 GOTO 10 – Wired reminisces about BASIC

By: Yoy Luadha (Boing Boing)
4 August 2024 at 18:58
[Image: PET computer (dean bertoncelj / Shutterstock.com)]

When I was a kid, I briefly had a friend who built the first computer I ever saw. I long ago forgot the friend's name, but I remember the name he gave the computer: Laurie (after Laurie Partridge, natch). It had one simple Star Trek game that somehow involved acquiring and shooting photon torpedoes. — Read the rest



Someone made a cool USB sniffer out of a Raspberry Pi Pico, and you can too

By: Simon Batt (XDA)
4 August 2024 at 03:10

If you want to keep an eye on how your USB devices are interacting with your PC, you could do worse than get a USB sniffer. These keep tabs on what data is going through your USB ports, which is handy for developing tools or just making sure your devices are behaving themselves. Until now, USB sniffers were pretty niche, but someone has cracked the code and found a way to turn a Raspberry Pi Pico into your very own detector.


Waveshare UPS HAT easily adds battery backup to a Raspberry Pi

By: Lee Mathews (Liliputing)
3 August 2024 at 19:00

The Waveshare UPS HAT (E) offers an inexpensive, easy way to add battery backup to a Raspberry Pi project. It’s compatible with Raspberry Pi 5, 4B and 3B+ boards and accepts four 21700 lithium-ion batteries (not included). Just like a traditional UPS from a company like APC or CyberPower, the UPS HAT (E) springs into […]



UP Squared i12 Edge with Intel Core i7-1260P is now shipping

By: Lee Mathews (Liliputing)
2 August 2024 at 22:30

AAEON first showed off the Intel Core-powered UP Squared i12 Edge in October of last year. Now the company has units in stock and ready to ship. The i12 Edge is a compact PC that measures 130 x 94 x 68mm. It’s designed to be deployed in demanding environments, with operating temperatures between 32º and […]



Giant Chips Give Supercomputers a Run for Their Money

By: Dina Genkina (IEEE Spectrum)
12 June 2024 at 16:00


As large supercomputers keep getting larger, Sunnyvale, California-based Cerebras has been taking a different approach. Instead of connecting more and more GPUs together, the company has been squeezing as many processors as it can onto one giant wafer. The main advantage is in the interconnects—by wiring processors together on-chip, the wafer-scale chip bypasses many of the computational speed losses that come from many GPUs talking to each other, as well as losses from loading data to and from memory.

Now, Cerebras has flaunted the advantages of its wafer-scale chips in two separate but related results. First, the company demonstrated that its second-generation wafer-scale engine, WSE-2, was significantly faster than the world’s fastest supercomputer, Frontier, in molecular dynamics calculations—the field that underlies protein folding, modeling radiation damage in nuclear reactors, and other problems in materials science. Second, in collaboration with machine learning model optimization company Neural Magic, Cerebras demonstrated that a sparse large language model could perform inference at one-third of the energy cost of a full model without losing any accuracy. Although the results are in vastly different fields, they were both possible because of the interconnects and fast memory access enabled by Cerebras’ hardware.

Speeding Through the Molecular World

“Imagine there’s a tailor and he can make a suit in a week,” says Cerebras CEO and co-founder Andrew Feldman. “He buys the neighboring tailor, and she can also make a suit in a week, but they can’t work together. Now they can make two suits in a week. But what they can’t do is make a suit in three and a half days.”

According to Feldman, GPUs are like tailors that can’t work together, at least when it comes to some problems in molecular dynamics. As you connect more and more GPUs, they can simulate more atoms at the same time, but they can’t simulate the same number of atoms more quickly.

Cerebras’ wafer-scale engine, however, scales in a fundamentally different way. Because the chips are not limited by interconnect bandwidth, they can communicate quickly, like two tailors collaborating perfectly to make a suit in three and a half days.

“It’s difficult to create materials that have the right properties, that have a long lifetime and sufficient strength and don’t break.” —Tomas Oppelstrup, Lawrence Livermore National Laboratory

To demonstrate this advantage, the team simulated 800,000 atoms interacting with each other, calculating the interactions in increments of one femtosecond at a time. Each step took just microseconds to compute on their hardware. Although that’s still 9 orders of magnitude slower than the actual interactions, it was also 179 times as fast as the Frontier supercomputer. The achievement effectively reduced a year’s worth of computation to just two days.
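
The reported numbers are mutually consistent, as a quick sanity check shows (illustrative arithmetic, ours rather than Cerebras’):

    import math

    # Sanity-check the reported figures (illustrative arithmetic only).
    simulated_step = 1e-15    # each step advances the simulation by 1 femtosecond
    compute_per_step = 1e-6   # each step takes "just microseconds" of wall-clock time
    slowdown = compute_per_step / simulated_step
    print(f"slowdown vs. real time: 10^{math.log10(slowdown):.0f}")  # 10^9, i.e. 9 orders of magnitude

    speedup = 179             # reported speedup over Frontier
    print(f"one year of Frontier compute -> {365 / speedup:.1f} days")  # ~2.0 days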

This work was done in collaboration with Sandia, Lawrence Livermore, and Los Alamos National Laboratories. Tomas Oppelstrup, staff scientist at Lawrence Livermore National Laboratory, says this advance makes it feasible to simulate molecular interactions that were previously inaccessible.

Oppelstrup says this will be particularly useful for understanding the longer-term stability of materials in extreme conditions. “When you build advanced machines that operate at high temperatures, like jet engines, nuclear reactors, or fusion reactors for energy production,” he says, “you need materials that can withstand these high temperatures and very harsh environments. It’s difficult to create materials that have the right properties, that have a long lifetime and sufficient strength and don’t break.” Being able to simulate the behavior of candidate materials for longer, Oppelstrup says, will be crucial to the material design and development process.

Ilya Sharapov, principal engineer at Cerebras, says the company is looking forward to extending applications of its wafer-scale engine to a larger class of problems, including molecular dynamics simulations of biological processes and simulations of airflow around cars or aircraft.

Downsizing Large Language Models

As large language models (LLMs) are becoming more popular, the energy costs of using them are starting to overshadow the training costs—potentially by as much as a factor of ten in some estimates. “Inference is the primary workload of AI today because everyone is using ChatGPT,” says James Wang, director of product marketing at Cerebras, “and it’s very expensive to run, especially at scale.”

One way to reduce the energy cost (and increase the speed) of inference is through sparsity—essentially, harnessing the power of zeros. LLMs are made up of huge numbers of parameters. The open-source Llama model used by Cerebras, for example, has 7 billion parameters. During inference, each of those parameters is used to crunch through the input data and spit out the output. If, however, a significant fraction of those parameters are zeros, they can be skipped during the calculation, saving both time and energy.

The problem is that skipping specific parameters is difficult to do on a GPU. Reading from memory on a GPU is relatively slow, because GPUs are designed to read memory in chunks, which means taking in groups of parameters at a time. This doesn’t allow GPUs to skip zeros that are randomly interspersed in the parameter set. Cerebras CEO Feldman offered another analogy: “It’s equivalent to a shipper, only wanting to move stuff on pallets because they don’t want to examine each box. Memory bandwidth is the ability to examine each box to make sure it’s not empty. If it’s empty, set it aside and then not move it.”

“There’s a million cores in a very tight package, meaning that the cores have very low latency, high bandwidth interactions between them.” —Ilya Sharapov, Cerebras

Some GPUs are equipped for a particular kind of sparsity, called 2:4, where exactly two out of every four consecutively stored parameters are zeros. State-of-the-art GPUs have terabytes per second of memory bandwidth. The memory bandwidth of Cerebras’ WSE-2 is more than one thousand times as high, at 20 petabytes per second. This allows for harnessing unstructured sparsity, meaning the researchers can zero out parameters as needed, wherever in the model they happen to be, and check each one on the fly during a computation. “Our hardware is built right from day one to support unstructured sparsity,” Wang says.
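
To make the bandwidth argument concrete, here is a toy dot product that simply skips zero weights, the way unstructured sparsity allows; this is a minimal sketch of the idea, not Cerebras code:

    # Toy illustration of unstructured sparsity: skip zero weights entirely
    # instead of multiplying by them. (Illustrative sketch, not Cerebras code.)
    def sparse_dot(weights, activations):
        total = 0.0
        for w, x in zip(weights, activations):
            if w == 0.0:
                continue        # the "empty box": check it, set it aside, move on
            total += w * x      # only non-zero weights cost a multiply-add
        return total

    weights = [0.0, 0.3, 0.0, 0.0, -1.2, 0.0, 0.5, 0.0]   # mostly zeros
    activations = [1.0] * 8
    print(sparse_dot(weights, activations))   # ≈ -0.4, using only 3 of 8 multiplies

On a GPU, the per-weight check is the expensive part; with enough memory bandwidth, examining each "box" is essentially free, which is the advantage the WSE-2's 20 petabytes per second provides.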

Even with the appropriate hardware, zeroing out many of the model’s parameters results in a worse model. But the joint team from Neural Magic and Cerebras figured out a way to recover the full accuracy of the original model. After slashing 70 percent of the parameters to zero, the team performed two further phases of training to give the non-zero parameters a chance to compensate for the new zeros.

This extra training uses about 7 percent of the original training energy, and the companies found that they recover full model accuracy with this training. The smaller model takes one-third of the time and energy during inference as the original, full model. “What makes these novel applications possible in our hardware,” Sharapov says, “is that there’s a million cores in a very tight package, meaning that the cores have very low latency, high bandwidth interactions between them.”


Characterizing and Evaluating a Quantum Processor Unit in an HPC Center

Semiconductor Engineering
11 June 2024 at 04:36

A new technical paper titled “Calibration and Performance Evaluation of a Superconducting Quantum Processor in an HPC Center” was published by researchers at Leibniz Supercomputing Centre, IQM Quantum Computers, and Technical University of Munich.

Abstract

“As quantum computers mature, they migrate from laboratory environments to HPC centers. This movement enables large-scale deployments, greater access to the technology, and deep integration into HPC in the form of quantum acceleration. In laboratory environments, specialists directly control the systems’ environments and operations at any time with hands-on access, while HPC centers require remote and autonomous operations with minimal physical contact. The requirement for automation of the calibration process needed by all current quantum systems relies on maximizing their coherence times and fidelities and, with that, their best performance. It is, therefore, of great significance to establish a standardized and automatic calibration process alongside unified evaluation standards for quantum computing performance to evaluate the success of the calibration and operation of the system. In this work, we characterize our in-house superconducting quantum computer, establish an automatic calibration process, and evaluate its performance through quantum volume and an application-specific algorithm. We also analyze readout errors and improve the readout fidelity, leaning on error mitigation.”

Find the technical paper here. Published May 2024.

X. Deng, S. Pogorzalek, F. Vigneau, P. Yang, M. Schulz and L. Schulz, “Calibration and Performance Evaluation of a Superconducting Quantum Processor in an HPC Center,” ISC High Performance 2024 Research Paper Proceedings (39th International Conference), Hamburg, Germany, 2024, pp. 1-9, doi: 10.23919/ISC.2024.10528924.



Greatest mechanical calculating devices of all time

By: Jason Weisberger (Boing Boing)
31 May 2024 at 16:52

Mechanical calculators are fascinating. They represent the early ingenuity and groundbreaking prowess that paved the way for modern computing. While the iPhone or Android device you are likely reading this on is exponentially more powerful and useful, these devices are where it all began! — Read the rest



You can make this cool Fallout-inspired PipWatch for yourself

By: Simon Batt (XDA)
20 May 2024 at 00:30

With the Fallout TV series in full swing, we've seen a ton of people get back into the iconic post-apocalyptic video game series. Given how far SBCs and 3D printing have come since the release of Fallout 4, we've seen a resurgence of cool homemade projects based around the franchise. Now, someone has shown off their PipWatch, a passion project you can make yourself.

How to protect your privacy with an SBC-powered VPN server

20 May 2024 at 00:00

Virtual Private Networks (VPNs) are an effective means to enhance your privacy. By disguising your IP address, a VPN prevents third parties from tracking your online activities, on top of protecting your data from network-based hacking attacks.


How to Put a Data Center in a Shoebox

By: Anna Herr (IEEE Spectrum)
15 May 2024 at 17:00


Scientists have predicted that by 2040, almost 50 percent of the world’s electric power will be used in computing. What’s more, this projection was made before the sudden explosion of generative AI. The amount of computing resources used to train the largest AI models has been doubling roughly every 6 months for more than the past decade. At this rate, by 2030 training a single artificial-intelligence model would take one hundred times the combined annual computing resources of the current top ten supercomputers. Simply put, computing will require colossal amounts of power, soon exceeding what our planet can provide.

One way to manage the unsustainable energy requirements of the computing sector is to fundamentally change the way we compute. Superconductors could let us do just that.

Superconductors offer the possibility of drastically lowering energy consumption because they do not dissipate energy when passing current. True, superconductors work only at cryogenic temperatures, requiring some cooling overhead. But in exchange, they offer virtually zero-resistance interconnects, digital logic built on ultrashort pulses that require minimal energy, and the capacity for incredible computing density due to easy 3D chip stacking.

Are the advantages enough to overcome the cost of cryogenic cooling? Our work suggests they most certainly are. As the scale of computing resources gets larger, the marginal cost of the cooling overhead gets smaller. Our research shows that starting at around 10^16 floating-point operations per second (tens of petaflops), the superconducting computer handily becomes more power efficient than its classical cousin. This is exactly the scale of typical high-performance computers today, so the time for a superconducting supercomputer is now.

At Imec, we have spent the past two years developing superconducting processing units that can be manufactured using standard CMOS tools. A processor based on this work would be one hundred times as energy efficient as the most efficient chips today, and it would lead to a computer that fits a data center’s worth of computing resources into a system the size of a shoebox.

The Physics of Energy-Efficient Computation

Superconductivity—that superpower that allows certain materials to transmit electricity without resistance at low enough temperatures—was discovered back in 1911, and the idea of using it for computing has been around since the mid-1950s. But despite the promise of lower power usage and higher compute density, the technology couldn’t compete with the astounding advance of CMOS scaling under Moore’s Law. Research has continued through the decades, with a superconducting CPU demonstrated by a group at Yokohama National University as recently as 2020. However, as an aid to computing, superconductivity has stayed largely confined to the laboratory.

To bring this technology out of the lab and toward a scalable design that stands a chance of being competitive in the real world, we had to change our approach here at Imec. Instead of inventing a system from the bottom up—that is, starting with what works in a physics lab and hoping it is useful—we designed it from the top down—starting with the necessary functionality, and working directly with CMOS engineers and a full-stack development team to ensure manufacturability. The team worked not only on a fabrication process, but also software architectures, logic gates, and standard-cell libraries of logic and memory elements to build a complete technology.

The foundational ideas behind energy-efficient computation were developed as far back as 1991. In conventional processors, much of the power consumed and heat dissipated comes from moving information among logic units, or between logic and memory elements, rather than from actual operations. Interconnects made of superconducting material, however, do not dissipate any energy. The wires have zero electrical resistance, and therefore little energy is required to move bits within the processor. This property of having extremely low energy losses holds true even at very high communication frequencies, where losses would skyrocket in ordinary interconnects.

Further energy savings come from the way logic is done inside the superconducting computer. Instead of the transistor, the basic element in superconducting logic is the Josephson junction.

A Josephson junction is a sandwich—a thin slice of insulating material squeezed between two superconductors. Connect the two superconductors, and you have yourself a Josephson-junction loop.

Under normal conditions, the insulating “meat” in the sandwich is so thin that it does not deter a supercurrent—the whole sandwich just acts as a superconductor. However, if you ramp up the current past a threshold known as a critical current, the superconducting “bread slices” around the insulator get briefly knocked out of their superconducting state. In this transition period, the junction emits a tiny voltage pulse, lasting just a picosecond and dissipating just 2 × 10^-20 joules, a hundred-billionth of what it takes to write a single bit of information into conventional flash memory.

[Figure: A single flux quantum develops in a Josephson-junction loop via a three-step process. First, a current just above the critical value is passed through the junction. The junction then emits a single-flux-quantum voltage pulse. The voltage pulse passes through the inductor, creating a persistent current in the loop. A Josephson junction is indicated by an x on circuit diagrams. Illustration: Chris Philpot]

The key is that, due to a phenomenon called magnetic flux quantization in the superconducting loop, this pulse is always exactly the same. It is known as a “single flux quantum” (SFQ) of magnetic flux, and it is fixed to have a value of 2.07 millivolt-picoseconds. Put an inductor inside the Josephson-junction loop, and the voltage pulse drives a current. Since the loop is superconducting, this current will continue going around the loop indefinitely, without using any further energy.
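
That quoted figure is the magnetic flux quantum, Φ0 = h/2e, which a two-line calculation confirms (standard physics, shown here for reference; note that 1 weber equals 1 volt-second):

    # The single flux quantum is Phi_0 = h / (2e).
    h = 6.62607015e-34    # Planck constant, in joule-seconds
    e = 1.602176634e-19   # elementary charge, in coulombs
    phi_0 = h / (2 * e)   # ~2.07e-15 webers
    print(f"{phi_0 * 1e15:.2f} mV·ps")   # prints 2.07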

Logical operations inside the superconducting computer are made by manipulating these tiny, quantized voltage pulses. A Josephson-junction loop with an SFQ’s worth of persistent current acts as a logical 1, while a current-free loop is a logical 0.

To store information, the Josephson-junction-based counterpart of the SRAM in a CPU cache also uses single flux quanta. To store one bit, two Josephson-junction loops need to be placed next to each other. An SFQ with a persistent current in the left-hand loop is a memory element storing a logical 0, whereas no current in the left loop but a current in the right loop is a logical 1.

[Figure: Designing a superconductor-based data center required full-stack innovation. Imec’s board design contains three main elements: the input and output, leading data to the room-temperature world; the conventional DRAM, stacked high and cooled to 77 kelvins; and the superconducting processing units, also stacked, and cooled to 4 K. Inside the superconducting processing unit, basic logic and memory elements are laid out to perform computations. A magnification of the chip shows the basic building blocks: For logic, a Josephson-junction loop without a persistent current indicates a logical 0, while a loop with one single flux quantum’s worth of current represents a logical 1. For memory, two Josephson-junction loops are connected together. An SFQ’s worth of persistent current in the left loop is a memory 0, and a current in the right loop is a memory 1. Illustration: Chris Philpot]

Progress Through Full-Stack Development

To go from a lab curiosity to a chip prototype ready for fabrication, we had to innovate the full stack of hardware. This came in three main layers: engineering the basic materials used, circuit development, and architectural design. The three layers had to go together—a new set of materials requires new circuit designs, and new circuit designs require novel architectures to incorporate them. Codevelopment across all three stages, with a strict adherence to CMOS manufacturing capabilities, was the key to success.

At the materials level, we had to step away from the previous lab-favorite superconducting material: niobium. While niobium is easy to model and behaves very well under predictable lab conditions, it is very difficult to scale down. Niobium is sensitive to both process temperature and its surrounding materials, so it is not compatible with standard CMOS processing. Therefore, we switched to the related compound niobium titanium nitride for our basic superconducting material. Niobium titanium nitride can withstand temperatures used in CMOS fabrication without losing its superconducting capabilities, and it reacts much less with its surrounding layers, making it a much more practical choice.

[Figure: The basic building block of superconducting logic and memory is the Josephson junction. At Imec, these junctions have been manufactured using a new set of materials, allowing the team to scale down the technology without losing functionality. Here, a tunneling-electron-microscope image shows a Josephson junction made with an alpha-silicon insulator sandwiched between niobium titanium nitride superconductors, achieving a critical dimension of 210 nanometers. Image: Imec]

Additionally, we employed a new material for the meat layer of the Josephson-junction sandwich—amorphous, or alpha, silicon. Conventional Josephson-junction materials, most notably aluminum oxide, didn’t scale down well. Aluminum was used because it “wets” the niobium, smoothing the surface, and the oxide was grown in a well-controlled manner. However, to get to the ultrahigh densities that we are targeting, we would have to make the oxide too thin to be practically manufacturable. Alpha silicon, in contrast, allowed us to use a much thicker barrier for the same critical current.

We also had to devise a new way to power the Josephson junctions that would scale down to the size of a chip. Previously, lab-based superconducting computers used transformers to deliver current to their circuit elements. However, having a bulky transformer near each circuit element is unworkable. Instead, we designed a way to deliver power to all the elements on the chip at once by creating a resonant circuit, with specialized capacitors interspersed throughout the chip.

At the circuit level, we had to redesign the entire logic and memory structure to take advantage of the new materials’ capabilities. We designed a novel logic architecture that we call pulse-conserving logic. The key requirement for pulse-conserving logic is that the elements have as many inputs as outputs and that the total number of single flux quanta is conserved. The logic is performed by routing the SFQs through a combination of Josephson-junction loops and inductors to the appropriate outputs, resulting in logical ORs and ANDs. To complement the logic architecture, we also redesigned a compatible Josephson-junction-based SRAM.
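
A toy model helps show why an OR/AND pair is a natural fit for pulse-conserving logic: for binary inputs, OR(a, b) + AND(a, b) = a + b, so such an element can route its incoming flux quanta to outputs without creating or destroying any. The sketch below is our illustration of that counting argument, not Imec's actual circuit design:

    # Check that a 2-input/2-output OR/AND element conserves pulse count.
    # (Toy model of pulse-conserving logic, not Imec's circuit design.)
    def or_and_element(a, b):
        return (a | b, a & b)   # two inputs in, two outputs out

    for a in (0, 1):
        for b in (0, 1):
            out_or, out_and = or_and_element(a, b)
            assert a + b == out_or + out_and   # pulses in == pulses out
            print(f"in=({a},{b}) -> OR={out_or}, AND={out_and}")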

Lastly, we had to make architectural innovations to take full advantage of the novel materials and circuit designs. Among these was cooling conventional silicon DRAM down to 77 kelvins and designing a glass bridge between the 77-K section and the main superconducting section. The bridge houses thin wires that allow communication without thermal mixing. We also came up with a way of stacking chips on top of each other and are developing vertical superconducting interconnects to link between circuit boards.

A Data Center the Size of a Shoebox

The result is a superconductor-based chip design that’s optimized for AI processing. A zoom in on one of its boards reveals many similarities with a typical 3D CMOS system-on-chip. The board is populated by computational chips, which we call superconductor processing units (SPUs), with embedded superconducting SRAM, DRAM memory stacks, and switches, all interconnected on silicon-interposer or glass-bridge advanced packaging technologies.

But there are also some striking differences. First, most of the chip is to be submerged in liquid helium for cooling to a mere 4 K. This includes the SPUs and SRAM, which depend on superconducting logic rather than CMOS, and are housed on an interposer board. Next, there is a glass bridge to a warmer area, a balmy 77 K that hosts the DRAM. The DRAM technology is not superconducting, but conventional silicon cooled down from room temperature, making it more efficient. From there, bespoke connectors lead data to and from the room-temperature world.


Moore’s law relies on fitting progressively more computing resources into the same space. As scaling down transistors gets more and more difficult, the semiconductor industry is turning toward 3D stacking of chips to keep up the density gains. In classical CMOS-based technology, it is very challenging to stack computational chips on top of each other because of the large amount of power, and therefore heat, that is dissipated within the chips. In superconducting technology, the little power that is dissipated is easily removed by the liquid helium. Logic chips can be directly stacked using advanced 3D integration technologies, resulting in shorter and faster connections between the chips and a smaller footprint.

It is also straightforward to stack multiple boards of 3D superconducting chips on top of each other, leaving only a small space between them. We modeled a stack of 100 such boards, all operating within the same cooling environment and contained in a 20- by 20- by 12-centimeter volume, roughly the size of a shoebox. We calculated that this stack can perform 20 exaflops (in BF16 number format), 20 times the capacity of the largest supercomputer today. What’s more, the system promises to consume only 500 kilowatts of total power. This translates to energy efficiency one hundred times as high as the most efficient supercomputer today.
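
The headline efficiency figure follows directly from the stated performance and power (illustrative arithmetic):

    # Derive the efficiency from the stated figures.
    performance = 20e18      # 20 exaflops, in BF16
    power = 500e3            # 500 kilowatts
    volume = 20 * 20 * 12    # shoebox volume, in cubic centimeters
    print(f"{performance / power:.0e} flops per watt")   # 4e+13, i.e. 40 teraflops/W
    print(f"{volume} cm^3")                              # 4800 cm^3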

So far, we’ve scaled down Josephson junctions and interconnect dimensions over three succeeding generations. Going forward, Imec’s road map includes tackling 3D superconducting chip-integration and cooling technologies. For the first generation, the road map envisions the stacking of about 100 boards to obtain the target performance of 20 exaflops. Gradually, more and more logic chips will be stacked, and the number of boards will be reduced. This will further increase performance while reducing complexity and cost.

The Superconducting Vision

We don’t envision that superconducting digital technology will replace conventional CMOS computing, but we do expect it to complement CMOS for specific applications and fuel innovations in new ones. For one, this technology would integrate seamlessly with quantum computers that are also built upon superconducting technology. Perhaps more significantly, we believe it will support the growth in AI and machine learning processing and help provide cloud-based training of big AI models in a much more sustainable way than is currently possible.

In addition, with this technology we can engineer data centers with much smaller footprints. Drastically smaller data centers can be placed close to their target applications, rather than being in some far-off football-stadium-size facility.

Such transformative server technology is a dream for scientists. It opens doors to online training of AI models on real data that are part of an actively changing environment. Take potential robotic farms as an example. Today, training these would be a challenging task, where the required processing capabilities are available only in far-away, power-hungry data centers. With compact, nearby data centers, the data could be processed at once, allowing an AI to learn from current conditions on the farm.

Similarly, these miniature data centers can be interspersed in energy grids, learning right away at each node and distributing electricity more efficiently throughout the world. Imagine smart cities, mobile health care systems, manufacturing, farming, and more, all benefiting from instant feedback from adjacent AI learners, optimizing and improving decision making in real time.

This article appears in the June 2024 print issue as “A Data Center in a Shoebox.”


Top 5 weekly: Raspberry Pi hacking devices, new iPads, and more

By: Simon Batt (XDA)
12 May 2024 at 02:49

Did you miss out on the news this week? If you did, there was some really good news for Apple fans, as the company announced several products during its Let Loose event. But even if you're not interested in new iPads, we still saw a ton of cool Raspberry Pi projects. So, here are all the cool news stories we saw this week.

Overclocking your Raspberry Pi 5 is easy: here's how I did it

12 May 2024 at 00:00

The Raspberry Pi 5 is the fastest Raspberry Pi made yet. Its CPU runs at 2.4 GHz compared to the 1.8 GHz of its predecessor, the Raspberry Pi 4, and its GPU runs at 910 MHz, over 80% faster than the RPi4's. But it could be faster.
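
The article doesn't reproduce the author's exact settings, but a Raspberry Pi 5 overclock typically amounts to a few lines appended to the firmware config file. The values below are a hypothetical illustration rather than a recommendation; stable clocks vary from board to board:

    # Appended to /boot/firmware/config.txt on a Raspberry Pi 5 (illustrative values).
    arm_freq=2800             # CPU clock in MHz (stock: 2400)
    gpu_freq=1000             # GPU clock in MHz (stock: 910)
    over_voltage_delta=50000  # extra core voltage in microvolts, for stability

    # After a reboot, verify the CPU clock with: vcgencmd measure_clock arm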


You can now get your fortune told by a Raspberry Pi

By: Simon Batt (XDA)
11 May 2024 at 23:24

You may not be able to cross a fortune teller's palm with silver like you could in the olden days, but that doesn't mean you can't just make your own. In fact, with the rise of generative AI, you can now have silicon try to predict your future and warn you of any obstacles you may face. Turns out, that's exactly what someone has done by turning their Raspberry Pi into a fortune teller.


Brain-Inspired Computer Approaches Brain-Like Size

By: Dina Genkina (IEEE Spectrum)
8 May 2024 at 16:38


Today Dresden, Germany–based startup SpiNNcloud Systems announced that its hybrid supercomputing platform, the SpiNNcloud Platform, is available for sale. The machine combines traditional AI accelerators with neuromorphic computing capabilities, using system-design strategies that draw inspiration from the human brain. Systems for purchase vary in size, but the largest commercially available machine can simulate 10 billion neurons, about one-tenth the number in the human brain. The announcement was made at the ISC High Performance conference in Hamburg, Germany.

“We’re basically trying to bridge the gap between brain inspiration and artificial systems.” —Hector Gonzalez, SpiNNcloud Systems

SpiNNcloud Systems was founded in 2021 as a spin-off of the Dresden University of Technology. Its original chip, the SpiNNaker1, was designed by Steve Furber, the principal designer of the ARM microprocessor—the technology that now powers most cellphones. The SpiNNaker1 chip is already in use by 60 research groups in 23 countries, SpiNNcloud Systems says.

Human Brain as Supercomputer

Brain-emulating computers hold the promise of vastly lower energy computation and better performance on certain tasks. “The human brain is the most advanced supercomputer in the universe, and it consumes only 20 watts to achieve things that artificial intelligence systems today only dream of,” says Hector Gonzalez, cofounder and co-CEO of SpiNNcloud Systems. “We’re basically trying to bridge the gap between brain inspiration and artificial systems.”

Aside from sheer size, a distinguishing feature of the SpiNNaker2 system is its flexibility. Traditionally, most neuromorphic computers emulate the brain’s spiking nature: Neurons fire off electrical spikes to communicate with the neurons around them. The actual mechanism of these spikes in the brain is quite complex, and neuromorphic hardware often implements a specific simplified model. The SpiNNaker2, however, can implement a broad range of such models, as they are not hardwired into its architecture.

Instead of looking at how each neuron and synapse operates in the brain and trying to emulate that from the bottom up, Gonzalez says, his team’s approach involved implementing key performance features of the brain. “It’s more about taking a practical inspiration from the brain, following particularly fascinating aspects such as how the brain is energy proportional and how it is simply highly parallel,” Gonzalez says.

To build hardware that is energy proportional (each piece draws power only when it’s actively in use) and highly parallel, the company started with the building blocks. The basic unit of the system is the SpiNNaker2 chip, which hosts 152 processing units. Each processing unit has an ARM-based microcontroller, and unlike its predecessor, the SpiNNaker1, it also comes equipped with accelerators for use on neuromorphic models and traditional neural networks.

[Figure: The SpiNNaker2 supercomputer has been designed to model up to 10 billion neurons, about one-tenth the number in the human brain. Photo: SpiNNcloud Systems]

The processing units can operate in an event-based manner: They can stay off unless an event triggers them to turn on and operate. This enables energy-proportional operation. The events are routed between units and across chips asynchronously, meaning there is no central clock coordinating their movements—which can allow for massive parallelism. Each chip is connected to six other chips, and the whole system is connected in the shape of a torus to ensure all connecting wires are equally short.
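
In software terms, event-based, energy-proportional operation means doing work only for neurons that actually receive a spike, rather than updating every neuron on every clock tick. A minimal sketch of that idea (our toy model, not SpiNNcloud code):

    from collections import defaultdict, deque

    # Toy event-driven spiking network: only neurons that receive an event
    # do any work; idle neurons cost nothing. (Illustrative sketch only.)
    synapses = {0: [1, 2], 1: [2], 2: []}   # neuron -> downstream neurons
    potential = defaultdict(float)
    THRESHOLD = 1.0

    events = deque([(0, 1.2)])              # (target neuron, incoming weight)
    while events:
        neuron, weight = events.popleft()
        potential[neuron] += weight
        if potential[neuron] >= THRESHOLD:  # neuron fires
            potential[neuron] = 0.0
            for target in synapses[neuron]:
                events.append((target, 0.6))   # spike propagates asynchronously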

The largest commercially offered system is not only capable of emulating 10 billion neurons, but also of performing 0.3 billion billion operations per second (0.3 exaops) on more traditional AI tasks, putting it on a comparable scale with the top 10 largest supercomputers today.

Among the first customers of the SpiNNaker2 system is a team at Sandia National Labs, which plans to use it for further research on whether neuromorphic systems can outperform traditional architectures and perform otherwise inaccessible computational tasks.

“The ability to have a general programmable neuron model lets you explore some of these more complex learning rules that don’t necessarily fit onto older neuromorphic systems,” says Fred Rothganger, senior member of technical staff at Sandia. “They, of course, can run on a general-purpose computer. But those general-purpose computers are not necessarily designed to efficiently handle the kind of communication patterns that go on inside a spiking neural network. With [the SpiNNaker2 system] we get the ideal combination of greater programmability plus efficient communication.”


Apple’s New iPad Ad Sparks Loud, Immediate Backlash

By: Kenneth Shepard (Kotaku)
8 May 2024 at 17:35

Sometimes, you get blatant reminders of how much disregard big tech has for, well, everything. Whether that’s exorbitant spending on acquisitions, layoffs of hundreds to thousands of people, and refusing to release or delist its employees’ hard work for a tax write-off. But now and then, you get all that callousness…


Lilbits: Rabbit R1 handheld AI device runs Android (but its head is in the cloud), LastPass is an independent company again, and other tech news

1 May 2024 at 22:15

The Rabbit R1 is the second major gadget to launch this year as basically a portable device for interacting with cloud-based AI features. Unlike the Humane Ai Pin, the Rabbit R1 has a display that provides visual information. And with a $200 price tag, it’s a lot easier to forgive its shortcomings than the $699 […]



Zilog to stop making classic Z80 8-bit CPU after 50 years

By: Rob Beschizza (Boing Boing)
22 April 2024 at 13:38

Zilog's Z80 was a lynchpin of the home computer revolution. The first two 8-bit machines I owned, the ZX Spectrum and the Amstrad CPC, both had one. The versatile chip was a regular co-processor in the 16-bit era, and enjoyed a long golden afternoon as a low-power CPU with an instruction set everyone and their gran knows by heart. — Read the rest



What The Terminator's on-screen vision code really means

By: Natalie Dressed (Boing Boing)
21 April 2024 at 16:55

Typically, computer stuff in film and TV looks like, and is, a whole bunch of gobbledygook. Getting someone to successfully look really cool while they're typing on the computer is a miracle. Usually, though, you get those really silly scenes from Swordfish and NCIS. — Read the rest



How to make a print server with a Raspberry Pi

By: Ayush Pande (XDA)
21 April 2024 at 16:00

While digital media options have surpassed the popularity of old-fashioned printed materials, you'll still find plenty of reasons to use physical documents and images. With the price of printers going down in recent years, it's easy to grab a solid printer that’s equipped with cutting-edge features at bargain prices.


From gaming to robotics, these are the top 5 DIY projects you should do with an SBC

By: Dylan Turck (XDA)
21 April 2024 at 13:00

Modern SBCs have come a long way since their initial development as basic server PCs and have become extremely versatile machines that can be used in so many different ways. Since the popularization of the Raspberry Pi, which is considered the best SBC on the market, PC enthusiasts and hobbyists alike have been using these tiny computers to do all sorts of incredible things, from building retro gaming consoles to machine learning and home servers.


The Legacy of the Datapoint 2200 Microcomputer

By: Qusi Alqarqaz (IEEE Spectrum)
16 April 2024 at 20:00


As the history committee chair of the IEEE Lone Star Section, in San Antonio, Texas, I am responsible for documenting, preserving, and raising the visibility of technologies developed in the local area. One such technology is the Datapoint 2200, a programmable terminal that laid the foundation for the personal computer revolution. Launched in 1970 by Computer Terminal Corp. (CTC) in San Antonio, the machine played a significant role in the early days of microcomputers. The pioneering system integrated a CPU, memory, and input/output devices into a single unit, making it a compact, self-contained device.

Apple, IBM, and other companies are often associated with the popularization of PCs, but we must not overlook the groundbreaking innovations introduced by the Datapoint. The machine might have faded from memory, but its influence on the evolution of computing technology cannot be denied. The IEEE Region 5 life members committee honored the machine in 2022 with its Stepping Stone Award, but I would like to make more members aware of the innovations introduced by the machine’s design.

From mainframes to microcomputers

Before the personal computer, there were mainframe computers. The colossal machines, with their bulky, green monitors housed in meticulously cooled rooms, epitomized the forefront of technology at the time. I was fortunate to work with mainframes during my second year as an electrical engineering student at the United Arab Emirates University in Al Ain, Abu Dhabi, in 1986. The machines occupied entire rooms, dwarfing the personal computers we are familiar with today. Accessing the mainframes involved working with text-based terminals that lacked graphical interfaces and had limited capabilities.

Those relatively diminutive terminals that interfaced with the machines often provided a touch of amusement for the students. The mainframe rooms served as social places, fostering interactions, collaborations, and friendly competitions.

Operating the terminals required mastering specific commands and coding languages. The process of submitting computing jobs and waiting for results without immediate feedback could be simultaneously amusing and frustrating. Students often humorously referred to the “black hole,” where their jobs seemed to vanish until the results materialized. Decoding enigmatic error messages became a challenge, yet students found joy in deciphering them and sharing amusing examples.

Despite mainframes’ power, they had restricted processing capabilities and memory compared with today’s computers.

The introduction of personal computers during my senior year was a game-changer. Little did I know that it would eventually lead me to San Antonio, Texas, birthplace of the PC, where I would begin a new chapter of my life.

The first PC

In San Antonio, a group of visionary engineers from NASA founded CTC with the goal of revolutionizing desktop computing. They introduced the Datapoint 3300 as a replacement for Teletype terminals. Led by Phil Ray and Gus Roche, the company later built the first personal desktop computer, the Datapoint 2200. They also developed LAN technology and aimed to replace traditional office equipment with electronic devices operable from a single terminal.

The Datapoint 2200 introduced several design elements that were later adopted by other computer manufacturers. It was one of the first computers to use a typewriter-style keyboard and a monitor for user interaction; these became the standard input and output devices for personal computers and set a precedent for user-friendly computer interfaces. The machine also had cassette tape drives for storage, predecessors of disk drives. The computer had options for networking, modems, interfaces, printers, and a card reader.

It used different memory sizes and employed an 8-bit processor architecture. The Datapoint’s CPU was initially intended to be implemented as a custom chip, the kind of device that eventually came to be known as the microprocessor. At the time, no such chips existed, so CTC contracted with Intel to produce one. That chip was the Intel 8008, which evolved into the Intel 8080. Introduced in 1974, the 8080 formed the basis for small computers, according to an entry about early microprocessors in the Engineering and Technology History Wiki.

Those first 8-bit microprocessors are celebrating their 50th anniversary this year.

The 2200 was primarily marketed for business use, and its introduction helped accelerate the adoption of computer systems in a number of industries, according to Lamont Wood, author of Datapoint: The Lost Story of the Texans Who Invented the Personal Computer Revolution.

The machine popularized the concept of computer terminals, which allowed multiple users to access a central computer system remotely, Wood wrote. It also introduced the idea of a terminal as a means of interaction with a central computer, enabling users to input commands and receive output.

The concept laid the groundwork for the development of networking and distributed computing. It eventually led to the creation of LANs and wide-area networks, enabling the sharing of resources and information across organizations. The concept of computer terminals influenced the development of modern networking technologies including the Internet, Wood pointed out.

How Datapoint inspired Apple and IBM

Although the Datapoint 2200 was not a consumer-oriented computer, its design principles and influence played a role in the development of personal computers. Its compact, self-contained nature demonstrated the feasibility and potential of such machines.

The Datapoint sparked the imagination of researchers and entrepreneurs, leading to the widespread availability of personal computers.

Here are a few examples of how manufacturers built upon the foundation laid by the Datapoint 2200:

Apple drew inspiration from early microcomputers. The Apple II, introduced in 1977, was one of the first successful personal computers. It incorporated a keyboard, a monitor, and a cassette tape interface for storage, similar to the Datapoint 2200. In 1984 Apple introduced the Macintosh, which featured a graphical user interface and a mouse, revolutionizing the way users interacted with computers.

IBM entered the personal computer market in 1981. Its PC also was influenced by the design principles of microcomputers. The machine featured an open architecture, allowing for easy expansion and customization. The PC’s success established it as a standard in the industry.

Microsoft played a crucial role in software development for early microcomputers. Its MS-DOS provided a standardized platform for software development and was compatible with the IBM PC and other microcomputers. The operating system helped establish Microsoft as a dominant player in the software industry.

Commodore International, a prominent computer manufacturer in the 1980s, released the Commodore 64 in 1982. It was a successful microcomputer that built upon the concepts of the Datapoint 2200 and other early machines. The Commodore 64 featured an integrated keyboard, color graphics, and sound capabilities, making it a popular choice for gaming and home computing.

Xerox made significant contributions to the advancement of computing interfaces. Its Alto, developed in 1973, introduced the concept of a graphical user interface, with windows, icons, and a mouse for interaction. Although the Alto was not a commercial success, its influence was substantial, and it helped lay the groundwork for GUI-based systems including the Macintosh and Microsoft Windows.

The Datapoint 2200 deserves to be remembered for its contributions to computer history.

The San Antonio Museum of Science and Technology possesses a collection of Datapoint computers, including the original prototypes. The museum also houses a library of archival materials about the machine.

This article has been updated from an earlier version.


Science Fiction Short: Hijack

24 February 2024, 17:00




Computers have grown more and more powerful over the decades by pushing the limits of how small their electronics can get. But just how big can a computer get? Could we turn a planet into a computer, and if so, what would we do with it?

In considering such questions, we go beyond normal technological projections and into the realm of outright speculation. So IEEE Spectrum is making one of its occasional forays into science fiction, with a short story by Karl Schroeder about the unexpected outcomes from building a computer out of planet Mercury. Because we’re going much farther into the future than a typical Spectrum article does, we’ve contextualized and annotated Schroeder’s story to show how it’s still grounded in real science and technology. This isn’t the first work of fiction to consider such possibilities. In “The Hitchhiker’s Guide to the Galaxy,” Douglas Adams famously imagined a world constructed to serve as a processor.

Real-world scientists are also intrigued by the idea. Jason Wright, director of the Penn State Extraterrestrial Intelligence Center, has given serious thought to how large a computer can get. A planet-scale computer, he notes, might feature in the search for extraterrestrial intelligence. “In SETI, we try to look for generic things any civilization might do, and computation feels pretty generic,” Wright says. “If that’s true, then someone’s got the biggest computer, and it’s interesting to think about how big it could be, and what limits they might hit.”

There are, of course, physical constraints on very large computers. For instance, a planet-scale computer probably could not consist of a solid ball like Earth. “It would just get too hot,” Wright says. Any computation generates waste heat. Today’s microchips and data centers “face huge problems with heat management.”

In addition, if too much of a planet-scale computer’s mass is concentrated in one place, “it could implode under its own weight,” says Anders Sandberg, a senior research fellow at the University of Oxford’s Future of Humanity Institute. “There are materials stronger than steel, but molecular bonds have a limit.”

Instead, creating a computer from a planet will likely involve spreading out a world’s worth of mass. This strategy would also make it easier to harvest solar energy. Rather than building a single object that would be subject to all kinds of mechanical stresses, it would be better to break the computer up into a globular flotilla of nodes, known as a Dyson swarm.

What uses might a planet-scale computer have? Hosting virtual realities for uploaded minds is one possibility, Sandberg notes. Quantum simulation of ecosystems is another, says Seth Lloyd, a quantum physicist at MIT.


Which brings us to our story…


An illustration of two men sitting on chairs looking out a window at space.




Simon Okoro settled into a lawn chair in the Heaven runtime and watched as worlds were born.

“I suppose I should feel honored you chose to watch this with me,” said Martin as he sat down next to Simon. “Considering that you don’t believe I exist.”

“Can’t we just share a moment? It’s been years since we did anything together. And you worked toward this moment too. You deserve some recognition.”

A


Uploading is a hypothetical process in which brain scanning can help create emulations of human minds in computers. A large enough computer could potentially house a civilization. These uploads could then go on to live in computer-simulated virtual realities.


B

Illustration: Chris Philpot

A typical satellite must orbit around a celestial object at a speed above a critical value to avoid being pulled into the surface of the object by gravity. A statite, a hypothetical form of satellite patented by physicist Robert L. Forward, uses a solar sail to help it hover above a star or planet, using radiation pressure from sunlight to balance the force of gravity.
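
Since gravity and radiation pressure both fall off as the square of distance from the sun, the two forces cancel out of the hover condition entirely: whether a statite can float depends only on its mass per unit of sail area. Here is a minimal Python sketch of that balance, assuming a perfectly reflecting sail and standard solar constants; it illustrates the principle rather than any particular design.

```python
# Minimal sketch: the hover condition for a statite riding on light pressure.
# Gravity and radiation pressure both scale as 1/r^2, so r cancels and the
# balance depends only on the sail's areal density (mass per square meter).

import math

L_SUN = 3.828e26   # solar luminosity, W
M_SUN = 1.989e30   # solar mass, kg
G     = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C     = 2.998e8    # speed of light, m/s

# Perfectly reflecting sail of area A at distance r:
#   F_rad  = 2 * L_SUN * A / (4 * pi * r^2 * c)
#   F_grav = G * M_SUN * (sigma * A) / r^2
# Setting F_rad = F_grav, both A and r^2 cancel, leaving:
sigma_crit = 2 * L_SUN / (4 * math.pi * G * M_SUN * C)

print(f"critical areal density: {sigma_crit * 1e3:.2f} g/m^2")
# ~1.5 g/m^2: any statite lighter than this per square meter of sail
# can hover at any distance from the sun.
```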


“Ah. They sent you to acknowledge the Uploaded, is that it?” Martin turned his long, sad-eyed face to the sky and the drama playing out above. A The Heaven runtime was a fully virtual world, so Simon had converted the sky into a vast screen on which to project what was happening in the real world. The magnified surface of the sun made a curving arc from horizon to horizon. Jets and coronas rippled over it, and high, high above its incandescent surface hung thousands of solar statites shaped like mirrored flowers B.


They did not orbit, instead floating over a particular spot by light pressure alone. They formed a diffuse cloud, dwindling to invisibility before reaching the horizon. This telescope view showed the closest statite cores scattering fiery specks like spores into the overwhelming light. The specks blazed with light and shot away from the sun, accelerating.

This moment was the pinnacle of Simon’s career, the apex of his life’s work. Each of those specks was a solar sail C, kilometers wide, carrying a terraforming package D. Launched so close to the sun and supplemented with lasers powered by the statites, they would be traveling at 20 percent light speed by the time they left the solar system. At their destinations, they’d sundive and then deliver terraforming seeds to lifeless planets around the nearest stars.

C


Illustration: Chris Philpot

Light has no mass, but it can exert pressure as photons exchange momentum with a surface as they reflect off it. A mirror that is thin and reflective enough can therefore serve as a solar sail, harnessing sunlight to generate thrust. In 2010, Japan’s Ikaros probe to Venus demonstrated the use of a solar sail for interplanetary travel for the first time. Because solar pressure is measured in micronewtons per square meter, solar sails must have large areas relative to their payloads, although the pressure from sunlight can be augmented with a laser beam for propulsion.
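
For a feel of the numbers, the sketch below computes the pressure on an ideal reflective sail and the resulting thrust; the sail area and payload mass are illustrative assumptions, not figures from the story.

```python
# Minimal sketch: radiation pressure on a perfectly reflecting solar sail.
# Reflected photons transfer twice their momentum, so pressure P = 2 * S / c,
# where S is the local solar flux in W/m^2.

SOLAR_FLUX_EARTH = 1361.0  # W/m^2 at 1 astronomical unit
C = 2.998e8                # speed of light, m/s

def sail_pressure(flux_w_per_m2: float) -> float:
    """Pressure (N/m^2) on an ideal reflective sail at normal incidence."""
    return 2 * flux_w_per_m2 / C

p = sail_pressure(SOLAR_FLUX_EARTH)
print(f"pressure at 1 AU: {p * 1e6:.1f} micronewtons per square meter")  # ~9.1

# An illustrative sail one kilometer on a side, pushing a 100 kg payload:
area_m2, payload_kg = 1e6, 100.0
thrust = p * area_m2
print(f"thrust: {thrust:.1f} N -> acceleration: {thrust / payload_kg:.2f} m/s^2")
```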


D

Terraforming is the hypothetical act of transforming a planet so as to resemble Earth, or at least make it suitable for life. Some terraforming proposals involve first seeding the planet with single-celled organisms that alter conditions to be more hospitable to multicellular life. This process would mimic the naturally occurring transformation of Earth that started about 2.3 billion years ago, when photosynthetic cyanobacteria created the oxygen-rich atmosphere we breathe today.


“So life takes hold in the galaxy,” said Simon. These were the first words of a speech he’d written and rehearsed long ago. He’d dreamed of saying them on a podium, with Martin standing with him. But Martin...well, Martin had been dead for 20 years now.

He remembered the rest of the speech, but there was no point in giving it when he was absolutely alone.

Martin sighed. “So this is all you’re going to do with my Heaven? A little gardening? And then what? An orderly shutdown of the Heaven runtime? Sell off the Paradise processor as scrap?”


“I knew this was a bad idea.” Simon raised his hand to exit the virtual world, but Martin quickly stood, looking sorry.

“It’s just hard,” Martin said. “Paradise was supposed to be the great project to unite humanity. Our triumph over death! Why did you let them hijack it for this?”

Simon watched the spores catch the light and flash away into interstellar space. “You know we won’t shut you down. Heaven will be kept running as long as Paradise exists. We built it together, Martin, and I’m proud of what we did.”

E


In a 2013 study, Sandberg and his colleague Stuart Armstrong suggested deploying automated self-replicating robots on Mercury to build a Dyson swarm. These robots would dismantle the planet to construct not only more of themselves but also the sunlight collectors making up the swarm. The more solar plants these robots built, the more energy they would have to mine Mercury and produce machines. Given this feedback loop, Sandberg and Armstrong argued, these robots could disassemble Mercury in a matter of decades. The solar plants making up this Dyson swarm could double as computers.
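
The "matter of decades" conclusion is the signature of exponential growth: each generation of machines builds the next, so almost all of the work happens in the last few doublings. A minimal sketch of that arithmetic, where the seed mass and doubling time are assumptions chosen for illustration, not numbers from the study:

```python
# Minimal sketch of the self-replication feedback loop: machine capacity
# doubles each replication cycle, so disassembling a planet takes only
# tens of doublings. Seed mass and cycle time are illustrative assumptions.

import math

MERCURY_MASS = 3.30e23  # kg
seed_mass    = 1.0e6    # kg of initial machinery (assumed)
cycle_years  = 1.0      # years per doubling of the machine fleet (assumed)

doublings = math.log2(MERCURY_MASS / seed_mass)
print(f"doublings needed: {doublings:.1f}")                        # ~58
print(f"time to process Mercury: ~{doublings * cycle_years:.0f} years")
# Note that half the planet is consumed in the final doubling alone.
```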


F

Solar power is far more abundant at Mercury’s orbit than at Earth’s, because solar flux falls off with the square of distance from the sun. At its orbital distance of 1 astronomical unit, Earth receives about 1.4 kilowatts per square meter from sunlight. Mercury receives between 6.2 and 14.4 kW/m². The range reflects Mercury’s high eccentricity: it has the most elliptical orbit of all the planets in the solar system.
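
Those insolation figures drop straight out of the inverse-square law, as this short sketch shows using Mercury’s approximate perihelion (0.307 AU) and aphelion (0.467 AU) distances:

```python
# Minimal sketch: solar flux falls off as the square of distance, so
# Mercury's eccentric orbit spans a wide range of insolation. This
# reproduces the 6.2-14.4 kW/m^2 figures quoted above.

FLUX_1AU = 1361.0  # W/m^2, the solar constant at Earth's distance

def solar_flux(distance_au: float) -> float:
    """Solar flux in W/m^2 at the given distance from the sun."""
    return FLUX_1AU / distance_au**2

for label, r_au in [("Mercury perihelion", 0.307),
                    ("Mercury aphelion",   0.467),
                    ("Earth",              1.000)]:
    print(f"{label:>18}: {solar_flux(r_au) / 1e3:5.1f} kW/m^2")
```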


G

Whereas classical computers switch transistors on and off to symbolize data as either 1s or 0s, quantum computers use quantum bits, or qubits, which can exist in a state where they are both 1 and 0 at the same time. This essentially lets each qubit perform two calculations at once. As more qubits are added to a quantum computer, its computational power grows exponentially.
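
One concrete way to see that exponential growth: simulating an n-qubit register on a classical machine means tracking 2ⁿ complex amplitudes. A minimal NumPy sketch of the bookkeeping, not a quantum algorithm:

```python
# Minimal sketch: the classical cost of describing an n-qubit state grows
# as 2**n, which is the sense in which quantum state space is exponential.

import numpy as np

def uniform_superposition(n_qubits: int) -> np.ndarray:
    """State vector with all 2**n basis states equally weighted."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

for n in (1, 10, 30):
    print(f"{n:2d} qubits -> {2**n:,} amplitudes to track classically")

print(uniform_superposition(3))  # eight amplitudes, |000> through |111>
```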


The effort had been mind-bogglingly huge. They’d been able to do it only because millions of people believed that in dismantling Mercury E and turning it into a sun-powered F quantum computer G there would be enough computing power for every living person to upload their consciousness into it. The goal had been to achieve eternal life in a virtual afterlife: the Heaven runtime.

Simon knit his hands together, lowering his eyes to the virtual garden. “Science happened, Martin. How were we to know Enactivism H would answer the ‘hard problem’ of consciousness? You and I had barely even heard of extended consciousness when we proposed Heaven. It was an old idea from cognitive science. Nobody was even studying it anymore except a few AIs, and we were sucking up all the resources they might have used to experiment.” He glanced ruefully at Martin. “We were all blindsided when they proved it. Consciousness can’t be just abstracted from a brain.”

Martin’s response was quick; this was an old argument between them. “Nothing’s ever completely proven in science! There’s always room for doubt—but you agreed with those AIs when they said that simulated consciousness can’t have subjective experiences. Conveniently after I died but before I got rebooted here. I wasn’t here to fight you.”

Martin snorted. “And now you think I’m a zimboe I: a mindless simulation of the old Martin so accurate that I act exactly how he would if you told him he wasn’t self-aware. I deny it! Of course I do, like everyone else from that first wave of uploads.” He gestured, and throughout the simulated mountain valley, thousands of other human figures were briefly highlighted. “But what did it matter what I said, once I was in here? You’d already repurposed Paradise from humanity’s chance at immortality to just a simulator, using it to mimic billions of years of evolution on alien planets. All for this ridiculous scheme to plant ready-made, complete biospheres on them in advance of human colonization.” J

H


Enactivism was first mooted in the 1990s. In a nutshell, it explains the mind as emerging from a brain’s dynamic interactions with the larger world. Thus, there can be no such thing as a purely abstract consciousness, completely distinct from the world it is embedded in.


I

A “philosophical zombie” is a putative entity that behaves externally exactly like a being with consciousness but has no self-awareness, no “I”: It is a pure automaton, even though it might itself say otherwise.


J

Illustration: Chris Philpot

Living organisms are tremendously complex systems. This diagram shows just the core metabolic pathways for an organism known as JCVI-SYN3A. Each red dot represents a different biomolecule, and the arrows indicate the directions in which chemical reactions can proceed.

JCVI-SYN3A is a synthetic life-form, a cell genetically engineered to have the simplest possible biology. Yet even its metabolism is difficult to simulate accurately with current computational resources. When Nobel laureate Richard Feynman first proposed the idea of quantum computers, he envisioned them modeling quantum systems such as molecules. One could imagine that a powerful enough quantum computer could go on to model cells, organisms, and ecosystems, Lloyd says.


“We’d already played God with the inner solar system,” Simon reminded him. “The only way we could justify that after the Enactivism results was to find an even higher purpose than you and I started out with.

“Martin, I’m sorry you died before we discovered the truth. I fought to keep this subsystem running our original Heaven sim, because you’re right—there’s always a chance that the Enactivists are wrong. However slim.”

Martin snorted again. “I appreciate that. But things got very, very weird during your Enactivist rebellion. If I didn’t know better, I’d call this project”—he nodded at the sky—“the weirdest thing of all. Things are about to heat up now, though, aren’t they?”

“This was a mistake.” Simon sighed and flipped out of the virtual world. Let the simulated Martin rage in his artificial heaven; the science was unequivocal. In truth, Simon had been speaking only to himself for the entire conversation.

He stood now in the real world near the podium in a giant stadium, inside a wheel-shaped habitat 200 kilometers across. Hundreds of similar mini-ringworlds were spaced around the rim of Paradise.


An illustration of a person standing at a podium and looking out onto a stadium.


Paradise itself was a vast bowl-shaped object, more cloud than material, orbiting closer to the sun than Mercury had. Self-reproducing machines had eaten that planet in a matter of decades, transforming its usable elements into a solar-powered quantum computer tens of thousands of kilometers across. The bowl cupped a spherical cloud of iron that acted as a radiator for the waste heat emitted by Paradise’s quadrillions of computing modules. K

K


One design for planetary scale—and up!—computers is a Matrioshka brain. Proposed in 1997 by Robert Bradbury, it would consist of nested structures, like its namesake Russian doll. The outer layers would use the waste heat of the inner layers to power their computations, with the aim of making use of every bit of energy for processing. However, in a 2023 study, Wright suggests that this nested design may be unnecessary. “If you have multiple layers, shadows from the inner elements of the swarm, as well as collisions, could decrease efficiency,” he says. “The optimal design is likely the smallest possible sphere you can build given the mass you have.”


L

How much computation might a planet-size machine carry out? Earth has a mass of nearly 6 × 10²⁴ kilograms. In a 2000 paper, Lloyd calculated that 1 kilogram of matter in 1 liter could support a maximum of roughly 5.4 × 10⁵⁰ logical operations per second. However, at that rate, Lloyd noted, it would be operating at a temperature of 10⁹ kelvins, resembling a small piece of the big bang.
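
Lloyd’s figure follows from the Margolus–Levitin theorem, which caps the rate of logical operations at 2E/(πħ) for a system with energy E. Taking E = mc² for 1 kilogram reproduces the number quoted above; a short sketch:

```python
# Minimal sketch of Lloyd's ultimate computing bound: the Margolus-Levitin
# theorem limits logical operations per second to 2E / (pi * hbar), and a
# kilogram of matter holds E = m * c^2 of energy.

import math

HBAR = 1.0546e-34  # reduced Planck constant, J*s
C    = 2.998e8     # speed of light, m/s

energy = 1.0 * C**2                        # joules in 1 kg of mass-energy
ops_per_second = 2 * energy / (math.pi * HBAR)

print(f"max ops/s for 1 kg: {ops_per_second:.1e}")  # ~5.4e50
# Scaling to Earth's ~6e24 kg multiplies this accordingly -- but running
# at the limit implies the big-bang temperatures Lloyd warns about.
```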


M

Top to bottom: Proxima Centauri b, Ross 128 b, GJ 1061 d, GJ 1061 c, Luyten b, Teegarden’s Star b, Teegarden’s Star c, Wolf 1061c, GJ 1002 b, GJ 1002 c, Gliese 229 Ac, Gliese 625 b, Gliese 667 Cc, Gliese 514 b, Gliese 433 d

The potentially habitable planets listed above have all been identified within 30 light-years of Earth. Another 16 or so are within 100 light-years, with likely more yet to be identified. Many of them have masses considerably greater than Earth’s, indicating very different environmental conditions from those under which terrestrial organisms evolved.


The leaders of the terraforming project were on stage, taking their bows. The thousands of launches happening today were the culmination of decades of work: evolution on fast-forward, ecosystem after ecosystem, with DNA and seed designs for millions of new species fitted to thousands of worlds L.

It had to be done. Humans had never found another inhabited planet. That fact made life the most precious thing in the universe, and spreading it throughout the galaxy seemed a better ambition for humanity than building a false heaven. M

Simon had reluctantly come to accept this. Martin was right, though. Things had gotten weird. Paradise was such a good simulator that you could ask it to devise a machine to do X, and it would evolve its design in seconds. Solutions found through diffusion and selection were superior to algorithmically or human-designed ones, but it was rare that they could be reverse-engineered or their working principles even understood. And Paradise had computing power to spare, so in recent years, human and AI designers across the solar system had been idled as Paradise replaced their function. This, it was said, was the Technological Maximum; it was impossible for any civilization to attain a level of technological advancement beyond the point where any possible system could be instantly evolved.

Simon walked to where he could look past the open roof of the stadium to the dark azure sky. The vast sweep of the ring rose before and behind; in its center, a vast canted mirror reflected sunlight; to the left of that, he could see the milky white surface of the Paradise bowl. Usually, to the right, there was only blackness.

Today, he could see a sullen red glow. That would be Paradise’s radiator, expelling heat from the calculation of all those alien ecosystems. Except...

He found a quiet spot and sat, then reentered the Heaven simulation. Martin was still there, gazing at the sky.

Simon sat beside him. “What did you mean when you said things are heating up?”

Martin’s grin was slow and satisfied. “So you noticed.”

“Paradise isn’t supposed to be doing anything right now. All the terraforming packages were completed and copied to the sails—most of them years ago. Now that they’re on their way, Paradise doesn’t have any duties, except maybe evolving better luxury yachts.”

Martin nodded. “Sure. And is it doing anything?”

Simon still had read-access to Paradise’s diagnostics systems. He summoned a board that showed what the planet-size computing system was doing.

Nothing. It was nearly idle.

“If the system is idle, why is the radiator approaching its working limit?”

Martin crossed his arms, grinning. Damn it, he was enjoying this! Or the real Martin would be enjoying it, if he were here.

“You remember when the first evolved machines started pouring out of the printers?” Martin said. “Each one was unique; each grown for one owner, one purpose, one place. You said they looked alien, and I laughed and said, ‘How would we even know if an alien invasion was happening, if no two things look or work the same anymore?’ ”

“That’s when it started getting weird,” admitted Simon. “Weirder, I mean, than building an artificial heaven by dismantling Mercury…” But Martin wasn’t laughing at his feeble joke. He was shaking his head.

N


Illustration: Chris Philpot

In astrodynamics, unless an object is actively generating thrust, its trajectory will take the form of a conic section—that is, a circle, ellipse, parabola, or hyperbola. Even relatively few observations of an object anywhere along its trajectory can distinguish between these forms, with gravitationally bound objects following circular or elliptical trajectories. Objects on parabolic or hyperbolic trajectories, by contrast, are unbound. Therefore, any object found to be moving along a hyperbola relative to the sun must have come from interstellar space. This is how, in 2017, astronomers identified ‘Oumuamua, a cigar-shaped object, as the first known interstellar visitor. It’s been estimated that each year, about seven interstellar objects pass through the inner solar system.
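
Operationally, the bound/unbound distinction comes down to the sign of an object’s specific orbital energy, ε = v²/2 − μ/r. A minimal sketch with illustrative numbers, not ‘Oumuamua’s actual state vector:

```python
# Minimal sketch: classify a heliocentric trajectory from one position and
# speed via specific orbital energy e = v^2/2 - mu/r. Negative energy means
# bound (circle/ellipse); positive means hyperbolic -- an interstellar visitor.

MU_SUN = 1.327e20  # sun's gravitational parameter G*M, m^3/s^2
AU     = 1.496e11  # meters

def classify(r_m: float, v_m_s: float) -> str:
    energy = v_m_s**2 / 2 - MU_SUN / r_m
    return "bound (circle or ellipse)" if energy < 0 else "unbound: interstellar"

print(classify(1.0 * AU, 29.8e3))  # Earth-like speed at 1 AU -> bound
print(classify(1.0 * AU, 50.0e3))  # beats ~42 km/s escape speed -> unbound
```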


“No, that’s not when it got weird. It got weird when the telescopes we evolved to monitor the construction of Paradise noticed just how many objects pass through the solar system every year.”

“Interstellar wanderers? They’re just extrasolar comets,” said Simon. “You said yourself that rocks from other star systems must pass through ours all the time.” N

“Yes. But what I didn’t get to tell you—because I died—was that while we were building Paradise, several objects drifted from interstellar space into one side of the Paradise construction orbits...and didn’t come out the other side.”

Simon blinked. “Something arrived...and didn’t leave? Wouldn’t it have been eaten by the recycling planetoids?”

“You’d think. But there’s no record of it.”

“But what does this have to do with the radiator?”

Martin reached up and flicked through a few skies until he came to a view of the spherical iron cloud in the bowl of Paradise. “Remember why we even have a radiator?”

“Because there’s always excess energy left over from making a calculation. If it can’t be used for further calculations down the line, it’s literally meaningless; it has to be discarded.”

“Right. We designed Paradise in layers, so each layer would scavenge the waste from the previous one—optical computing on the sunward-facing skin, electronics further in. But inevitably, we ran out of architectures that could scavenge the excess; at some point there is always an excess that is meaningless to the computing architecture. So we built Paradise in the shape of a bowl, where all that extra heat would be absorbed by the iron cloud in its center. We couldn’t use that iron for transistors. The leftovers of Mercury were mostly a junk pile—but one we could use as a radiator.”

“But the radiator’s shedding heat like crazy! Where’s that coming from?” asked Simon.

“Let’s zoom in.” Martin put two fingers against the sky and pulled them apart. Whatever telescope he was linked to zoomed crazily; it felt like the whole world was getting yanked into the radiator. Simon was used to virtual worlds, so he just planted his feet and let the dizzying motion wash over him.

The radiator cloud filled the sky, at first just a dull red mist. But gradually Simon began to see structure to it: giant cells far brighter than the material around them. “Those look like...energy storage. Heat batteries. As if the radiator’s been storing some of the power coming through it. But why—”


An illustration of of a planet disappearing and showing machinery underneath.


Alerts from the real world suddenly blossomed in his visual field. He popped out of Martin’s virtual garden and into a confused roar inside the stadium.

The holographic image that filled the central space of the stadium showed the statite launchers hovering over the sun. One by one, they were folding in on themselves, falling silently into the incinerating heat below. The crowd was on its feet, people shouting in shock and fear. Now that the launchers had sent the terraforming systems, they were supposed to propel ships of colonists heading for the newly greened worlds. There were no more inner-solar-system resources left to build more.

O


Illustration: Chris Philpot

“Mechanical computer” brings to mind the rotating cogwheels of Charles Babbage’s 19th-century Difference Engine, but other approaches exist. Here we show the heart of a logic gate made with moving rods. The green input rods can slide back and forth as desired, with a true value indicated by placing the rod into its forward position and false indicated by moving the rod into its back position. The blue output rod is blocked from advancing to its true position unless both input rods are set to true, so this represents an AND gate. Rod logic has been proposed as a mechanism for controlling nanotech-scale robots.

In space, one problem that a mechanical computer could face is a phenomenon called cold welding. That occurs when two flat, clean pieces of metal come in contact, and they fuse together. Cold welding is not usually seen in everyday life on Earth because metals are often coated in layers of oxides and other contaminants that keep them from fusing. But it has led to problems in space (cold welding has been implicated in the deployment failure of the main antenna of the Galileo probe to Jupiter, for example). Some of the oxygen or other elements found in a rocky world would have to be used in the coatings for components in an iron or other metal-based mechanical computer.
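
The gate’s logic reduces to a simple blocking rule, which the toy model below captures as a truth table; it is a sketch of the behavior described above, not a mechanical simulation.

```python
# Minimal sketch of the rod-logic AND gate: the output rod can advance to
# its 'true' position only when both input rods have been slid forward.

def rod_and_gate(a_forward: bool, b_forward: bool) -> bool:
    """True if the output rod is unblocked, i.e. both input rods are forward."""
    blocked = (not a_forward) or (not b_forward)
    return not blocked

for a in (False, True):
    for b in (False, True):
        print(f"A={a!s:5} B={b!s:5} -> output advances: {rod_and_gate(a, b)}")
```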


Simon jumped back into VR. Martin was standing calmly in the garden, smiling at the intricate depths of the red-hot radiator that filled the sky. Simon followed his gaze and saw...

“Gears?” The radiator was a cloud, but only now was it revealing itself to be a cloud of clockwork elements that, when thermal energy brought them together, spontaneously assembled into more complex arrangements. And those were spinning and meshing in an intricate dance that stretched away into amber depths in all directions. O

“It’s a dissipative system,” said Martin. “Sure, it radiates the heat our quantum computers can no longer use. But along the way, it’s using that energy to power an entirely different kind of computer. A Babbage engine the size of the moon.”

“But, Martin, the launchers—they’re all collapsing.”

Martin nodded. “Makes sense. The launchers accomplished their mission. Now they don’t want us following the seeds.”

“Not follow them? What do you mean?” An uneasy thought came to Simon; he tried to avoid it, but there was only one way this all made sense. “If the radiator was built to compute something, it must have been built with a way to output the result. This ‘they’ you’re talking about added a transmitter to the radiator. Then the radiator sent a virus or worm to the statites. The worm includes the radiator’s output. It hacked the statites’ security, and now that the seeds are in flight, it’s overwriting their code.”

Martin nodded.

“But why?” asked Simon.

Again, the answer was clear; Simon just didn’t want to admit it to himself. Martin waited patiently to hear Simon say it.

“They gave the terraformers new instructions.”

Martin nodded. “Think about it, Simon! We designed Paradise as a quantum computer that would be provably secure. We made it impossible to infect, and it is. Whatever arrived while we were building it didn’t bother to mess with it, where our attention was. It just built its own system where we wouldn’t even think to look. Made out of and using our garbage. Probably modified the maintenance robots tending the radiator into making radical changes.

“And what’s it been doing? I should think that was obvious. It’s been designing terraforming systems for the exoplanets, just like you have, but to make them habitable for an entirely different kind of colonist.”

Simon looked aghast at Martin. “And you knew?”

“Well.” Martin slouched, looked askance at Simon. “Not the details, until just now. But listen: You abandoned us—all who died and were uploaded before the Enactivist experiments ‘proved’ we aren’t real. All us zimboes, trapped here now for eternity. Even if I’m just a simulation of your friend Martin, how do you think he’d feel in this situation? He’d feel betrayed. Maybe he couldn’t escape this virtual purgatory, but if he knew something that you didn’t—that humanity’s new grand project had been hijacked by a virus from somewhere else—why would he tell you?”

No longer hiding his anger, Martin came up to Simon and jabbed a virtual finger at his chest. “Why would I tell you when I could just stand back and watch all of this unfold?” He spread his arms, as if to embrace the clockwork sky, and laughed.

On thousands of sterile exoplanets, throughout all the vast sphere of stars within a hundred light-years of the sun, life was about to blossom—life, or something else. Whatever it would be, humanity would never be welcome on those worlds. “If they had any interest in talking to us, they would have, wouldn’t they?” sighed Simon.

“I guess you’re not real to them, Simon. I wonder, how does that feel?”

Martin was still talking as Simon exited the virtual heaven where his best friend was trapped, and he knew he would never go back. Still, ringing in his ears as the stadium of confused, shouting people rose up around him were Martin’s last, vicious words:

“How does it feel to be left behind, Simon?

“How does it feel?”


Illustration of planets, a star, and ring-shaped habitats floating in space.

Story by KARL SCHROEDER

Annotations by CHARLES Q. CHOI

Illustrations by ANDREW ARCHER

Edited by STEPHEN CASS



This article appears in the March 2024 print issue.
