Listen to These Photographs of Sparkling Galaxies

Most celestial objects—from stars and nebulas to quasars and galaxies—emit light at a range of wavelengths. Some of those wavelengths fall in the visible range, which is how astronomers are able to photograph these objects with space telescopes like Hubble. But the James Webb Space Telescope and the Chandra X-ray Observatory peer at heavenly objects in infrared and x-ray wavelengths that are invisible to the human eye. That data is often translated into visible colors to produce spectacular space images. Now, a group of astronomers is making those images accessible to a wider audience that includes visually impaired people—by turning the data into almost musical sequences of sounds.

“If you only make a visual of a Chandra image or another NASA image, you can be leaving people behind,” says Kim Arcand, a visualization scientist who collaborates with a small, independent group of astronomers and musicians on a science and art project called SYSTEM Sounds. Arcand, who describes herself as a former choir and band geek, is also the emerging tech lead for NASA’s Chandra observatory. Until a few years ago, that role meant activities like adding sound to virtual- and augmented-reality science outreach programs. Then, along with a few others who became the SYSTEM Sounds group, Arcand began converting x-ray data into audio. “We have had such a positive response from people, both sighted and blind or low vision, that it’s the project that keeps on giving,” she says. Today, the group also works with NASA’s Universe of Learning, a program that provides science education resources.

Visual images from the JWST or Chandra instruments are artificial, in a sense, because they use false colors to represent invisible frequencies. (If you actually traveled to these deep-space locations, they’d look different.) Similarly, Arcand and the SYSTEM Sounds team translate image data at infrared and x-ray wavelengths into sounds, rather than into optical colors. They call these “sonifications,” and they are meant to offer a new way to experience cosmic phenomena, like the birth of stars or the interactions between galaxies.

Translating a 2D image into sounds starts with the image’s individual pixels. Each can contain several kinds of data—like x-ray frequencies from Chandra and infrared frequencies from Webb. These can then be mapped onto sound frequencies. Anyone—even a computer program—can make a 1-to-1 conversion between pixels and simple beeps and boops. “But when you’re trying to tell a scientific story of the object,” Arcand says, “music can help tell that story.”
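
To make the idea concrete, here is a minimal Python sketch of that kind of 1-to-1 conversion. It is purely illustrative, not the SYSTEM Sounds pipeline: it scans a grayscale image left to right and turns each column into a short sine tone, pitched by where the brightest pixel sits and scaled in volume by how bright that pixel is.

```python
# Illustrative only: a bare-bones pixel-to-tone mapping, assuming a
# grayscale image stored as a 2D NumPy array. Each column becomes one
# sine tone; pixels nearer the top of the image sound higher.
import numpy as np
from scipy.io import wavfile

def sonify_columns(image, seconds_per_column=0.05, sample_rate=44100,
                   f_min=200.0, f_max=2000.0):
    """Scan an image left to right, emitting one sine tone per column."""
    t = np.linspace(0.0, seconds_per_column,
                    int(sample_rate * seconds_per_column), endpoint=False)
    tones = []
    for column in image.T:                      # one column per time step
        row = int(np.argmax(column))            # brightest pixel in the column
        pitch = f_max - (f_max - f_min) * row / max(len(column) - 1, 1)
        loudness = column.max() / max(image.max(), 1)
        tones.append(loudness * np.sin(2 * np.pi * pitch * t))
    return np.concatenate(tones)

# A diagonal streak of "stars" produces a rising scale.
demo = np.zeros((64, 64))
demo[np.arange(64)[::-1], np.arange(64)] = 255
audio = sonify_columns(demo)
wavfile.write("sonification_demo.wav", 44100, (audio * 32767).astype(np.int16))
```

Running this writes a short WAV file in which the diagonal streak is audible as a rising series of tones.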

That’s where Matt Russo, an astrophysicist and musician, comes in. He and his colleagues pick a particular image and then feed the data into sound-editing software that they’ve written in Python. (It works a bit like GarageBand.) Like cosmic conductors, they have to make musical choices: which instruments represent particular wavelengths (an oboe or flute, say, for the near-infrared or mid-infrared), which objects to draw the listener’s attention to, in which order, and at what speed—similar to panning across a landscape.
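
The orchestration step might look something like the hypothetical sketch below, which is not the team’s actual software: it uses the open-source pretty_midi library to give each wavelength band its own General MIDI instrument and lays the notes out left to right, like panning across the image.

```python
# Hypothetical sketch, not SYSTEM Sounds' code: assign one General MIDI
# instrument to each wavelength band and scan each band's image left to
# right, one note per column, pitched by the brightest pixel's position.
import numpy as np
import pretty_midi

def score_bands(bands, seconds_per_column=0.1):
    """bands maps an instrument name (e.g. 'Oboe') to a 2D image array."""
    song = pretty_midi.PrettyMIDI()
    for instrument_name, image in bands.items():
        program = pretty_midi.instrument_name_to_program(instrument_name)
        track = pretty_midi.Instrument(program=program)
        for col, column in enumerate(image.T):       # scan left to right
            row = int(np.argmax(column))             # brightest pixel
            pitch = 96 - int(48 * row / max(image.shape[0] - 1, 1))
            velocity = int(127 * column.max() / max(image.max(), 1))
            start = col * seconds_per_column
            track.notes.append(pretty_midi.Note(
                velocity=velocity, pitch=pitch,
                start=start, end=start + seconds_per_column))
        song.instruments.append(track)
    return song

# Hypothetical usage: near-infrared played by a flute, mid-infrared by an oboe.
near_ir = np.random.rand(32, 32)
mid_ir = np.random.rand(32, 32)
score_bands({'Flute': near_ir, 'Oboe': mid_ir}).write('sonification.mid')
```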

They lead the listener through the image by focusing attention on one object at a time, or a selected group, so that they can be distinguished from other things in the frame. “You can’t represent everything that’s in the image through sound,” Russo says. “You have to accentuate the things that are most important.” For example, they might highlight a particular galaxy within a cluster, a spiral galaxy’s arm unfurling, or a bright star exploding. They also try to differentiate between a scene’s foreground and background: A bright Milky Way star might set off a crash cymbal, while the light from distant galaxies would trigger more muted notes.
