Harvard scientist turns space images into music
Team uses data from space telescopes to create music.
Think of it as cosmic synesthesia. Listen to a picture of the galaxy: sounds that jingle and chime, bellow and creak, in a mix that sounds at times vaguely New Age, at others like John Cage performing amid a forest nor’easter.
In a new project to make images of space more accessible, Kimberly Kowal Arcand, a visualization researcher from the Center for Astrophysics | Harvard & Smithsonian, and a team of scientists and sound engineers worked with NASA to turn images of the cosmos into music. The team uses a technique called data sonification, which takes the data captured by space telescopes and translates it into sound.
Already, the project has turned some of the most famous deep-space images, like the center of the Milky Way and the “Pillars of Creation,” into music. The main goal is to make the visuals more accessible to people who are blind or visually impaired, said Arcand, who’s made accessibility a focus of her career. She works on virtual reality and 3D simulations that can help the visually impaired experience space through other senses, such as touch. She’s found that the work also helps sighted people experience the images in new ways.
“When you help make something accessible for a community, you can help make it accessible for other communities as well. That kind of more rounded thought into the design of the materials and the intended outcomes for audiences can have very positive benefits,” Arcand said.
The audio interpretations are created by translating the data from the telescopes into notes and rhythms. In some cases, the data are assigned specific musical instruments, like piano, violin, and bells. The melodies are harmonized to produce something cohesive and enjoyable. The different volumes, pitches, and drones are based on factors such as brightness, distance, and what the image depicts. Brighter stars, for instance, get more intense notes, and the sound depicting an exploding star grows loud, then slowly calms as the blast wave diminishes.
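The general idea can be illustrated with a short sketch. This is not the team’s actual pipeline; the brightness values and the mapping rules (pitch range, volume range) are hypothetical, chosen only to show how brighter features could be rendered as higher, louder notes.

```python
# Illustrative sketch of data sonification: map brightness samples
# from an image scan to notes. The values and mapping choices here
# are hypothetical, not the project's actual method.

def sonify(brightness, low_note=48, high_note=84):
    """Map each brightness sample (0.0-1.0) to a MIDI-style pitch
    and volume, so brighter features sound higher and louder."""
    span = high_note - low_note
    notes = []
    for b in brightness:
        pitch = low_note + round(b * span)   # brighter -> higher pitch
        volume = int(40 + b * 87)            # brighter -> louder (40-127)
        notes.append((pitch, volume))
    return notes

# A fading blast wave: bright at the center, dimming outward,
# so the notes start loud and high, then calm as the wave diminishes.
samples = [1.0, 0.8, 0.6, 0.4, 0.2, 0.1]
print(sonify(samples))
```

In a real sonification, each wavelength band (X-ray, optical, infrared) would get its own mapping like this, assigned to a different instrument, and the layers would then be mixed.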
“It is a way of taking the data and trying to mimic the various parts of the image into sounds that would express it, but also still be pleasing together so that you could listen to it as one mini composition, one mini symphony,” Arcand said.
Many of the renditions combine data from multiple telescopes, including NASA’s Chandra X-ray Observatory (which is headquartered at the CfA), the Hubble Space Telescope, and the Spitzer Space Telescope. The devices capture different types of light — X-ray or infrared, for instance — so the data in these sonifications produce different types of sound. These separate layers can be listened to together or as solos.
So far, the team has released two installments from the data sonification project. Here are a few examples.
Cassiopeia A
This supernova remnant, 11,000 light-years away, has musical signatures assigned to the four elements released in the explosion. The cursor moves outward from the center, and the colors and sounds represent debris and other high-energy data.
Pillars of Creation
Moving left to right, the sonification represents both optical and X-ray light; its alien sweeps of low-to-high whistles and pitches would be at home in any eerie sci-fi film. The pillars sit in one of the most iconic regions of the galaxy where stars are still forming.
Bullet Cluster
This image provided the first direct evidence of dark matter. The system, 3.4 billion light-years away, shows two merging galaxy clusters: the X-ray data from Chandra, represented in pink, traces the hot gas, which was dragged away from the dark matter, itself mapped through a process known as gravitational lensing, during the collision. The sonification produces an intense mix of low and high pitches.
The Crab Nebula
A region with a quickly spinning neutron star, formed when a larger star collapsed, is portrayed through brass, strings, and woodwinds in a rise and fall that turns suddenly loud and high-pitched.
Supernova 1987A
A circular time-lapse of one of the brightest supernovas in centuries is brought to audible life by a crystal singing bowl; the sequence comes from data collected by Chandra and Hubble between 1999 and 2013.