In our last article on sound, we mentioned that there’s a vast field of content to unpack when it comes to understanding sound, how it’s generated and how we perceive it. This time, we’ll shed a little more light (and colour) on sonic territory by delving into the world of sound visualisation. Be wary of wild spectrograms and cymatics, and prepare to have your eyes, ears and minds blown.
For those electronic music enthusiasts who have been with the genre since its infancy, this image might be familiar:
This is the face of renowned electronic music producer Richard D. James, more commonly known as Aphex Twin. It’s technically a self-portrait of the producer’s face, but it wasn’t painted on a canvas or designed in a graphics editor; the image is wholly contained within the audio of one of James’ tracks. When we isolate the last 10 seconds of the song (which is called “ΔMi−1 = −αΣn=1N Di[n][Σj∈C[i] Fji[n−1] + Fexti[n−1]]”, by the way, but most people just call it “Equation”) and render them through a spectrogram, we can see a relatively clear image of the producer’s grinning face. Aphex Twin managed to troll us visually, using science and sound.
He certainly wasn’t the first person to toy around with this technique. A spectrogram is the fundamental tool scientists use to visualise sound: an image that represents the spectrum of frequencies in a signal as it varies over time. Spectrograms are used to analyse frequency patterns in fields such as seismology, speech analysis and bioacoustics, the study of animal calls in the wild.
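To make the idea concrete, here’s a minimal, hand-rolled sketch of how a spectrogram is computed: slice the signal into short windows, take a Fourier transform of each, and stack the resulting spectra over time. This is an illustration of the general technique only, not the tooling Aphex Twin or any lab actually uses (real software adds windowing functions, fast transforms and log scaling).

```python
# Minimal spectrogram sketch: frequency-vs-time grid from overlapping windows.
import cmath
import math

def dft_magnitudes(frame):
    """Magnitude of each frequency bin of one frame (naive DFT)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]  # keep only the positive frequencies

def spectrogram(signal, frame_size=64, hop=32):
    """One spectrum per time window -> a list of rows, newest last."""
    return [dft_magnitudes(signal[i:i + frame_size])
            for i in range(0, len(signal) - frame_size + 1, hop)]

# Demo: a 1 kHz sine sampled at 8 kHz lights up a single frequency bin.
rate = 8000
tone = [math.sin(2 * math.pi * 1000 * t / rate) for t in range(1024)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin * rate / 64)  # bin index * (rate / frame_size) ≈ 1000 Hz
```

Draw each row as a column of coloured pixels and you have the familiar spectrogram image; draw the *right* frequencies at the right moments, as James did, and you have a face.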
Here’s a spectrogram you might have seen before:
This image is known as Bloop. It’s an immensely powerful underwater sound with an ultra-low frequency, detected by the U.S. National Oceanic and Atmospheric Administration (NOAA) in 1997. For a long time, the origin of Bloop was a complete mystery. Its audio profile suggested that it could have come from a living creature, but its source was much, much louder than the blue whale, the loudest animal recorded to date. Considering that the vast majority of the ocean is unmapped, we wishful thinkers imagined the sound to have originated from a mountainous, undiscovered underwater beast, something like Cthulhu from H. P. Lovecraft’s stories. Sadly (or, rather, thankfully), our dreams were put to rest when the NOAA conclusively attributed the sound to a large icequake.
Compared to Bloop, the image in Aphex Twin’s self-portrait is vivid and very detailed. That’s because the producer knew exactly what he was after, and he has a deep understanding of the frequencies his sounds give out. Every bit of sound in the 10-second mapping of the image is specifically tailored to produce the end result. It’s not the most serene piece of music (at least in our opinion), but it makes for one hell of a face. Bloop, meanwhile, came from a random natural occurrence, and the spectrogram we get out of it is a run-of-the-mill wave pattern.
That’s not to say that images sourced from natural sounds lack their own aesthetic allure, and spectrograms aren’t the only way to visualise sound. Here is the sound an Atlantic spotted dolphin makes:
The image, created by Mark Fischer, was produced with Fischer’s own modification of the traditional spectrogram. “Spectrograms are infinite; they have no beginning and no end,” Fischer says. “They work well for visualising musical scores or mechanical sounds, but there are very few sounds in nature that they can express in detail.” Fischer’s visualisation technique is the result of a software program he wrote himself, which applies the mathematical concept of wavelets to recordings of animal calls in the wild and processes the wavelets into colour-coded visuals. This “remix” of the traditional spectrogram reveals the variety of sonic patterns in a given animal’s call.
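Fischer hasn’t published his code, but the core idea — swapping the spectrogram’s fixed-size Fourier windows for wavelets that stretch and squeeze with scale — can be sketched in a few lines. Everything below (the Morlet wavelet choice, the scales, the demo tone) is our own illustrative assumption, not Fischer’s actual software:

```python
# Toy wavelet "scalogram": correlate the signal with stretched copies of a
# small oscillating wavelet. Short scales catch clicks, long scales catch moans.
import cmath
import math

def morlet(t, scale, w0=6.0):
    """Sample of a complex Morlet wavelet at time t, stretched by `scale`."""
    u = t / scale
    return cmath.exp(1j * w0 * u) * math.exp(-u * u / 2) / math.sqrt(scale)

def scalogram(signal, scales, step=8, half=64):
    """Correlate the signal with the wavelet at each scale.
    Returns one row of magnitudes per scale: a scale-vs-time grid."""
    grid = []
    for s in scales:
        row = []
        for centre in range(half, len(signal) - half, step):
            acc = sum(signal[centre + t] * morlet(t, s).conjugate()
                      for t in range(-half, half + 1))
            row.append(abs(acc))
        grid.append(row)
    return grid

# Demo: a pure tone at 0.06 cycles/sample resonates with the scale whose
# wavelet oscillates at roughly the same rate (w0 / (2*pi*s) ≈ 0.06 -> s ≈ 16).
sig = [math.sin(2 * math.pi * 0.06 * t) for t in range(512)]
scales = [4.0, 8.0, 16.0, 32.0]
grid = scalogram(sig, scales)
means = [sum(row) / len(row) for row in grid]
print(scales[means.index(max(means))])  # → 16.0
```

Colour-code the grid instead of printing it and you get something in the spirit of Fischer’s dolphin images: the same call rendered as patterns of scale rather than raw frequency.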
Spectrograms and their offspring are one way to visualise sound. Within the field of sound visualisation, as you move away from spectrograms, you’ll eventually arrive at cymatics. And they’re really, really cool:
What you’re seeing there is cymatics at work. In a typical cymatics setup, loose particles (such as the salt shown in the video above) are scattered across a surface that is made to vibrate. Depending on the frequency at which the surface vibrates, the particles group together and pull apart, settling along the lines where the surface barely moves and revealing the physical manifestation of the frequency itself. The higher the frequency, the more visually complex the pattern. In a way, cymatics is an even more tangible visual representation of sound than a spectrogram, since it operates through a physical medium.
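The frequency-to-complexity relationship has a classic textbook model: the Chladni plate. The formula below is the standard idealised approximation of a square plate’s standing-wave amplitude for mode numbers (n, m); salt collects where that amplitude is near zero. Real plates (and the video above) behave far more messily — this is just the toy model:

```python
# ASCII Chladni plate: '#' marks nodal cells, where the salt would gather.
# Idealised amplitude: cos(n*pi*x)*cos(m*pi*y) - cos(m*pi*x)*cos(n*pi*y).
import math

def chladni(n, m, size=31, threshold=0.1):
    """ASCII picture of the nodal lines for plate mode (n, m)."""
    rows = []
    for j in range(size):
        y = j / (size - 1)
        row = ""
        for i in range(size):
            x = i / (size - 1)
            amp = (math.cos(n * math.pi * x) * math.cos(m * math.pi * y)
                   - math.cos(m * math.pi * x) * math.cos(n * math.pi * y))
            row += "#" if abs(amp) < threshold else "."
        rows.append(row)
    return "\n".join(rows)

print(chladni(1, 2))  # a low mode: a few simple nodal lines
print(chladni(3, 5))  # a higher mode: a visibly busier pattern
```

Higher frequencies excite higher modes (larger n and m), which pack in more nodal lines — which is exactly why the salt patterns grow more intricate as the pitch rises.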
Here is a piece of artwork by Nigel Stanford based on cymatics:
Pretty cool, and we’ve only scratched the surface of what audiovisual art can accomplish. Can you think of anything to add to the field? While you mull it over, you can click here (https://www.auditoryneuroscience.com/quickguide/visualizing-speech) and see your speech rendered as a live spectrogram. Find out what your name looks like, or perhaps your favourite song. Maybe screech a few wild frequencies into the mic and you’ll see a grinning Aphex Twin trolling you over the graph. Make sure you troll that bastard right back!