The Guardian - UK
Environment
Alice Hutton

Listen to a toadfish’s grunt! AI helps decode a ‘symphony’ of ocean sounds

A diver places an underwater recorder in a coral reef in Moorea, French Polynesia, to capture the sounds of fishes and invertebrates, such as snapping shrimp. Photograph: Xavier Raick

On Goa’s coral reef in India, the marine scientists lowered their underwater microphones beneath the waves and recorded a complex cacophony of swirling currents, fish and plantlife.

But rather than spend months deciphering it using human ears, arguing over which click was a snapping shrimp and which snort a type of grunter fish, they plugged the sounds into an algorithm that correctly identified four species in a matter of minutes.

Spectrograms produced from recordings of: humpback whale song in 20m (A) and 40m (B) depth waters off Okinawa, Japan; different sounds from a gulf toadfish (C and D); a sooty grunter (E); spangled grunter (F); a crawling kina urchin (G) and a New Zealand paddle crab (H). Photograph: Frontiers in Ecology and Evolution

The discovery, published in The Journal of the Acoustical Society of America, was hailed last month as groundbreaking by the International Quiet Ocean Experiment (IQOE), a small but growing group of scientists around the globe who record the sounds of the sea.

“Machine-learning is a major breakthrough,” says Jesse Ausubel, co-founder of the IQOE, a collaboration of scientists from the UK, US, Canada, Australia, Germany, Norway, Iceland and South Africa founded in 2015 to carry out the world’s first sound survey of the ocean.

“In the old days, you could put a microphone in the water for a year. Then it would take me three years to listen to the tapes: it’s one thing to listen to Ed Sheeran or Mozart and spot the difference, but our ears are not attuned to the difference between waves breaking, humpback whales, ships or snapping shrimp.

“Play a computer a few hours of snapping shrimp and it can become an expert very quickly. Play it years, and it can digest it in minutes.”

In late April, the researchers gathered at a conference at the Woods Hole Oceanographic Institution on Cape Cod, Massachusetts, to discuss their findings as their decade-long collaboration draws to a close in 2025.

During nearly 10 years of research costing an estimated $50m (£40m), the scientists, working across more than a dozen international organisations, universities and even military bodies, have collected up to 4,000 series of recordings from the Atlantic, Pacific and Antarctica, as well as off the Australian and New Zealand coasts.

They use underwater microphones known as hydrophones, which make no additional noise, allowing for passive acoustic monitoring.

The sounds help researchers measure the marine impact of noise, from coral reefs to mangrove forests, orcas to plankton, oil and gas exploration, shipping, tourism, storms and even nuclear explosions. They have looked into whether ships may be interfering with humpback whale calls off Colombia and collaborated with the US navy to study seven US marine sanctuaries, including Hawaii.

The goal is to capture an acoustic baseline of different soundscapes against which change can be measured by future generations. The researchers also want to establish sound as an essential variable in ocean science, using it to monitor species distribution, to identify new ones, and to try to spot looming climate-related disasters, such as reef bleaching, years before we can see it.

“Up until the 1860s, when we started industrialising the ocean, the soundscape was relatively unchanged,” Ausubel says. “In the last 30 years or so, we have had an explosion in underwater sound: gigantic ships, gas and oil, seabed mining, cruise ships. We are changing the soundscape in the ocean a lot. It would be mad to assume that has had no impact on the wildlife within it.”

A snapping shrimp, one of the species identified by AI from its sound. Photograph: Oksana Maksymova/Alamy

But it was the paper from Goa, and the future possibilities of machine-learning, that was headline news at the conference. Dr Bishwajit Chakraborty, co-author and emeritus professor at Goa’s CSIR-National Institute of Oceanography, told the Guardian how the team used the programming language Python to correctly match archival sounds or data points of known species to new recordings.

Among the 21 species recorded, it correctly identified, with an 89.4% success rate, drum fish (Sciaenidae), grunter fish (Terapon theraps), snapping shrimp, and planktivores.
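The paper does not publish the team's code, but the approach it describes, matching new recordings against archival sounds of known species, can be sketched in a few lines of Python. Everything below is illustrative: the call frequencies, the pure-tone "calls" and the nearest-match classifier are stand-ins, not the Goa team's actual features or model.

```python
# Hedged sketch of acoustic matching: compare the frequency content of a
# new recording against archived reference sounds. All numbers are made up.
import numpy as np

SR = 1000  # sample rate in Hz, illustrative only

def make_call(freq_hz, seconds=1.0):
    """Synthetic stand-in for a species call: a pure tone at freq_hz."""
    t = np.arange(int(SR * seconds)) / SR
    return np.sin(2 * np.pi * freq_hz * t)

def feature(signal):
    """Normalised magnitude spectrum as a crude acoustic fingerprint."""
    spec = np.abs(np.fft.rfft(signal))
    return spec / np.linalg.norm(spec)

# Archive of known species calls (frequencies are hypothetical).
archive = {
    "snapping shrimp": make_call(220.0),
    "grunter fish": make_call(90.0),
    "drum fish": make_call(150.0),
}
reference = {name: feature(sig) for name, sig in archive.items()}

def classify(recording):
    """Label a recording with the closest archival match (cosine similarity)."""
    f = feature(recording)
    return max(reference, key=lambda name: float(f @ reference[name]))

# A noisy recording of a grunter-like call still matches its archive entry.
rng = np.random.default_rng(0)
new_recording = make_call(90.0) + 0.3 * rng.standard_normal(SR)
print(classify(new_recording))  # → grunter fish
```

A real pipeline would replace the pure tones with labelled field recordings and the spectrum-matching step with a trained classifier, but the shape of the task is the same: turn each sound into a feature vector, then find the best-scoring known species.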

Researchers hope the technology could evolve even further to help them identify as yet undiscovered species – for example, a “mystery buzz” that the Indian team could not name – by spotting patterns and matching up new sounds to noises made by known or similar species.

Chakraborty says machine-learning has shown “immense capability” in bio-acoustics since he started using it 20 years ago when it was “only used by individuals … who understood how they could navigate the complex mathematical stages of AI/ML algorithms”. He adds: “Needless to say, it has become rampant and can be used to solve a wide spectrum of problems, [including] real-time classification.”

Recording the unique soundscape of hot spring eruptions around Japan’s Suyao Seamount and Taiwan’s Guishan Island. The sulphide deposits formed by the vents are considered to be high-value mining sites. Photograph: Su Huai/Marine Ecoacoustics and Informatics Lab

Many of his IQOE colleagues consider this a modest assessment, saying the technology could revolutionise the speed of research by shaving years off data analysis, reducing human error, and allowing them to decode and compare historic recordings from more than 50 years ago.

Peter Tyack, an animal behaviourist at the Sea Mammal Research Unit at the University of St Andrews, Scotland, jokes that he got into bio-acoustics so he could “spend less time listening to humans”. He adds: “It’s not crazy that by 2100 we might have decoded dolphin language, [and] we might be able to understand what they’re saying. If you look to the future, sound is one of the ways that science can learn the sea’s mysteries.”

A key IQOE legacy is the launch in 2021 of the world’s first international underwater sound library (appropriately nicknamed Glubs), led by Miles Parsons, a British-born scientist at the Australian Institute of Marine Science.

As an open-access umbrella platform, it will catalogue millions of hours of recordings in an attempt to “democratise underwater sound”, while also, it is hoped, allowing citizen scientists on holiday scuba dives to upload recordings and help identify the estimated 20,000 noise-making species still to be discovered. (Another mystery species has been recorded off the Florida Keys, making a racket that sounds like a cross between a light drizzle and the squelch of a mop.)

Another breakthrough is the plummeting price of hydrophones: devices that cost millions of dollars 50 years ago can now be bought for about £4,000, record for a year, and be attached to a buoy.

Parsons adds: “Even GoPros pick up higher-quality sound than expected. Soon it could be an app on your phone.”

AI identified the strange ‘boop, grunt, swoop’ call of the bocon toadfish (Amphichthys cryptocentrus). Photograph: Seaphotoart/Alamy

At a round table, scientists swapped their big dreams for the next few years, including a museum of underwater sound, wet headphones with built-in hydrophones and AI that could tell snorkellers what fish they’re hearing in real time, similar to the birdsong app Merlin. Some scientists speculated that although it was “very cool to us”, it might scare some snorkellers to hear: “that’s an eel!” or “be careful, there’s a barracuda coming!”.

A key focus is attracting the next generation of biostatisticians to join what remains a small, international community, which started collaborating only eight years ago.

Many members brought up the extraordinary effect that Jacques Cousteau’s Oscar-winning documentary The Silent World had on attracting new talent. Released in 1956, it capitalised on relatively new technology to show underwater cinematography in Technicolor (as well as controversially blowing up part of a reef).

Coral reefs are acoustically rich environments, producing sounds of sea life during courtship, reproduction and feeding. Rain and boats also can be heard underwater. Photograph: Xavier Raick

Steve Simpson, a marine biologist at the University of Bristol, says: “We’ve probably got some of the last frontiers on the planet in terms of taking recordings of natural ecosystems. Far from being this ‘silent world’, we need to find ways to connect people to the sounds of the ocean.”

Ausubel concludes: “There’s a whole discovery aspect to this. It’s beautiful underwater. You can’t see very far, but you can hear this concerto, this symphony. It’s like learning a foreign language.”
