But that’s not as easy as it sounds—literally—because it’s hard to anchor a floating microphone without creating background noise. “Moorings are typically made from chain, so they clank a lot,” says Mark Baumgartner, whale ecologist and senior scientist at the Woods Hole Oceanographic Institution, who helped develop the technology. “And that’s not really good when you’re trying to hear animals that are many miles away making sounds.” So Baumgartner and his colleagues made the first 100 feet of mooring out of a rubbery “stretch hose.” When the buoy bobs on waves and tugs on the mooring, that stretchy bit coming off the instrument stays silent, allowing the hydrophone to listen for whales undisturbed.
Transmitting the data is another hurdle: Audio files take up a lot of space, and the connection that sends them from the buoy to a satellite to Baumgartner’s lab is maddeningly slow. Like, worse than 1X, the primitive cell phone technology, and way worse than 3G or LTE. “You have to squeeze them through this tiny, tiny, slow, really expensive data pipe to get home,” says Baumgartner. “And so one way around that problem is to not send the audio home, but to send representations of the audio home.”
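To get a feel for why raw audio can’t go through that pipe, here’s a rough back-of-the-envelope calculation in Python. Every number in it is an illustrative assumption, not a figure from the Whale Safe system: the sample rate, link speed, and summary size are made up to show the scale of the problem.

```python
# Back-of-the-envelope: raw audio vs. a compact "representation" over a
# slow satellite link. All numbers are illustrative assumptions, not
# parameters of the actual Whale Safe buoys.

SAMPLE_RATE_HZ = 2000        # assumed: low-rate audio for low-frequency whale calls
BYTES_PER_SAMPLE = 2         # assumed: 16-bit samples
LINK_BYTES_PER_SEC = 300     # assumed: a satellite link of a few kilobits per second

def transmit_seconds(payload_bytes: int) -> float:
    """Time to push a payload through the link."""
    return payload_bytes / LINK_BYTES_PER_SEC

# One hour of raw audio:
raw_bytes = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * 3600   # 14,400,000 bytes

# A compact summary: say 4 bytes (time + frequency) per detected note,
# with a few hundred notes per hour (again, assumed):
summary_bytes = 4 * 500                                 # 2,000 bytes

print(f"raw audio: {transmit_seconds(raw_bytes) / 3600:.1f} hours to send")
print(f"summary:   {transmit_seconds(summary_bytes):.0f} seconds to send")
```

Under these assumptions, an hour of raw audio would take over 13 hours to transmit, while the summary goes through in seconds, which is the whole argument for sending representations instead of sound.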
Think about sheet music: The notes and other symbols are a distillation of the extremely complex sounds of the orchestra, but musicians can still read and play them. “It faithfully captures sound, if you know what you’re looking at, but it doesn’t actually contain the sound,” Baumgartner says. “This instrument does exactly that—it almost deconstructs the sounds into musical notes.”
They call these representations “pitch tracks”: records of how the frequencies the hydrophone detects change over time, a sort of sheet music for whale song. A small computer in the instrument holds a database of whale calls, so it can take an educated guess at which species it might be hearing. But the instrument doesn’t make the final interpretation. “We have an analyst who, like the musician, can look at those pitch tracks and interpret what sounds are there,” Baumgartner says. And the analyst is damn good at it, nailing nearly 100 percent of the species identifications during real-world tests of the instrument.
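The general idea behind a pitch track can be sketched in a few lines of Python: slice the audio into short windows, find the dominant frequency in each, and keep only those (time, frequency) points. This is a toy illustration of the concept, not the detection algorithm running on the Whale Safe buoys.

```python
# A toy pitch-track extractor: reduce audio to (time, frequency) points.
# Illustrative sketch only -- not the instrument's actual algorithm.
import numpy as np

def pitch_track(audio: np.ndarray, sample_rate: int, window: int = 1024):
    """Return a (time_sec, dominant_freq_hz) point for each window of samples."""
    points = []
    for start in range(0, len(audio) - window + 1, window):
        frame = audio[start:start + window]
        spectrum = np.abs(np.fft.rfft(frame))              # magnitude spectrum
        freqs = np.fft.rfftfreq(window, d=1.0 / sample_rate)
        points.append((start / sample_rate, freqs[np.argmax(spectrum)]))
    return points

# A synthetic 100 Hz "call": the extracted track should sit near 100 Hz.
sr = 2000
t = np.arange(sr) / sr                  # one second of samples
tone = np.sin(2 * np.pi * 100 * t)
track = pitch_track(tone, sr)
print(track[0])                         # first (time, frequency) point
```

A sequence of such points is tiny compared with the audio it came from, yet a trained eye can still read the shape of a call from it, much as a musician reads a melody from a score.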
But why not just fully automate the system and let the instrument do all the identifying? Because it’ll tally a lot of false positives by counting other noises as whale calls, Baumgartner says, and that’s unacceptable given what’s at stake: voluntary buy-in from the shipping industry. If you’ve got a lot of false alarms, and ships have to keep slowing for whales that aren’t there, you’re not doing the animals or the boat captains any service—crying whale instead of crying wolf.
“When the stakes are high, you want to be really careful,” Baumgartner says. “Like the systems, for instance, that I imagine the Air Force has for detecting incoming nuclear bombs. You probably don’t want to have a fully automated system, because if they did it wrong, there are big consequences. And so if you’re using a system to regulate interactions between an industry and an endangered species, automation is great, but I think accuracy might be more important than expediency or cost.”
So in addition to working with a dedicated whale listener, the Whale Safe team also relies on trained observers on whale watch and tourism boats off the coast of Southern California, who detect cetaceans the old-fashioned way and log them in a mobile app. The Whale Safe oceanographic modeling uses sea temperatures and other data to predict where whales’ favorite food, tiny crustaceans known as krill, are likely to show up. This is another data point that helps them tell cargo ships which spots whales may frequent. “So the acoustic data, the sightings, and the model data are integrated into the Whale Safe platform and then communicated out to the shipping industry and to government to help drive better decisionmaking to try to reduce the risk of ship strikes,” says Visalli, of the Benioff Ocean Initiative.