I have nowhere near the technical capability to have anything like a clear plan, and your response is basically what I expected. I was just curious. It seems like it could be another cheap "Who knows? Let's see what happens" thing to try, with little to lose if it doesn't lead anywhere. Still, can we distinguish individuals in unlabeled recordings? Can we learn about meaning and grammar (or its equivalent) based in part on differences between languages and dialects?
At root my thought process amounted to: we have a technology that learns complex structures, including languages, from data without the benefit of the structural predispositions of human brains. If we could get a good enough corpus, it could also learn things other than human languages and find approximate mappings to human languages. I assumed we wouldn't have such data in this case. That's as far as I got before I posted.
Currently we basically don't have any datasets labelled with which orca says what. When I listen to recordings, I cannot distinguish voices, though it's possible that people who have listened a lot more can. I think purely unsupervised voice clustering would probably not work very accurately. I'd guess it's possible to get data on who said what by using an array of hydrophones to infer the location of each sound, but we'd need very accurate position inference, because different orcas are often only 1–10m apart. For that we might also need decent estimates of how water temperature (and thus sound speed) varies with depth, and as far as I know there have not yet been attempts to get high precision through this method. (It's definitely harder in water than in air.)
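The hydrophone-array idea can be sketched as time-difference-of-arrival (TDOA) multilateration: each hydrophone hears the same call at a slightly different time, and those offsets constrain where the source must be. Everything below is a toy assumption, not a real deployment: a 2D layout, four hydrophones at made-up positions, a constant sound speed of 1500 m/s (in reality it varies with temperature and depth, which is exactly the complication mentioned above), and a brute-force grid search instead of a proper solver.

```python
import numpy as np

SOUND_SPEED = 1500.0  # m/s, rough speed of sound in seawater (assumed constant here)

# Four hydrophones at assumed positions in metres, corners of a 50m square.
hydrophones = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])

def arrival_times(source):
    """Travel time from a source position to each hydrophone."""
    return np.linalg.norm(hydrophones - source, axis=1) / SOUND_SPEED

def localize(tdoas, grid_step=0.5):
    """Brute-force grid search for the position whose predicted TDOAs
    (relative to hydrophone 0) best match the measured ones."""
    coords = np.arange(-10.0, 60.0, grid_step)
    best, best_err = None, np.inf
    for x in coords:
        for y in coords:
            t = arrival_times(np.array([x, y]))
            err = np.sum((t[1:] - t[0] - tdoas) ** 2)
            if err < best_err:
                best, best_err = np.array([x, y]), err
    return best

# Simulate one call: in practice only the TDOAs are observable.
true_source = np.array([12.0, 31.0])
t = arrival_times(true_source)
tdoas = t[1:] - t[0]
estimate = localize(tdoas)
print(estimate)  # lands close to (12, 31)
```

Note that with animals 1–10m apart, the grid resolution and (more importantly) the sound-speed model would have to be far better than this sketch; errors in assumed sound speed translate directly into position error.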
Yeah, I initially had rough thoughts in this direction too, but I think the create-and-teach-a-language approach is probably a lot faster.
I think the Earth Species Project is trying to use AI to decode animal communication, though they don't focus on orcas in particular; they work on many species, including e.g. beluga whales. I haven't looked into it much, but it seems possible I could do something like this in a smarter and more promising way, though it would probably still take a long time.