For years, I think we’ve been in a state where people could have taken an off-the-shelf method and done something interesting with it on a huge music dataset.
Absolutely. I got decent enough results just tinkering with GPT-2, and OpenAI’s Jukebox could have been done at a smaller scale years ago. OA could presumably do a lot better right now if they had a few million to spare: Jukebox has only ~7b parameters while GPT-3 has 175b, and Jukebox is already pretty close to human-level, so just another 10x seems like it’d make it an extremely useful tool commercially.
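To give a concrete sense of what that kind of GPT-2 tinkering might look like, here is a minimal sketch of fine-tuning the pretrained model on a plain-text corpus of tunes (ABC notation here, via the Hugging Face `transformers` and `datasets` libraries); the corpus file name and hyperparameters are illustrative assumptions, not the actual setup being described above.

```python
# Hypothetical minimal sketch: fine-tune pretrained GPT-2 on a plain-text
# corpus of ABC-notation tunes, then sample a new tune from it.
# "abc_tunes.txt" and all hyperparameters are illustrative, not the real setup.
from transformers import (
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Assumed corpus layout: one tune (or tune fragment) per line of plain text.
dataset = load_dataset("text", data_files={"train": "abc_tunes.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-abc",
        num_train_epochs=3,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard left-to-right language-modeling labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Sample a new tune, prompting with the start of an ABC header.
prompt = tokenizer("X: 1\nT:", return_tensors="pt")
out = model.generate(
    **prompt,
    max_length=256,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(out[0]))
```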