Sure, you can model music composition as an RL task. The AI composes a song, predicts how much a human will like it, and then tries to produce songs that are more and more likely to be liked.
Another interesting thing AlphaGo did was start by predicting what moves a human would make, and then switch to reinforcement learning. So for a music AI, you would start with one that can predict the next note in a song. Then you switch to RL and adjust its predictions so that it is more likely to produce songs humans like, and less likely to produce ones we don’t like.
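A minimal sketch of what that two-phase recipe might look like, assuming a PyTorch next-note LSTM and a placeholder reward function (all names here, NoteLSTM, reward_fn and so on, are hypothetical): first train with ordinary next-note cross-entropy on human songs, then fine-tune with a simple REINFORCE update against the reward.

```python
# Sketch only: a next-note model pretrained on human songs, then nudged by RL.
import torch
import torch.nn as nn

VOCAB = 128  # e.g. MIDI pitches

class NoteLSTM(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, VOCAB)

    def forward(self, notes):                    # notes: (batch, time) of note ids
        h, _ = self.lstm(self.embed(notes))
        return self.head(h)                      # logits over the next note

model = NoteLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Phase 1: supervised "predict the next note" on existing songs.
def supervised_step(batch):                      # batch: LongTensor (batch, time)
    logits = model(batch[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2: REINFORCE-style fine-tuning against a reward signal
# (a human rating, or the learned preference model from the later comment).
def rl_step(seed_note, reward_fn, length=64):
    notes, log_probs = [seed_note], []
    for _ in range(length):
        logits = model(torch.tensor([notes]))[0, -1]
        dist = torch.distributions.Categorical(logits=logits)
        note = dist.sample()
        log_probs.append(dist.log_prob(note))
        notes.append(note.item())
    reward = reward_fn(notes)                    # scalar: how much "humans like it"
    loss = -reward * torch.stack(log_probs).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```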
However, automated composition is something that a lot of people have experimented with before, and so far nothing works really well.
One difference is that you can’t get feedback as fast when dealing with human judgement rather than win/lose in a game (where AlphaGo can play millions of games against itself).
Yes, it would require a lot of human input.
However, the AI could learn to predict what humans like and then use that prediction as its judge, trying to produce songs it predicts humans will like. When it then tests those songs on actual humans, it can see whether its predictions were right and improve them.
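A hedged sketch of that learned-judge loop, again with hypothetical names (PreferenceNet, and whatever featurization turns a song into a fixed-size vector): a small network predicts human ratings and serves as a cheap reward during composition, and whenever real ratings come in, it is checked against them and updated.

```python
# Sketch only: a learned stand-in for the human listener, refreshed by real feedback.
import torch
import torch.nn as nn

class PreferenceNet(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, song_features):            # (batch, dim) -> predicted rating
        return self.net(song_features).squeeze(-1)

judge = PreferenceNet()
opt = torch.optim.Adam(judge.parameters(), lr=1e-3)

def predicted_reward(song_features):
    """Cheap proxy for a human listener, used as the reward during RL training."""
    with torch.no_grad():
        return judge(song_features.unsqueeze(0)).item()

def update_judge(song_features, human_ratings):
    """When real ratings arrive, measure how wrong the judge was and correct it."""
    pred = judge(song_features)                  # (batch,)
    loss = nn.functional.mse_loss(pred, human_ratings)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()                           # large loss = predictions were off
```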
This is also a domain with vast amounts of unsupervised data available. We’ve created millions of songs, which it can learn from. Out of the space of all possible sounds, we’ve decided that this tiny subset is pleasing to listen to. There’s a lot of information in that.
You can get fast feedback by reusing existing databases if your RL agent can do off-policy learning. (You can consider this what the supervised pre-training phase is ‘really’ doing.) Your agent doesn’t have to take an action before it can learn from it. Consider experience replay buffers: you could imagine a song-writing RL agent with a huge experience replay buffer made up of nothing but fragments of songs grabbed online (say, from the Touhou megatorrent with its 50k tracks).
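A small sketch of that buffer-seeding idea, with a hypothetical load_song_fragments loader standing in for whatever slices note sequences out of an existing collection: the buffer mechanics are standard, and the point is just that it can be filled from data no agent ever generated.

```python
# Sketch only: an experience replay buffer pre-filled with existing song fragments.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=1_000_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, fragment, reward):
        # fragment: a short note sequence; reward: how well it was received
        self.buffer.append((fragment, reward))

    def sample(self, batch_size):
        return random.sample(list(self.buffer), batch_size)

def load_song_fragments(path):
    """Hypothetical loader: in practice this would slice note sequences out of
    MIDI or audio files from an existing collection at `path`."""
    return [[60, 62, 64, 65], [67, 65, 64, 62]]   # dummy fragments for the sketch

buffer = ReplayBuffer()
# Human-made fragments go in as "experiences" the agent never had to generate,
# treated here as well-received (reward=1.0) simply because humans kept them.
for fragment in load_song_fragments("music_corpus/"):
    buffer.add(fragment, reward=1.0)

# An off-policy learner can now train on sampled batches without waiting
# for slow human feedback on freshly generated songs.
batch = buffer.sample(2)
```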
Emily Howell?
I was thinking more like these examples:
https://ericye16.com/music-rnn/
http://www.hexahedria.com/2015/08/03/composing-music-with-recurrent-neural-networks/
https://www.youtube.com/watch?v=0VTI1BBLydE
https://highnoongmt.wordpress.com/2015/05/22/lisls-stis-recurrent-neural-networks-for-folk-music-generation/
I think what Vaniver means is: it seems that Emily Howell works pretty damn well, contrary to your claim that nothing does (by means that are, so far as I understand, very different from any sort of neural network).
I know the conversation here has run its course, but I just wanted to add: whether Emily Howell counts as an automated system that “works really well” is probably up for debate. It seems to require quite a bit of input from Cope himself in order to come up with sensible, interesting music. For example, one of the most popular pieces from Emily Howell is this fugue: https://www.youtube.com/watch?v=jLR-_c_uCwI. We really don’t know how much influence Cope had in creating this piece of music, because the process of composition was not transparent at all.