Western art music (i.e., the sort commonly described as “classical”).
Well, first of all, note that the music the system generates is entirely derived from the model’s understanding of a dataset/repertoire of “Western art music”. (Do you have any specific preferences about style or period, too? That would be useful to know!)
For a start, note that you should not expect the system to capture any structure beyond the phrase level: since it’s trained on snippets only 8 bars long, the average history it sees in training is just four bars. Within that limited scope, however, the model achieves quite impressive results.
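If it helps to see the arithmetic behind that “four bars” figure, here is a minimal sketch (not the actual training code; only the 8-bar snippet length is taken from the above): a note falling at a uniformly random position inside an 8-bar window has, in expectation, half the window behind it.

```python
# Minimal sketch: why 8-bar training snippets give ~4 bars of history on average.
# (Assumes notes are spread roughly uniformly across the snippet.)
import random

SNIPPET_BARS = 8          # training-snippet length, per the text
N_SAMPLES = 100_000

total_history = 0.0
for _ in range(N_SAMPLES):
    position = random.uniform(0, SNIPPET_BARS)  # where a note falls in the snippet
    total_history += position                   # bars of context preceding that note

print(total_history / N_SAMPLES)  # ~4.0 bars of history, on average
```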
Next, one should know that every note in the pieces is generated by the exact same rules: the original model has no notion of “bass” or “lead”, nor does it generate ‘chords’ and then voice them in a later step. It’s entirely contrapuntal, albeit in the “keyboard” sort of counterpoint which does not organize the music as a fixed ensemble of ‘voices’ or ‘lines’.
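Since the post doesn’t say how the music is actually encoded, here is a purely hypothetical sketch of what such a “no voices, no chord-then-voicing” representation could look like: one flat stream of note events, with any bass/melody roles left entirely implicit.

```python
# Hypothetical illustration (not the system's real encoding): the model emits a
# single flat stream of note events, with no voice, chord, "bass", or "lead"
# labels anywhere in the representation.
from dataclasses import dataclass

@dataclass
class NoteEvent:
    onset_beats: float    # when the note starts, in beats from the start
    pitch: int            # MIDI pitch number
    duration_beats: float

# To a chord-based system this would be "a C-major chord, voiced"; here it is
# simply four independent events in the same stream.
stream = [
    NoteEvent(0.0, 48, 2.0),   # C3
    NoteEvent(0.0, 64, 1.0),   # E4
    NoteEvent(0.5, 67, 0.5),   # G4
    NoteEvent(1.0, 72, 1.0),   # C5
]

# Nothing above marks any event as "bass" or "melody": any such roles are
# implicit in the pitches and timings the model chooses to emit.
```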
Somewhat more interestingly, the network architecture implies nothing whatsoever about notions such as “diatonicism” or “tonality”: every hint of these things you hear in the music is a result of what the model has learned. Moreover, there’s basically no pre-existing notion that pieces should “stay within the same key” except when they’re “modulating” towards some other area: what the system does is freely driven by what the music itself has been implying about, e.g., “key” and “scale degrees”, as best judged by the model. If the previous key no longer fits at a given point, that can cue a ‘modulation’. In a way, this means the model is actually generating “freely atonal” music along “tonal” lines!
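To make that point concrete, here is a rough sketch (assumed, not the system’s real interface) of what “no built-in tonality” means in practice: the network simply outputs a probability over every pitch at each step, with no key variable and no scale mask, so any diatonicism has to come from the learned distribution itself.

```python
# Rough sketch (assumed interface): sampling the next pitch from unconstrained
# logits over all 128 MIDI pitches. There is no "key" variable and no mask
# restricting the output to a scale; if samples stay diatonic, that bias was
# learned from the data, not built in.
import numpy as np

def sample_next_pitch(logits: np.ndarray, rng: np.random.Generator) -> int:
    """Sample a MIDI pitch from unconstrained logits over all 128 pitches."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(0)
# Stand-in logits: in the real system these would come from the network given
# the musical history; a "modulation" is just this distribution drifting toward
# the pitches of a new key as the context changes.
logits = rng.normal(size=128)
print(sample_next_pitch(logits, rng))
```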
In my opinion, the most impressive parts are the transitions from one “musical idea” to the next, which would surely be described as “admirably flowing” and “lyrical” if similarly clean transitions were found in a real piece of music. The same goes for the free “modulations” and changes of “key”: the underlying ‘force’ that made the system modulate can often be heard in the music, and this also makes for a sense of lyricism combined with remote harmonic relationships that’s quite reminiscent of “harmonically innovative” music from, say, the late 19th century. (Note that, by and large, this late-romantic music was not in the dataset! It’s simply an ‘emergent’ feature of what the model is doing.)
Something I had not heard before is this model’s eclecticism in style (“classical” vs. “romantic”, with a pinch of “impressionist” added at the last minute) and in texture. Even more interesting than the clean transitions between these elements is the amount of “creative” generalization that arises from all of these styles being encompassed in the same model. So we sometimes hear more-or-less ‘classical’ elements thrown into a very ‘romantic’ spot, or vice versa, or music that sounds intermediate between the two.
Finally, the very fact that this model is improvising classical music is worth noting per se. We know that improvisation has historically been a major part of the art-music tradition, and the output of such a system can at least give us some hint of what sort of ‘improvisation’ is even musically feasible within that tradition, even after the practice itself has disappeared.