This plus “also it’s a lot more work to set up” are my own main cruxes. (If either were false, I’d consider it much more strongly.)
That’s right: if it were free to include then sure, even if only 5% of attendees can read it. But it’s actually quite a lot of work.
How hard would it be to project them? There was a screen, and it should be possible to project at least two lines with music large enough for people to read. Is the problem that we don’t have sheet music that’s digitized in a way to make this feasible for all of the songs?
We do not currently have sheet music for most songs. It’s also extra labor to arrange the slides (though this isn’t that big a part of the problem).
What exactly does the process of generating sheet music involve? Like, how does sheet music happen, in general?
It depends a lot on the musician and their skillset.
For me: I don’t really speak fluent sheet music. When I write music, I do it entirely by ear. I record it. I have musicians listen to the recording and imitate it by ear. Later on, if I want sheet music, I hire someone to listen to the recording and transcribe it into sheet music after the fact, a process which costs something like $200 per song (or is free if I do it myself or get a volunteer, but it’s a couple of hours per song and there are around 30 songs, so this is not a quick/easy volunteer process).
Some musicians “think primarily in sheet music”, and then they would do it with sheet music from the get-go as part of the creation process. Some solstice songs already have sheet music for this reason.
I’ve paid money to have ~3-5 solstice songs transcribed into sheet music so far.
Can the process not be automated? Like, sheet music specifies notes, right? And notes are frequencies. And frequencies can be determined by examining a recording by means of appropriate hardware/software (very easily, in the case of digital recordings, I should think). Right? So, is there not some software or something that can do this?
One thing that makes this hard to automate is human imprecision in generating a recording, especially with rhythm: notes encode frequencies but also timings and durations, and humans performing a song will never get those things exactly precise (nor should they—good performance tends to involve being a little free with rhythms in ways that shouldn’t be directly reflected in the sheet music), so any automatic transcriber will produce silly-looking, slightly-off rhythms that still need judgment to adjust.
This seems solvable by using multiple recordings and averaging, yes?
Also, if the transcription to sheet-music form is accurate w.r.t. the recording, and the recording is acceptable w.r.t. the intended notes, then the transcription ought to be close enough to the intended notes. Or am I misunderstanding?
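As a toy illustration of the averaging suggestion (my own sketch, not an existing tool): if you had per-note onset times detected from several takes, you could average them note by note before quantizing. Note that this only helps to the extent that the timing wobble is random across takes rather than consistent.

```python
# Toy sketch of the "average multiple takes" idea (an illustration, not a real
# tool). It only helps with timing wobble that is random across takes; a
# consistent lean or slowdown will survive the averaging.

TEMPO_BPM = 100
GRID = (60.0 / TEMPO_BPM) / 8   # thirty-second-note grid

# Hypothetical detected onsets (seconds) for the same four notes in three takes.
takes = [
    [0.00, 0.64, 1.23, 1.78],
    [0.03, 0.57, 1.16, 1.84],
    [0.00, 0.61, 1.20, 1.81],
]

# Average note-by-note across takes, then quantize the averaged onsets.
averaged = [sum(onsets) / len(onsets) for onsets in zip(*takes)]
quantized = [round(t / GRID) * GRID for t in averaged]
print([round(t, 3) for t in quantized])   # -> [0.0, 0.6, 1.2, 1.8]
# whereas take 1's 0.64 s onset alone would snap to 0.675 s, a thirty-second late
```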
re point 1 - maybe? unsure
[edit: one issue is that some irregularities will in fact be correlated across takes and STILL shouldn’t be written down—like, sometimes a song will slow down gradually over the course of a couple measures, and the way to deal with that is to write the notes as though no slowdown is happening and then write “rit.” (means “slow down”) over the staff, NOT to write gradually longer notes; this might be tunable post facto but I think that itself would take human (or really good AI) judgment that’s not necessarily much easier than just transcribing it manually to start]
re point 2 - the thing is you’d get a really irregular-looking, hard-to-read thing that nobody could sightread. (actually this is already somewhat true for a lot of folk-style songs that sound intuitive but look really confusing when written down)
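To make the “irregular-looking output” point concrete, here is a toy sketch (an illustration only, not the output of any real transcriber): snapping a mildly expressive performance of four plain quarter notes to a thirty-second-note grid lands some notes a thirty-second early or late, which notation software then renders as dotted or tied rhythms instead of straight quarters.

```python
# Toy illustration (my own sketch, not the output of any real transcription
# tool): snap a slightly "human" performance of four plain quarter notes to a
# fine rhythmic grid and see how irregular the notated rhythm becomes.

TEMPO_BPM = 100
BEAT = 60.0 / TEMPO_BPM   # 0.6 s per quarter note
GRID = BEAT / 8           # thirty-second-note grid, as a transcriber might use

# Intended rhythm: a note on each beat.
intended = [0.0, 0.6, 1.2, 1.8]
# What a singer actually does: leans late into beat 2, rushes beat 3, drags beat 4.
played = [0.00, 0.68, 1.15, 1.86]

def snap(t, grid=GRID):
    """Round a time in seconds to the nearest grid point."""
    return round(t / grid) * grid

for want, got in zip(intended, played):
    q = snap(got)
    off = round((q - want) / GRID)
    print(f"beat at {want:.1f}s notated at {q:.3f}s ({off:+d} thirty-second notes off)")
```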
You’d think, but I wasn’t able to find such a thing despite looking pretty hard a few years ago; there might be a more recent AI approach to this, though. A useful search term might be “audio to midi conversion”. (Stem separation, for which Spleeter works well, might be a necessary preprocessing step.)
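For anyone who wants to experiment, the rough shape of such a pipeline might look like the sketch below. This is a hedged illustration rather than a recommendation: it assumes the librosa and pretty_midi Python packages (my choice, not mentioned above) for pitch tracking and MIDI output, uses a placeholder filename, and expects a vocal stem already isolated (e.g. with Spleeter). All of the rhythm-quantization judgment discussed above is exactly what it leaves out.

```python
# Rough sketch of an audio-to-MIDI step (illustrative only; assumes the
# librosa and pretty_midi Python packages and a vocal stem already isolated,
# e.g. with Spleeter). The filename is a placeholder. Everything about
# rhythm quantization and readable notation is deliberately left out.
import librosa
import numpy as np
import pretty_midi

# 1. Load the (ideally stem-separated) vocal recording.
audio, sr = librosa.load("vocals.wav", sr=None, mono=True)

# 2. Track the fundamental frequency frame by frame with the pYIN pitch tracker.
f0, voiced_flag, _ = librosa.pyin(
    audio, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
times = librosa.times_like(f0, sr=sr)

# 3. Collapse runs of voiced frames with the same rounded pitch into notes.
#    (hz_to_midi is just the formula 69 + 12*log2(f/440).)
midi = pretty_midi.PrettyMIDI()
voice = pretty_midi.Instrument(program=0)  # rendered as piano, just for playback
current_pitch, note_start = None, None
for t, hz, voiced in zip(times, f0, voiced_flag):
    pitch = int(round(librosa.hz_to_midi(hz))) if voiced and not np.isnan(hz) else None
    if pitch != current_pitch:
        if current_pitch is not None:
            voice.notes.append(pretty_midi.Note(velocity=90, pitch=current_pitch,
                                                start=note_start, end=t))
        current_pitch, note_start = pitch, t
if current_pitch is not None:
    voice.notes.append(pretty_midi.Note(velocity=90, pitch=current_pitch,
                                        start=note_start, end=float(times[-1])))

midi.instruments.append(voice)
midi.write("vocals_rough.mid")  # import into notation software for hand cleanup
```

Even if this works, what comes out is raw MIDI; getting from there to sheet music a room full of people could actually read is where the judgment calls described above come back in.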
As someone who likes transcribing songs,
1) I endorse the above
2) if you ask me to transcribe a song I will often say yes (if it’s not very frequent; it costs time but not that much cognitive work for me, so I experience reasonable amounts of this as fun)