My music theory is rusty and anyway underdeveloped. But I don’t think individual notes can be disturbingly off key. It is the relationship between notes that takes them out of key. A single note of any frequency will produce harmonics with anything in the environment that is capable of responding, and thus create its own meager, on key accompaniment.
I think MIDI keeps you from even approaching the kind of terrible close but not quite right tones you want to reproduce.
Changing one individual note in a monophonic tune absolutely can be horribly off key. Melody is harmony, and harmony is counterpoint; even with a single voice humming, if the tune is “classical” enough your brain understands intuitively where the chord changes are and what the bass line should be.
You don’t need microtonal pitches to violently defy people’s expectations.
(EDIT: Though you almost certainly do need microtonal pitches to precisely mimic the effects described in the text. But I think you certainly could do something horrible without them.)
I don’t think you even need to venture into the world of quarter tones in order to create horrible humming. For an idea of a song that twists your expectations of keys, time signatures, and melodic progression, and breaks them in specific ways to ramp up tension, check “Epiphany” from Sweeney Todd.
I forget that when I listen to it, I already have the background of the story and its buildup, so I start with different expectations. Perhaps not the best example.
There’s a continuous spectrum of pitch. The character is kind of showing off, like he always kind of is.
He’s probably hitting notes that are multiples of irrational numbers when described in Hertz.
Retracted because it seemed the best way to acknowledge the correction: the vast majority of common musical notes are multiples of irrational numbers when described in Hertz.
We think of intervals between tones as being “the same” when there is a constant ratio between them. For instance, if two notes are an octave apart, the frequency of one is twice the other.
Thus, if we want to divide the octave into twelve semitones (which we do have twelve of: C, C#, D, D#, E, F, F#, G, G#, A, A#, B) and we want all of these twelve semitones to be the same intervals, then we want each interval to multiply the frequency by 2^(1/12).
Every part of that makes sense except for the lack of E# and B#, and why x2 is called an octave. Thanks for the info, and for reminding me why musical theory is one of three fields I have ever given up on learning.
The reason we avoid E# and B# is to get nice-sounding chords by only using the white keys. This way, the C-E chord has a ratio of 2^(4/12) which is approximately 5⁄4; the C-F chord has a ratio of 2^(5/12) which is approximately 4⁄3; and the C-G chord has a ratio of 2^(7/12) which is approximately 3⁄2.
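A quick numerical check of those approximations (Python, just arithmetic; the cent values are computed, not quoted from anywhere):

```python
import math

# How close do the equal-tempered intervals come to the simple ratios?
just = [("major third", 4, 5/4),
        ("perfect fourth", 5, 4/3),
        ("perfect fifth", 7, 3/2)]
for name, semitones, ratio in just:
    tempered = 2 ** (semitones / 12)
    # an octave is 1200 cents, so the error in cents is:
    cents_off = 1200 * math.log2(tempered / ratio)
    print(f"{name}: 2^({semitones}/12) = {tempered:.4f} vs {ratio:.4f} ({cents_off:+.1f} cents)")
```

The fifth and fourth come out within about 2 cents of pure; the major third is off by nearly 14 cents, which is why thirds are the sore spot in the tuning-history discussion below.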
In fact, before we understood twelfth roots, people used to tune pianos so that the ratios above were exactly 5⁄4, 4⁄3, and 3⁄2. This made different scales sound different. For instance, the C major triad might have notes in the ratios 4:5:6, while a D major triad might have different ratios, close to the above but slightly off.
There’s also the question of whether the difference between these makes a difference in the sound. There are two answers to that. On the one hand, it’s a standard textbook exercise that the difference between the pitches of a note in two different tuning systems is never large enough for the human ear to hear. So, most of the time, the tuning systems are impossible to distinguish.
On the other hand, there are certain cases in which the human ear can detect very, very small differences when a chord is played. To give a simple (though unmusical) example, suppose we played a chord of a 200 Hz note and a 201 Hz note. The human ear, to a first approximation, will hear a single note of approximately 200 Hz. However, the difference between the two notes has a period of 1 second, so what the human ear actually hears is a 200 Hz note whose (EDIT) amplitude wobbles once per second. This is very, very obvious; it’s a first sign of your piano being out of tune, and in different tuning systems it happens to different chords.
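The beating effect is easy to verify numerically. This sketch samples the sum of a 200 Hz and a 201 Hz sine and measures the amplitude envelope near two points in time:

```python
import math

def combined(t):
    # sum of a 200 Hz and a 201 Hz sine wave
    return math.sin(2*math.pi*200*t) + math.sin(2*math.pi*201*t)

# The identity sin(a)+sin(b) = 2*sin((a+b)/2)*cos((a-b)/2) says this equals
# a ~200.5 Hz tone with amplitude envelope 2*cos(pi*t): full volume at t = 0,
# silent at t = 0.5 s, full volume again at t = 1 s.
def peak_near(t0, window=0.01, steps=2000):
    # maximum amplitude in a 10 ms window around t0 (covers ~2 carrier cycles)
    return max(abs(combined(t0 + window*(i/steps - 0.5))) for i in range(steps))

print(peak_near(0.0))   # close to 2 (loud)
print(peak_near(0.5))   # close to 0 (the once-per-second dip)
```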
The reason we avoid E# and B# is to get nice-sounding chords by only using the white keys.
and only 12 notes per octave. With more notes per octave you can distinguish between F# and Gb without losing much accuracy in the most common keys.
In fact, before we understood twelfth roots, people used to tune pianos so that the ratios above were exactly 5⁄4, 4⁄3, and 3⁄2. This made different scales sound different. For instance, the C major triad might have notes in the ratios 4:5:6, while a D major triad might have different ratios, close to the above but slightly off.
Nitpick: I’m no expert in historical tunings, but AFAIK medieval music used pure fifths, where near-pure major thirds are hard to reach. This became a problem in Renaissance music so keyboard instruments started to favor meantone tunings with more impure fifths, to make 4 fifths modulo octave a better major third in the most common keys. (The video demonstrating the major scale/chords generated by a fifth of 695 cents shows this rationale.) As soon as people began to value pure major thirds in their music the fifths in keyboard music became more tempered. Keyboard tunings with both pure 3/2s and pure 5/4s were not widely used, because of the syntonic comma.
In Renaissance music 12-equal was used for lutes, for example, which shows that even though people knew about 12-tone equal temperament and could approximate 2^(1/12) well, they didn’t like to use it for keyboard instruments. The tuning of the keyboard gradually changed to accommodate all 12 keys of modern Western music as the style of music started to call for more modulations, circa the 18th century. But you are overall correct that different keys on the twelve-tone keyboard sounded different (even in the 18th century).
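That syntonic comma can be computed directly from the ratios; a quick check in Python (numbers only, no audio):

```python
import math

pure_fifth = 3/2
# Stack four pure fifths (e.g. C-G-D-A-E) and drop two octaves
# to land on a major third:
third_from_fifths = pure_fifth ** 4 / 4   # = 81/64
pure_third = 5/4                          # = 80/64
syntonic_comma = third_from_fifths / pure_third   # = 81/80
print(1200 * math.log2(syntonic_comma))   # about 21.5 cents
```

So a keyboard cannot have both pure 3⁄2 fifths and pure 5⁄4 thirds everywhere: the two systems disagree by about a fifth of a semitone, which is glaring in a sustained chord.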
On the one hand, it’s a standard textbook exercise that the difference between pitches of a note in two different tuning systems is never large enough for the human ear to hear it. So, most of the time, the tuning systems are impossible to distinguish.
I find it hard to believe this. If these differences were mostly not significant there would be no reason for the existence of different tuning systems. What kinds of differences between tuning systems are you talking about?
And I suppose that “the white keys”, defined some centuries ago, are a more difficult standard to change than the underlying mathematical assumptions. Right.
Also, the white keys are far from being an arbitrary set of pitches. Very roughly, they’re chosen so that as many combinations of them as possible sound reasonably harmonious together when played on an instrument whose sound has a harmonic spectrum (which applies to most of the tuned instruments used in Western music). I don’t mean that someone deliberately sat down and solved the optimization problem, of course, but it turns out that the Western “diatonic scale” (= the white notes) does rather well by that metric. So it’s not like we’d particularly want to change the scale for the sake of making either the mathematics or the music sound better.
Notes sound good if they’re approximately simple rational multiples of each other. Hence you want your scale to contain multiples.
Since the simplest multiple is x2 we use that for the octave. As for why we break it up into 12 semitones, the reason is that 2^(7/12) is approximately 3⁄2 and as a bonus 2^(4/12) is a passable approximation to 5⁄4.
Similarly, other musical intervals—i.e., ratios between frequencies—have names that are all arguably off by one. A “perfect fifth” is, e.g., from C to G. C,D,E,F,G: five notes. So a fifth plus a fifth is (not a tenth but) a ninth.
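The off-by-one arithmetic can be made concrete; the semitone sizes below assume the major/perfect diatonic intervals from C (an illustration, not anything from the comment above):

```python
# Interval names count letter-names inclusively (C,D,E,F,G = a "fifth"),
# so stacking two intervals adds their names minus one.
semitones = {"second": 2, "third": 4, "fourth": 5, "fifth": 7,
             "sixth": 9, "seventh": 11, "octave": 12, "ninth": 14}

# A fifth plus a fifth is 7 + 7 = 14 semitones, i.e. a ninth, not a tenth:
print(semitones["fifth"] + semitones["fifth"] == semitones["ninth"])  # True
```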
Look up “equal temperament.” There are 12 half-steps in an octave; after each octave the frequency should double, and the simplest way to arrange this is to make each step a multiplication by h = 2^(1/12), so that h^12 = 2.
Many people report that “natural” intervals like the 3:2 and 4:3 ratios sound better than the equal-temperament approximations, though I don’t hear much of a difference myself.
The vast majority of humans don’t have perfect pitch, so the specific pitch of a note matters far less than its relationship to the notes surrounding it. I agree that he is rather showing off, but unless you spend a very large amount of time on ear training, you likely cannot tell when a note is a quarter tone sharp or flat. However, just as there are cycles of notes that always sound amazing together when you run them through variation (see the circle of 5ths), there are notes that sound horrible and jarring. Furthermore, the amount of time it takes to reliably sing quarter tones is ridiculously high; it is something even lifelong trained musicians cannot do. (Of course there is another discussion about how our formulation of music causes this, but let’s set that aside for now.) I think it is far more likely that he has studied a circle of 7ths and 2nds, or something to that effect: he has created a musical algorithm where the pattern itself is so convoluted it is not intuitively detected, and the notes/key changes produced are so horrible that it wears on the mind.
Here is a quarter tone scale. While the changes are detectable right next to each other, hearing, much like sight, delivers perceptions based on pre-established patterns. When laid out in this fashion, you can hear the quarter-tone differences, although to my ears (and I play music professionally, have spent much time in ear training, and love music theory) there are times it sounds like the same note is played twice in succession. Move out of this context, into an interval jump, and while those with good relative pitch may think it sounds “pitchy”, your mind fills it in with a close note; this is why singers with actual pitch problems still manage to gain a following. Most people cannot hear slightly wrong notes. However, none of this approaches the complexity of actually trying to sing a quarter tone. The amount of vocal training required to sing quarter tones at will is the work of a master musician, much like the person who can successfully execute sleight of hand at the highest level is someone who spends decades honing their craft.
I just tried some experiments and I find that if I take Brahms’s lullaby (which I think is the one Eliezer means by “Lullaby and Goodnight”) and flatten a couple of random notes by a quarter-tone, the effect is in most cases extremely obvious. And if I displace each individual pitch by a random amount from a quarter-tone flat to a quarter-tone sharp, then of course some notes are individually detectable as out of tune and some not but the overall effect is agonizing in a way that simply getting some notes wrong couldn’t be.
I’m a pretty decent (though strictly amateur) musician and I’m sure many people wouldn’t find such errors so obvious (and many would find it more painful than I do).
Anyway, I’m not sure what our argument actually is. The chapter says, in so many words, that Q. is humming notes “not just out of key for the previous phrases but sung at a pitch which does not correspond to any key” which seems to me perfectly explicit: part of what makes the humming so dreadful is that Q. is out of tune as well as humming wrong notes. And yes, the ability to sing accurate quarter-tones is rare and requires work to develop. So are lots of the abilities Q. has.
(Of course that doesn’t require that the wrong notes be exactly quarter-tones.)
Python code snippet for anyone who wants to do a similar experiment (warning 1: works only on Windows; warning 2: quality of sound is Quirrell-like):
import random, time, winsound
for (p,d) in [(4,1),(5,1),(7,3),(None,1), (4,1),(5,1),(7,3),(None,1), (4,1),(7,1),(12,2),(11,2),(9,2),(9,2),(7,1),(None,1), (2,1),(4,1),(5,3),(None,1), (2,1),(4,1),(5,3),(None,1), (2,1),(5,1),(11,1),(9,1),(7,2),(11,2),(12,4)]:
    if p is None: time.sleep(0.2*d)
    else: winsound.Beep(int(440*2**((p+1*(random.random()-0.5))/12.)), 200*d)
Here’s a tweak I made that I think keeps to the spirit. (The original omitted the imports and the three bias settings it refers to; the values below are plausible guesses.)

import random, time, winsound

timebias = 0.1    # how far timing may drift per loop (guessed value)
pitchbias = 0.05  # how far a changed note's pitch drifts (guessed value)
changebias = 0.5  # chance of changing a duration rather than a pitch (guessed value)

current = [(4.,1.),(5.,1.),(7.,3.),(None,1.), (4.,1.),(5.,1.),(7.,3.),(None,1.),(4.,1.),(7.,1.),(12.,2.),(11.,2.),(9.,2.),(9.,2.),(7.,1.),(None,1.),(2.,1.),(4.,1.),(5.,3.),(None,1.),(2.,1.),(4.,1.),(5.,3.),(None,1.),(2.,1.),(5.,1.),(11.,1.),(9.,1.),(7.,2.),(11.,2.),(12.,4.)]
timeshift = 1.0
while 1:
    # drift the overall tempo a little each loop, resetting it if it wanders too far
    timeshift = timeshift * random.uniform(1 - timebias, 1 + timebias)
    if timeshift > 1.0 + 2.0 * timebias or timeshift < 1.0 - 2.0 * timebias:
        timeshift = random.uniform(1.0 - timebias / 2.0, 1.0 + timebias / 2.0)
    # pick a random note and permanently tweak either its pitch or its duration
    key = random.randrange(0, len(current))
    if random.random() > changebias and current[key][0] is not None:
        current[key] = (current[key][0] + current[key][0] * random.uniform(-1.0 * pitchbias, pitchbias), current[key][1])
    else:
        current[key] = (current[key][0], current[key][1] + current[key][1] * random.uniform(-1.0 * timebias, timebias))
    time.sleep(random.random())
    for (p, d) in current:
        if p is None: time.sleep(0.2*d * timeshift)
        else: winsound.Beep(int(440*2**(p/12.)), int(200*d*timeshift))
Basically, each loop it tweaks the song slightly and randomly from the one before it. The three bias settings at the top dictate how the song evolves. But besides changing the song itself, the rate of each play also varies randomly (according to the timebias as well).
The timebias applies to changes of timing: the tempo of the playback, the rate of change of note lengths, and the length of pauses are all shifted randomly by it. Increasing this number will create more dramatic swings in timing from run to run (and widen the overall bounds of the tempo).
The pitchbias applies to pitch changes. Increasing it will let the algorithm drift from the normal song much faster. Too high will cause obvious swings in notes. Too low, and it’ll take forever to get a decently maddening change (but perhaps that’s part of the master plan).
The changebias sets the chance that, on a particular loop, the pitch of a random note will change rather than its duration. Each change is carried over to all future plays (and will have a ripple effect).
The result is quite maddening, as parts of the song will randomly trend back towards the correct notes. And notes you could have sworn were wrong will appear normal later. And back and forth it goes. Just repeating, and changing, until you get driven mad (or bored) enough to ^C...
Basically, it’s a genetic algorithm without a binding fitness function. Its random changes will just propagate infinitely towards chaos. But for a very long time it will keep the “feel” of the original song...
Didn’t check it on anything other than chromium, and I can’t guarantee it won’t eventually use all your memory and crash. It’s horrible in many ways: switches key, misses the frequency of notes, changes from 2^(1/12) ratio between semitones, pauses at random and changes note length.
Take a listen, there’s always a chance it’ll stop :D
(Edited for ambiguity.) Come to think of it, skipping notes is the one thing I didn’t do. Note that it starts reasonably close to being in tune and slowly degrades.
Try this instead; it should work on any OS and generate a .wav file you can play. (It’s better than putting up a recording because you can play with the parameters, put in your own tune, etc.)
import math, random, struct, wave
from math import sin, cos, exp, pi

filename = '/home/dgerard/something.wav' # replace with something sensible

def add_note(t, p, d, v):
    # t is time in seconds, p is pitch in Hz, d is duration in seconds
    # v is volume in arbitrary (amplitude) units
    i0 = int(44100*t)
    i1 = int(44100*(t+d))
    if len(signal) < i1: signal.extend([0 for i in range(len(signal), i1)])
    for i in range(i0, i1):
        dt = i/44100. - t
        if dt < 0.02: f = dt/0.02                 # attack: 0..1 over 20ms
        elif dt < 0.2: f = exp(-(dt-0.02)/0.18)   # decay: 1..1/e over 180ms
        elif dt < d-0.2: f = exp(-1)              # sustain: 1/e
        else: f = exp(-1)*(d-dt)/0.2              # release: 1/e..0 over 200ms
        signal[i] += f*v*(sin(2*pi*p*dt) + 0.2*sin(6*pi*p*dt) + 0.06*sin(10*pi*p*dt))

def save_signal():
    m = max(abs(x) for x in signal)
    d = [int(30000./m*x) for x in signal]
    w = wave.open(filename, "wb")
    w.setparams((1, 2, 44100, len(signal), 'NONE', 'noncompressed'))
    w.writeframes(b''.join(struct.pack('h', x) for x in d))
    w.close()

# first pass: the tune with only a random +/- quarter-tone detuning per note
signal = []
t = 0
for (p,d) in [(4,1),(5,1),(7,3),(None,1), (4,1),(5,1),(7,3),(None,1), (4,1),(7,1),(12,2),(11,2),(9,2),(9,2),(7,1),(None,1), (2,1),(4,1),(5,3),(None,1), (2,1),(4,1),(5,3),(None,1), (2,1),(5,1),(11,1),(9,1),(7,2),(11,2),(12,4)]:
    if p is not None: add_note(t, 440*2**((p+1*(random.random()-0.5))/12.), 0.3*d+0.1, 1)
    t += 0.3*d
save_signal()

# second pass: semitone displacements plus detuning plus random delays
# (note: this save overwrites the same file; change filename above to keep both)
signal = []
t = 0
for (p,d) in [(4,1),(5,1),(7,3),(None,1), (4,1),(5,1),(7,3),(None,1), (4,1),(7,1),(12,2),(11,2),(9,2),(9,2),(7,1),(None,1), (2,1),(4,1),(5,3),(None,1), (2,1),(4,1),(5,3),(None,1), (2,1),(5,1),(11,1),(9,1),(7,2),(11,2),(12,4)]:
    if p is not None: add_note(t, 440*2**(((p+random.choice([-1,0,0,0,1]))+random.random())/12.), 0.3*d+0.1, 1)
    t += 0.3*d*math.exp(random.random()*random.random())
save_signal()
It (1) displaces 20% of notes up and 20% of notes down by one semitone, (2) detunes all notes randomly by about +/- a quarter-tone, and (3) inserts random delays, usually quite short but up to a factor of about 1.7 times the length of the preceding note or rest.
[EDITED to add: actually, I think it distorts the pitches just a little too much.]
[FURTHER EDITED: really, it should be tweaked so that when two consecutive notes in the original melody are, say, increasing in pitch, the same is true of the distorted ones. I am too lazy to make this happen. A simpler improvement is to replace the two pitch-diddlings with a single call to random.choice() so that you never get, e.g., a semitone displacement plus a quarter-tone mistuning in the same direction. I also tried making the timbre nastier by putting the partials at non-harmonic frequencies, which does indeed sound quite nasty but not in a particularly hummable way. This doesn’t introduce as much nastiness as it would in music with actual harmony in it; one can make even a perfect fifth sound hideously discordant by messing up the spectrum of the notes. See William Sethares’s excellent book “Tuning, timbre, spectrum, scale” for more details, though he inexplicably gives more attention to making music sound better rather than worse.]
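The 20%-up/20%-down figures come from the random.choice([-1,0,0,0,1]) call; a quick Monte Carlo check, for the skeptical:

```python
import random

random.seed(0)  # for reproducibility
n = 100_000
shifts = [random.choice([-1, 0, 0, 0, 1]) for _ in range(n)]
print(shifts.count(-1) / n)  # roughly 0.2: displaced a semitone down
print(shifts.count(+1) / n)  # roughly 0.2: displaced a semitone up
```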
For further fun, get the code to play the lullaby, wait an exponentially distributed time with mean, say, 30 seconds, and then start again with 99% probability.
If you were using this on someone else, starting again would be mandatory. But the only way to build up hope that it will stop in yourself, when you know how the code works, is to add a small chance of stopping.
Edit: upon further consideration, the distribution should be Pareto or something with a similarly heavy tail.
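A sketch of that restart loop, with both the exponential wait and the heavy-tailed Pareto alternative from the edit (the function names and parameters here are hypothetical; play_once() stands in for whatever actually renders the lullaby):

```python
import random, time

def lullaby_torture(play_once, mean_wait=30.0, p_continue=0.99, heavy_tail=False):
    while True:
        play_once()
        if random.random() >= p_continue:
            break  # the 1-in-100 mercy
        if heavy_tail:
            # Pareto wait: mean is scale * alpha/(alpha-1) = 10 * 3 = 30 s,
            # but with occasional very long silences to keep hope alive
            wait = 10.0 * random.paretovariate(1.5)
        else:
            # exponentially distributed wait with the given mean
            wait = random.expovariate(1.0 / mean_wait)
        time.sleep(wait)
```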
Personally, I find random changes a little disorienting even if I’m expecting them (like a deceptive cadence in a familiar piece). Though this feeling of disorientation is not unpleasant, so a simple loop would be more annoying for me too.
I didn’t find the result all that unpleasant. Probably because the sound file was still pretty close to what the intervals/notes were “supposed” to be, my brain categorized them into the right categories. It would have been worse if I perceived them as “completely wrong” intervals (as in a seventh instead of a fourth) rather than just “out-of-tune” intervals.
I’m not sure that the word “creation” is quite right (except in so far as for some musically-minded people it may bring to mind the other words “representation of chaos”) but yes, I’m afraid it is.
Just to add: (1) The pointless “1*” is because I experimented with other sizes of error too. (2) A slight modification of this lets you, e.g., have the pitch drift downward by 1⁄10 of a semitone per note, which for me at least is very noticeable and unpleasant even though each individual interval is OK.
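A sketch of that drift variant (hypothetical values, numbers only, no audio): each successive note is flattened by an extra 1⁄10 of a semitone, so every local interval stays nearly correct while the whole tune sinks.

```python
# semitone offsets above A440 for an illustrative opening phrase
phrase = [4, 5, 7, 4, 5, 7, 4, 7, 12]
# the n-th note is played 0.1*n semitones flat
freqs = [440 * 2 ** ((p - 0.1 * n) / 12) for n, p in enumerate(phrase)]
# the same written pitch comes out flatter each time it recurs:
print(freqs[0], freqs[3])  # both the same note, but the second is flatter
```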
While all of the evil credit of course goes to you, I feel that I have made some neat* modifications:
# reuses add_note() and save_signal() from the snippet above
signal = []
t = 0
pscale = 5
pexp = 2
transpose = 0
iterations = 10
for ii in range(1, iterations):
    for (p,d) in [(4,1),(5,1),(7,3),(None,1), (4,1),(5,1),(7,3),(None,1), (4,1),(7,1),(12,2),(11,2),(9,2),(9,2),(7,1),(None,1), (2,1),(4,1),(5,3),(None,1), (2,1),(4,1),(5,3),(None,1), (2,1),(5,1),(11,1),(9,1),(7,2),(11,2),(12,4)]:
        if p is not None:
            add_note(t, random.choice([440*2**(((p+transpose)+random.choice([-1,0,0,0,1]))/12.), 440*2**(((p+transpose)+random.random())/12.)]), 0.3*d+0.1, 1)
        t += 0.3*d*math.exp(random.random()*random.random())
    transpose = random.choice([-14, -9, -7, -4.5, -2, -1, 0, 0.5, 1, 2, 4.5, 7, 9, 14]) # transpose up or down
    t += 5*(pexp*((pscale**pexp)/((random.randrange(200,600,1)/100.)**(pexp+1)))) # wait a while before repeating
save_signal()
I didn’t really notice anything wrong with the Sweeney Todd song. It jumped around a lot, and it wasn’t especially good, but it didn’t much bother me.
Also, I’ve listened to a fair bit of weird proggy music.
FYI, in the tuning system commonly used for western music, all notes except A are irrational frequencies in hertz. Example: A below middle C is 220 hertz, and middle C is
(220 * (2 ^ (1/12)) ^ 3) hertz ~= 261.6255653006 hertz.
(To go up a half step, you multiply the frequency by the 12th root of 2.)
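In code, the computation above looks like this (plain arithmetic, nothing assumed beyond the 2^(1/12) half step):

```python
half_step = 2 ** (1 / 12)       # multiply by this to go up one semitone
a3 = 220.0                      # A below middle C, in Hz
middle_c = a3 * half_step ** 3  # middle C is three half steps above that A
print(middle_c)  # about 261.6256 Hz
```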
At risk of derail, how the hell did they ever get a twelfth root into music?
Actually it’s the amplitude that wobbles, and more than slightly.
Thank you, edited.
I’m referring to the name. What relation does it have to eight?
Eight notes: C D E F G A B C. (People used to not know how to count properly.* I think it comes from not having a clear concept of zero.)
* One can argue that this counting system is no worse than ours, but to do so, one would have to explain why ten octaves is seventy[one] notes.
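That 7n+1 arithmetic, for the skeptical:

```python
# Inclusive counting: each octave adds 7 new letter-names on top of the
# starting note, so n octaves span 7*n + 1 notes.
def notes_spanned(octaves):
    return 7 * octaves + 1

print(notes_spanned(1))   # 8 (C D E F G A B C)
print(notes_spanned(10))  # 71
```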
It’s really obvious if you expect any decent math to invoke exponents of 2.
Even without a lot of ear training, you can quite likely hear if a note is a quarter-tone out relative to its predecessors and successors.
Here is a quarter tone scale. While the changes are detectable right next to each other, much like sight delivers images based on pre-established patterns, so does hearing. When laid out in this fashion, you can hear the quarter tone differences- although to my ears (and I play music professionally, have spent much time in ear training, and love music theory) there are times it sounds like two of the same note is played successively. Move out of this context, into an interval jump, and while those with good relative pitch may think it sounds “pitchy”, your mind fills it in to a close note- this is why singers with actual pitch problems still manage to gain a following. Most people cannot hear slightly wrong notes. However, none of this approaches the complexity of actually trying to sing a quarter tone. The amount of vocal training required to sing quarter tones at will is the work of a master musician- much like the the person who can successfully execute slight of hand at the highest level is someone who spends decades in honing their craft.
I just tried some experiments and I find that if I take Brahms’s lullaby (which I think is the one Eliezer means by “Lullaby and Goodnight”) and flatten a couple of random notes by a quarter-tone, the effect is in most cases extremely obvious. And if I displace each individual pitch by a random amount from a quarter-tone flat to a quarter-tone sharp, then of course some notes are individually detectable as out of tune and some not but the overall effect is agonizing in a way that simply getting some notes wrong couldn’t be.
I’m a pretty decent (though strictly amateur) musician and I’m sure many people wouldn’t find such errors so obvious (and many would find it more painful than I do).
Anyway, I’m not sure what our argument actually is. The chapter says, in so many words, that Q. is humming notes “not just out of key for the previous phrases but sung at a pitch which does not correspond to any key” which seems to me perfectly explicit: part of what makes the humming so dreadful is that Q. is out of tune as well as humming wrong notes. And yes, the ability to sing accurate quarter-tones is rare and requires work to develop. So are lots of the abilities Q. has.
(Of course that doesn’t require that the wrong notes be exactly quarter-tones.)
Python code snippet for anyone who wants to do a similar experiment (warning 1: works only on Windows; warning 2: quality of sound is Quirrell-like):
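The snippet itself appears to have been lost in formatting. A minimal sketch in the same spirit, assuming the standard-library `winsound.Beep` (which is what makes it Windows-only and Quirrell-quality); the melody encoding and parameter values here are my own guesses, not the original script:

```python
import random
import time

def semitone_to_hz(semi, base=440.0):
    # Equal temperament: each semitone multiplies frequency by 2**(1/12).
    return base * 2 ** (semi / 12.0)

def detuned(semi, max_cents=50):
    # Shift a note by up to +/- a quarter-tone (50 cents = half a semitone).
    return semi + random.uniform(-max_cents, max_cents) / 100.0

# (semitone offset from A440, duration in beats); None marks a rest.
melody = [(4, 1), (5, 1), (7, 3), (None, 1)]

def play(melody, beat_ms=250):
    try:
        import winsound  # Windows-only standard-library module
    except ImportError:
        print("winsound unavailable; not on Windows")
        return
    for semi, beats in melody:
        if semi is None:
            time.sleep(beats * beat_ms / 1000.0)
        else:
            winsound.Beep(int(round(semitone_to_hz(detuned(semi)))),
                          int(beats * beat_ms))
```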
Here’s a tweak I made that I think keeps to the spirit.
current = [(4.,1.),(5.,1.),(7.,3.),(None,1.), (4.,1.),(5.,1.),(7.,3.),(None,1.),(4.,1.),(7.,1.),(12.,2.),(11.,2.),(9.,2.),(9.,2.),(7.,1.),(None,1.),(2.,1.),(4.,1.),(5.,3.),(None,1.),(2.,1.),(4.,1.),(5.,3.),(None,1.),(2.,1.),(5.,1.),(11.,1.),(9.,1.),(7.,2.),(11.,2.),(12.,4.)]
Basically, each loop it tweaks the song slightly from the one before it, randomly. The three different bias settings on the top dictate how the song evolves. But besides just changing the song, the rate of any play varies randomly (according to the timebias as well).
The timebias applies to changes of timing. So the tempo of the playback, the rate of change of the length of a note, and the length of pauses are all shifted randomly by the timebias. Increasing this number will create more dramatic swings in time changes from run to run (as well as the overall bounds of the tempo).
The pitchbias applies to pitch changes. Increasing it will let the algorithm drift from the normal song much faster. Too high will cause obvious swings in notes. Too low, and it’ll take forever to get a decently maddening change (but perhaps that’s part of the master plan).
The changebias indicates the chance that on a particular loop, the pitch of a random note will change rather than its duration. This change is carried on to all future plays (and will have a ripple effect).
The result is quite maddening, as parts of the song will randomly trend back towards the correct notes. And notes you could have sworn were wrong will appear normal later. And back and forth it goes. Just repeating, and changing until you get driven mad (or bored) enough to ^C...
Basically, it’s a genetic algorithm without a binding fitness function. Its random changes will just propagate infinitely towards chaos. But for a very long time it will have the “feel” of the original song...
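The mutation loop described above can be sketched roughly like this. The exact semantics of the three biases are my own guess from the description, and the values here are illustrative, not the original settings:

```python
import random

# Bias settings as described above (values are illustrative guesses).
timebias = 0.05    # scale of random timing drift per mutation
pitchbias = 0.3    # scale of random pitch drift per mutation
changebias = 0.5   # probability a mutation hits pitch rather than duration

def mutate(song):
    # Pick one note and nudge either its pitch or its duration.
    # The change persists into all later repetitions of the loop.
    song = list(song)
    i = random.randrange(len(song))
    pitch, dur = song[i]
    if random.random() < changebias and pitch is not None:
        pitch += random.gauss(0, pitchbias)
    else:
        dur = max(0.25, dur + random.gauss(0, timebias))
    song[i] = (pitch, dur)
    return song

# Each pass drifts a little further from the original tune.
current = [(4.0, 1.0), (5.0, 1.0), (7.0, 3.0), (None, 1.0)]
for _ in range(10):
    current = mutate(current)
```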
I couldn’t help myself. I had to have a go at making it, too.
http://jsfiddle.net/GVTk2/
Didn’t check it on anything other than chromium, and I can’t guarantee it won’t eventually use all your memory and crash.
It’s horrible in many ways: it switches key, misses the frequencies of notes, deviates from the 2^(1/12) ratio between semitones, pauses at random, and changes note lengths.
Take a listen, there’s always a chance it’ll stop :D
/edit ambiguity. Come to think of it, skipping notes is the one thing I didn’t do. Note that it starts reasonably close to being in tune and slowly degrades.
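One of the tricks listed above, stretching the semitone ratio away from 2^(1/12), is easy to sketch (in Python rather than the fiddle’s JavaScript; the stretch factor here is my own arbitrary choice):

```python
# In equal temperament each semitone is a ratio of 2**(1/12) ~ 1.05946.
# Stretching that ratio slightly keeps the melody's shape while pushing
# every interval subtly out of tune.
EQUAL = 2 ** (1 / 12)

def scale_hz(steps, ratio=EQUAL, base=440.0):
    # Convert semitone steps to frequencies under the given semitone ratio.
    return [base * ratio ** n for n in steps]

in_tune = scale_hz([0, 2, 4, 5, 7])                       # correct ratio
warped = scale_hz([0, 2, 4, 5, 7], ratio=EQUAL * 1.004)   # slightly stretched
```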
This is pretty awesomely horrible, all right! ::applause::
I can’t get this to work in Wine. Could you please put up a recording? Thank you :-)
Try this instead; it should work on any OS and generate a .wav file you can play. (It’s better than putting up a recording because you can play with the parameters, put in your own tune, etc.)
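A script of that shape might write its .wav with nothing but the standard-library `wave` module; this is a sketch of the general approach, not the script being referred to:

```python
import math
import struct
import wave

RATE = 44100  # samples per second

def tone(freq, seconds, volume=0.5):
    # One sine-wave note as 16-bit mono PCM samples; 0 Hz works as a rest.
    n = int(RATE * seconds)
    return b"".join(
        struct.pack("<h",
                    int(volume * 32767 * math.sin(2 * math.pi * freq * i / RATE)))
        for i in range(n)
    )

def write_wav(path, notes):
    # notes: list of (frequency_hz, seconds)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        for freq, secs in notes:
            w.writeframes(tone(freq, secs))

write_wav("lullaby.wav", [(440.0, 0.25), (466.2, 0.25), (0.0, 0.25)])
```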
This is quite Quirrellicious:
It (1) displaces 20% of notes up and 20% of notes down by one semitone, (2) detunes all notes randomly by about +/- a quarter-tone, and (3) inserts random delays, usually quite short but up to a factor of about 1.7 times the length of the preceding note or rest.
[EDITED to add: actually, I think it distorts the pitches just a little too much.]
[FURTHER EDITED: really, it should be tweaked so that when two consecutive notes in the original melody are, say, increasing in pitch, the same is true of the distorted ones. I am too lazy to make this happen. A simpler improvement is to replace the two pitch-diddlings with a single call to random.choice() so that you never get, e.g., a semitone displacement plus a quarter-tone mistuning in the same direction. I also tried making the timbre nastier by putting the partials at non-harmonic frequencies, which does indeed sound quite nasty but not in a particularly hummable way. This doesn’t introduce as much nastiness as it would in music with actual harmony in it; one can make even a perfect fifth sound hideously discordant by messing up the spectrum of the notes. See William Sethares’s excellent book “Tuning, timbre, spectrum, scale” for more details, though he inexplicably gives more attention to making music sound better rather than worse.]
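The three distortions described above (20%/20% semitone displacement, +/- quarter-tone detuning, random inserted delays) can be sketched like this; the probabilities and delay bounds are taken from the description, everything else is my own filling-in:

```python
import random

def quirrellize(melody):
    # melody: list of (semitone, beats); None = rest.
    out = []
    for semi, beats in melody:
        if semi is not None:
            # (1) 20% chance up, 20% chance down by one semitone
            semi += random.choice([-1, 0, 0, 0, 1])
            # (2) detune by up to about +/- a quarter-tone
            semi += random.uniform(-0.5, 0.5)
        out.append((semi, beats))
        # (3) sometimes insert a delay, up to ~1.7x the note just played
        if random.random() < 0.3:
            out.append((None, random.uniform(0.1, 1.7) * beats))
    return out
```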
For further fun, get the code to play the lullaby, wait an exponentially distributed time with mean, say, 30 seconds, and then start again with 99% probability.
If you were using this on someone else, starting again would be mandatory. But the only way to build up hope that it will stop in yourself, when you know how the code works, is to add a small chance of stopping.
Edit: upon further consideration, the distribution should be Pareto or something with a similarly heavy tail.
Please post a recording, for those of us who don’t want to have to set up whole programming environments to watch a Youtube video.
I’ve made a recording with SuperCollider using almost the same algorithm as in the Python script above, here’s the link /watch?v=wjZRM6KgGbE.
It loses much of the impact when you intentionally seek it out, I think. The lullaby loop midi I found to be more annoying than the errors.
Still, thanks for posting that—it’s certainly interesting.
Listening to something is not at all the same as listening to something for seven hours.
Personally, I find random changes a little disorienting even if I’m expecting them (like a deceptive cadence in a familiar piece). Though this feeling of disorientation is not unpleasant, so a simple loop would be more annoying for me too.
“unavailable”: what gives?
Oh well, I guess bad music isn’t actually so annoying… I tried it and it didn’t bother me at all.
Apparently I’m not quite as good at tormenting people as Lord Voldemort. Oh well, can’t win ’em all.
I didn’t find the result all that unpleasant. Probably because the sound file was still pretty close to what the intervals/notes were “supposed” to be, my brain categorized them into the right categories. It would have been worse if I perceived them as “completely wrong” intervals (as in a seventh instead of a fourth) rather than just “out-of-tune” intervals.
Whooo, that is awesome.
So simple, and yet so awful … you’re onto sheer antimusical gold here.
Awesome. Is this your creation?
I’m not sure that the word “creation” is quite right (except in so far as for some musically-minded people it may bring to mind the other words “representation of chaos”) but yes, I’m afraid it is.
Just to add: (1) The pointless “1*” is because I experimented with other sizes of error too. (2) A slight modification of this lets you, e.g., have the pitch drift downward by 1⁄10 of a semitone per note, which for me at least is very noticeable and unpleasant even though each individual interval is OK.
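The drift modification mentioned in (2) is a one-liner: lower the nth note by n tenths of a semitone, so each individual interval stays almost right while the whole tune sags steadily flat (this is my own sketch of the idea, not the actual modification):

```python
def drifted(semitones, per_note=0.1):
    # Lower the nth note by n * per_note semitones; rests (None) pass through.
    return [s - i * per_note if s is not None else None
            for i, s in enumerate(semitones)]
```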
While all of the evil credit of course goes to you, I feel that I have made some neat* modifications:
*Where neat is, of course, a synonym for evil