We think of intervals between tones as being “the same” when there is a constant ratio between them. For instance, if two notes are an octave apart, the frequency of one is twice the other.
Thus, if we want to divide the octave into twelve semitones (which we do have twelve of: C, C#, D, D#, E, F, F#, G, G#, A, A#, B) and we want all of these twelve semitones to be the same interval, then we want each interval to multiply the frequency by 2^(1/12).
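To make that concrete, here's a quick sketch (the conventional A4 = 440 Hz is used only as an arbitrary starting point; the argument doesn't depend on the reference pitch):

```python
# Equal temperament: every semitone is the same ratio, 2**(1/12),
# so twelve of them compound to exactly a factor of 2 (one octave).
SEMITONE = 2 ** (1 / 12)
NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

freq = 440.0  # A4 = 440 Hz, just a conventional reference
for i in range(13):
    print(f"{NAMES[i % 12]:>2}  {freq * SEMITONE ** i:7.2f} Hz")
# The last line lands on exactly 880.00 Hz: one octave above where we started.
```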
Every part of that makes sense except for the lack of E# and B#, and why x2 is called an octave. Thanks for the info, and for reminding me why musical theory is one of three fields I have ever given up on learning.
The reason we avoid E# and B# is to get nice-sounding chords by only using the white keys. This way, the C-E chord has a ratio of 2^(4/12) which is approximately 5⁄4; the C-F chord has a ratio of 2^(5/12) which is approximately 4⁄3; and the C-G chord has a ratio of 2^(7/12) which is approximately 3⁄2.
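For concreteness, here is how close those equal-tempered intervals come to the simple ratios, measured in cents (a cent is 1/100 of an equal-tempered semitone, so an octave is 1200 cents); this is just a numerical check of the approximations above:

```python
import math

def cents(ratio):
    # Size of a frequency ratio in cents; 1200 cents = one octave.
    return 1200 * math.log2(ratio)

for name, semitones, just_ratio in [
    ("C-E (major third)", 4, 5 / 4),
    ("C-F (perfect fourth)", 5, 4 / 3),
    ("C-G (perfect fifth)", 7, 3 / 2),
]:
    tempered = 2 ** (semitones / 12)
    print(f"{name}: {cents(tempered):.1f} cents tempered, "
          f"{cents(just_ratio):.1f} cents just "
          f"(off by {cents(tempered) - cents(just_ratio):+.1f})")
```

The tempered fourth and fifth come out about 2 cents away from 4⁄3 and 3⁄2, while the tempered major third is about 14 cents sharp of 5⁄4.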
In fact, before we understood twelfth roots, people used to tune pianos so that the ratios above were exactly 5⁄4, 4⁄3, and 3⁄2. This made different scales sound different. For instance, the C major triad might have notes in the ratios 4:5:6, while a D major triad might have different ratios, close to the above but slightly off.
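A worked example of that “slightly off”, assuming the textbook 5-limit just-intonation major scale (one standard way to realize those exact ratios; I'm not claiming it's exactly what historical tuners did):

```python
from fractions import Fraction as F

# A standard just-intonation C major scale, each note as a ratio to C.
just = {"C": F(1), "D": F(9, 8), "E": F(5, 4), "F": F(4, 3),
        "G": F(3, 2), "A": F(5, 3), "B": F(15, 8)}

# The C major triad comes out exactly 4:5:6...
print(just["E"] / just["C"], just["G"] / just["C"])   # 5/4 and 3/2

# ...but the fifth from D up to A is 40/27, not 3/2: flat by 81/80,
# the so-called syntonic comma (about 21.5 cents).
print(just["A"] / just["D"])                          # 40/27
print(F(3, 2) / (just["A"] / just["D"]))              # 81/80
```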
There’s also the question of whether the difference between these makes a difference in the sound. There are two answers to that. On the one hand, it’s a standard textbook exercise that the difference between pitches of a note in two different tuning systems is never large enough for the human ear to hear it. So, most of the time, the tuning systems are impossible to distinguish.
On the other hand, there are certain cases in which the human ear can detect very very small differences when a chord is played. To give a simple (though unmusical) example, suppose we played a chord of a 200 Hz note and a 201 Hz note. The human ear, to a first approximation, will hear a single note of approximately 200 Hz. However, the difference between the two notes has a period of 1 second, so what the human ear actually hears is a 200 Hz note whose (EDIT) amplitude wobbles every second. This is very very obvious; it’s a first sign of your piano being out of tune, and in different tuning systems it happens to different chords.
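A small numerical sketch of that beating effect (the 200/201 Hz numbers are the same illustrative ones as above):

```python
import numpy as np

sr = 8000                        # samples per second
t = np.arange(0, 3, 1 / sr)      # three seconds of signal
x = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 201 * t)

# Loudness (RMS) over quarter-second windows: it swells and fades once
# per second, because |201 Hz - 200 Hz| = 1 Hz.
for k in range(12):
    chunk = x[k * sr // 4:(k + 1) * sr // 4]
    print(f"t = {k / 4:4.2f} s   RMS = {np.sqrt(np.mean(chunk ** 2)):.2f}")
```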
The reason we avoid E# and B# is to get nice-sounding chords by only using the white keys.
and only 12 notes per octave. With more notes per octave you can distinguish between F# and Gb without losing much accuracy in the most common keys.
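To illustrate (a sketch; the choice of 19 and 31 notes per octave is only because those are the classic meantone-friendly divisions): if you define F# as six fifths above C and Gb as six fifths below, both reduced into one octave, then in 12-equal they land on the same step, but in 19- or 31-equal they don't, and the fifth is still within a few cents of 3⁄2.

```python
import math

def best_fifth(n):
    # Number of steps of n-equal that best approximates a 3/2 fifth.
    return round(n * math.log2(3 / 2))

for n in (12, 19, 31):
    fifth = best_fifth(n)
    f_sharp = (6 * fifth) % n      # F#: six fifths up from C, octave-reduced
    g_flat = (-6 * fifth) % n      # Gb: six fifths down from C, octave-reduced
    err = 1200 * abs(fifth / n - math.log2(3 / 2))
    print(f"{n}-equal: fifth off by {err:.1f} cents, "
          f"F# = step {f_sharp}, Gb = step {g_flat}")
```

In 31-equal the major third (10 steps, about 387 cents) is also within a cent of a pure 5⁄4.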
In fact, before we understood twelfth roots, people used to tune pianos so that the ratios above were exactly 5⁄4, 4⁄3, and 3⁄2. This made different scales sound different. For instance, the C major triad might have notes in the ratios 4:5:6, while a D major triad might have different ratios, close to the above but slightly off.
Nitpick: I’m no expert in historical tunings, but AFAIK medieval music used pure fifths, where near-pure major thirds are hard to reach. This became a problem in Renaissance music, so keyboard instruments started to favor meantone tunings with more impure fifths, to make four fifths (modulo the octave) a better major third in the most common keys. (The video demonstrating the major scale/chords generated by a fifth of 695 cents shows this rationale.) As soon as people began to value pure major thirds in their music, the fifths in keyboard music became more tempered. Keyboard tunings with both pure 3/2s and pure 5/4s were not widely used, because of the syntonic comma.
In Renaissance music 12-equal was used for lutes, for example, which shows that even though people knew about 12-tone equal temperament and could approximate 2^(1/12) well, they didn’t like to use it for keyboard instruments. The tuning of the keyboard gradually changed to accommodate all 12 keys of modern Western music as the style of music started to call for more modulations, circa the 18th century. But you are overall correct that different keys on the twelve-tone keyboard sounded different (even in the 18th century).
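To put numbers on the syntonic comma and on fifths in the neighborhood of 695 cents (quarter-comma meantone is used here as the standard example of such a tempered fifth):

```python
import math

def cents(ratio):
    return 1200 * math.log2(ratio)

# Four pure 3/2 fifths, folded back down two octaves, give 81/64,
# which overshoots a pure 5/4 major third by 81/80 -- the syntonic comma.
four_fifths = (3 / 2) ** 4 / 4
print(f"{cents(four_fifths):.1f}  {cents(5 / 4):.1f}  {cents(81 / 80):.1f}")
# -> 407.8  386.3  21.5

# Quarter-comma meantone narrows every fifth by a quarter of that comma,
# so that four of them stack up to an exactly pure major third.
fifth = 5 ** 0.25
print(f"{cents(fifth):.1f}")               # ~696.6 cents, vs 702.0 for pure 3/2
print(f"{cents(fifth ** 4 / 4):.1f}")      # 386.3: exactly a just major third
```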
On the one hand, it’s a standard textbook exercise that the difference between pitches of a note in two different tuning systems is never large enough for the human ear to hear it. So, most of the time, the tuning systems are impossible to distinguish.
I find it hard to believe this. If these differences were mostly not significant there would be no reason for the existence of different tuning systems. What kinds of differences between tuning systems are you talking about?
Actually it’s the amplitude that wobbles, and more than slightly.
Thank you, edited.
And I suppose that “the white keys”, defined some centuries ago, are a more difficult standard to change than the underlying mathematical assumptions. Right.
Also, the white keys are far from being an arbitrary set of pitches. Very roughly, they’re chosen so that as many combinations of them as possible sound reasonably harmonious together when played on an instrument whose sound has a harmonic spectrum (which applies to most of the tuned instruments used in Western music). I don’t mean that someone deliberately sat down and solved the optimization problem, of course, but it turns out that the Western “diatonic scale” (= the white notes) does rather well by that metric. So it’s not like we’d particularly want to change the scale for the sake of making either the mathematics or the music sound better.
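One crude way to see part of this (only a sketch of one property, not the full “optimization” story): count how often each interval class occurs among pairs of white keys.

```python
from itertools import combinations
from collections import Counter

white = [0, 2, 4, 5, 7, 9, 11]   # C D E F G A B, as semitones above C

# Reduce each pair's interval to an "interval class" (a fifth and a fourth
# count as the same class, a major third and a minor sixth likewise, etc.).
classes = Counter(min((b - a) % 12, (a - b) % 12)
                  for a, b in combinations(white, 2))
print(sorted(classes.items()))
# -> [(1, 2), (2, 5), (3, 4), (4, 3), (5, 6), (6, 1)]
```

The fourth/fifth class, the most consonant one after the octave, occurs the most often (6 of the 21 pairs), and the tritone occurs exactly once, which is at least consistent with the “as harmonious as possible” description above.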
Notes sound good together if they’re approximately simple rational multiples of each other. Hence you want your scale to contain such multiples.
Since the simplest multiple is x2, we use that for the octave. As for why we break it up into 12 semitones, the reason is that 2^(7/12) is approximately 3⁄2, and as a bonus 2^(4/12) is a passable approximation to 5⁄4.
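A sketch of why 12 in particular: among equal divisions of the octave, print every division that approximates a 3⁄2 fifth better than all smaller ones do (the cutoff at 24 is arbitrary, just to keep the list short).

```python
import math

LOG_FIFTH = math.log2(3 / 2)

best = float("inf")
for n in range(1, 25):
    steps = round(n * LOG_FIFTH)              # the best fifth available in n-equal
    err = 1200 * abs(steps / n - LOG_FIFTH)   # its error, in cents
    if err < best:
        best = err
        print(f"{n:2d} notes/octave: best fifth is off by {err:5.1f} cents")
# Only 1, 2, 3, 5, 7 and 12 appear; 12 gets the fifth to within ~2 cents,
# and nothing up to 24 does better.
```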
I’m referring to the name. What relation does it have to eight?
Eight notes: C D E F G A B C. (People used to not know how to count properly.* I think it comes from not having a clear concept of zero.)
* One can argue that this counting system is no worse than ours, but to do so, one would have to explain why ten octaves is seventy[one] notes.
Similarly, other musical intervals—i.e., ratios between frequencies—have names that are all arguably off by one. A “perfect fifth” is, e.g., from C to G. C,D,E,F,G: five notes. So a fifth plus a fifth is (not a tenth but) a ninth.
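The off-by-one arithmetic, as a tiny sketch (the helper name is mine, just for illustration): interval names count scale steps inclusively, so combining two intervals means adding their steps and then adding one back.

```python
def combine(interval_a, interval_b):
    """Combine two interval names given as ordinal numbers (5 = "fifth")."""
    # An "nth" spans n - 1 scale steps, because both endpoints are counted.
    steps = (interval_a - 1) + (interval_b - 1)
    return steps + 1

print(combine(5, 5))   # fifth + fifth = 9  (a ninth, not a tenth)
print(combine(8, 8))   # octave + octave = 15 (a fifteenth, not a sixteenth)
```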