It surprises people like Greg Egan, and they’re not entirely stupid, because brains are Turing complete modulo the finite memory—there’s no analogue of that for visible wavelengths.
If this weren’t Less Wrong, I’d just slink away now and pretend I never saw this, but:
I don’t understand this comment, but it sounds important. Where can I go and what can I read that will cause me to understand statements like this in the future?
When speaking about sensory inputs, it makes sense to say that different species (even different individuals) have different ranges, so one can perceive something that another can’t.
With computation it is known that sufficiently strong programming languages are in some sense equal. For example, you could speak about relative advantages of Basic, C/C++, Java, Lisp, Pascal, Python, etc., but in each of these languages you can write a simulator of the remaining ones. This means that if an algorithm can be implemented in one of these languages, it can be implemented in all of them—in the worst case, it would be implemented as a simulation of another language running its native implementation.
There are some technical details, though. Simulating another program is slower and requires more memory than running the original program. So it could be argued that on given hardware you could write a program in language X which uses all the memory and all the available time, and it would not follow that you can write the same program in language Y. But on this level of abstraction we ignore hardware limits. We assume that the computer is fast enough and has enough memory for whatever purpose. (More precisely, we assume that in the available time a computer can do any finite number of computation steps, but it cannot do an infinite number of steps. The memory is also unlimited, but in a finite time you can only manage to use a finite amount of memory.)
So on this level of abstraction we only care about whether something can or cannot be implemented by a computer. We ignore time and space (i.e. speed and memory) constraints. Some problems can be solved by algorithms, others cannot. (Then, there are other interesting levels of abstraction which care about the time and space complexity of algorithms.)
Are all programming languages equal in the above sense? No. For example, although programmers generally want to avoid infinite loops in their programs, if you remove the potential for infinite loops from the programming language (e.g. in Pascal you forbid “while” and “repeat” commands, and the possibility of calling functions recursively), you lose the ability to simulate programming languages which have this potential, and you lose the ability to solve some problems. On the other hand, some universal programming languages seem extremely simple—a famous example is the Turing machine. This is very useful, because it is easier to do mathematical proofs about a simple language. For example, if you invent a new programming language X, all you have to do to prove its universality is to write a Turing machine simulator, which is usually very simple.
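To make that last point concrete, here is a minimal sketch of such a simulator in Python (my own illustration, not from the thread). The encoding—a transition table keyed by (state, symbol)—and the example “bit-flipping” machine are arbitrary choices for demonstration:

```python
# A minimal Turing machine simulator: the tape is a dict from position to
# symbol (blank by default), and the transition table maps
# (state, symbol) -> (symbol_to_write, move_direction, next_state).
def run_tm(transitions, tape_input, start, accept, blank="_", max_steps=10_000):
    tape = {i: s for i, s in enumerate(tape_input)}
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            # Read off the used portion of the tape, trimming blanks.
            lo, hi = min(tape, default=0), max(tape, default=0)
            return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)
        symbol = tape.get(head, blank)
        new_symbol, move, state = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    raise RuntimeError("step limit reached (the machine may not halt)")

# Example machine: invert every bit, halting at the first blank.
flip = {
    ("s", "0"): ("1", "R", "s"),
    ("s", "1"): ("0", "R", "s"),
    ("s", "_"): ("_", "R", "halt"),
}
print(run_tm(flip, "1011", start="s", accept="halt"))  # -> 0100
```

The step limit is only a practical guard; a true Turing machine has no such bound, which is exactly where the halting problem comes from.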
Now back to the original discussion… Eliezer suggests that brain functionality should be likened to computation, not to sensory input. A human brain is computationally universal, because (given enough time, pen and paper) we can simulate a computer program, so all brains should be equal when optimally used (differing only in speed and use of resources). In another comment he adds that ability to compute isn’t the same as ability to understand. Therefore (my conclusion) what one human can understand, another human can at least correctly calculate without understanding, given a correct algorithm.
Not a complete answer, but here’s commentary from a ffdn review of Chapter 14:
Kevin S. Van Horn, 7/24/10, chapter 14: Harry is jumping to conclusions when he tells McGonagall that the Time-Turner isn’t even Turing computable. Time travel simulation is simply a matter of solving a fixed-point equation f(x) = x. Here x is the information sent back in time, and f is a function that maps the information received from the future to the information that gets sent back in time. If a solution exists at all, you can find it to any desired degree of accuracy by simply enumerating all possible rational values of x until you find one that satisfies the equation. And if f is known to be continuous and to have a convex compact range, then the Brouwer fixed-point theorem guarantees that there will be a solution.
So the only way I can see that simulating the Time-Turner wouldn’t be Turing computable would be if the physical laws of our universe give rise to fixed-point equations that have no solutions. But the existence of the Time-Turner then proves that the conditions leading to no solution can never arise.
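Van Horn’s enumeration procedure can be sketched in a few lines of Python. The function f below is a toy stand-in of my own choosing (nothing from the story), built to have a fixed point at x = 1/2; the search enumerates rationals in [0, 1] by increasing denominator:

```python
from fractions import Fraction

def fixed_point(f, max_denominator=100, tol=1e-9):
    """Enumerate rationals p/q in [0, 1] by increasing denominator until
    one approximately satisfies the fixed-point equation f(x) = x."""
    for q in range(1, max_denominator + 1):
        for p in range(0, q + 1):
            x = Fraction(p, q)
            if abs(f(x) - x) < tol:
                return x
    return None  # no fixed point found within the search bounds

# Toy f: the "information sent back" maps to itself at x = 1/2.
f = lambda x: Fraction(1, 2) + (x - Fraction(1, 2)) ** 2
print(fixed_point(f))  # -> 1/2
```

The catch the review goes on to note is the interesting part: this search only terminates if a solution exists at all.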
I got the impression that what “not Turing-computable” meant is that there’s no way to only compute what ‘actually happens’; you have to somehow iteratively solve the fixed-point equation, maybe necessarily generating experiences (waves hands confusedly) corresponding to the ‘false’ timelines.
A computational system is Turing complete if certain features of its operation can reproduce those of a Turing machine, which is a sort of bare-bones abstracted model of the low-level process of computation. This is important because you can, in principle, simulate the active parts of any Turing complete system in any other Turing complete system (though doing so will be inefficient in a lot of cases); in other words, if you’ve got enough time and memory, you can calculate anything calculable with any system meeting a fairly minimal set of requirements. Thanks to this result, we know that there’s a deep symmetry between different flavors of computation that might not otherwise be obvious. There are some caveats, though: in particular, the idealized version of a Turing machine assumes infinite memory.
Now, to answer your actual question, the branch of mathematics that this comes from is called computability theory, and it’s related to the study of mathematical logic and formal languages. The textbook I got most of my understanding of it from is Hopcroft, Motwani, and Ullman’s Introduction to Automata Theory, Languages, and Computation, although it might be worth looking through the “Best Textbooks on Every Subject” thread to see if there’s a consensus on another.
Curious, does “memory space” mean something more than just “memory”?
Just a little more specific. Some people may hear “memory” and associate it with, say, the duration of their memories rather than how much can be physically held. For example, when a human is said to have a ‘really good memory’ we don’t tend to be making a claim about the theoretical maximum amount of stuff they could remember.
No, although either or both might be a little misleading depending on what connotations you attach to it: an idealized Turing machine stores all its state on a rewritable tape (or several tapes, but that’s equivalent to the one-tape version) of symbols that’s infinite in both directions. You could think of that as analogous to both memory and disk, or to whatever the system you’re actually working with uses for storage.
brains are Turing complete modulo the finite memory
What does that statement mean in the context of thoughts?
That is, when I think about human thoughts I think about information processing algorithms, which typically rely on hardware set up for that explicit purpose. So even though I might be able to repurpose my “verbal manipulation” module to do formal logic, that doesn’t mean I have a formal logic module.
Any defects in my ability to repurpose might be specific to me: I might be able to think the thought “A->B, ~A, therefore ~B” with the flavor of trueness, while another person can only think that thought with the flavor of falseness. If the truth flavor is as much a part of the thought as the textual content, then the second thinker cannot think the thought that the first thinker can.
Aren’t there people who can hear sounds but not music? Are their brains not Turing complete? Are musical thoughts ones they cannot think?
It means nothing, although Greg Egan is quite impressed by it. Sad but true: Someone with an IQ of, say, 90 can be trained to operate a Turing machine, but will in all probability never understand matrix calculus. The belief that Turing-complete = understanding-complete is false. It just isn’t stupid.
[That human brains are Turing-complete] means nothing, although Greg Egan is quite impressed by it. Sad but true: Someone with an IQ of, say, 90 can be trained to operate a Turing machine, but will in all probability never understand matrix calculus.
It doesn’t mean nothing; it means that people (like machines) can be taught to do things without understanding them.
(They can also be taught to understand, provided you reduce understanding to Turing-machine computations, which is harder. “Understanding that 1+1 = 2” is not the same thing as being able to output “2” to the query “1+1=”.)
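A toy illustration of that distinction (purely my own, and deliberately silly): a lookup table can emit the right answer for exactly the queries it was built from, with nothing resembling understanding behind it, and it fails the moment it must generalize.

```python
# A pure lookup table "does arithmetic" on the queries it was built for,
# but has no procedure to fall back on outside the table.
table = {"1+1=": "2", "2+2=": "4", "3+3=": "6"}

def rote_answer(query):
    return table.get(query, "???")

print(rote_answer("1+1="))    # -> 2
print(rote_answer("17+17="))  # -> ???
```

An agent that understood addition would handle the second query as easily as the first; the table, by construction, cannot.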
I would imagine that he can be taught matrix calculus, given sufficient desire (on his and the teachers’ parts), teaching skill, and time. I’m not sure if in practice it is possible to muster enough desire or time to do it, but I do think that understanding is something that can theoretically be taught to anyone who can perform the mechanical calculations.
Have you ever tried to teach math to anyone who is not good at math? In my youth I once tutored a woman who was poor but motivated enough to pay $40/session. A major obstacle was teaching her how to calculate (a^b)^c and getting her to reliably notice that minus times minus equals plus. Despite my attempts at creative physical demonstrations of the notion of a balanced scale, I couldn’t get her to really understand the notion of doing the same things to both sides of a mathematical equation. I don’t think she would ever understand what was going on in matrix calculus, period, barring “teaching methods” that involve neural reprogramming or gain of additional hardware.
Your claim is too large for the evidence you present in support of it.
Teaching someone math who is not good at math is hard, but “will in all probability never understand matrix calculus”!? I don’t think you’re using the Try Harder.
Assume teaching is hard (list of weak evidence: it’s a three year undergraduate degree; humanity has hardly allowed itself to run any proper experiments in the field, and those that have been run seem usually to be generally ignored by professional practitioners; it’s massively subject to the typical mind fallacy and most practitioners don’t know that fallacy exists). That you, “in your youth” (without having studied teaching), “once” tutored a woman who you couldn’t teach very well… doesn’t support any very strong conclusion.
It seems very likely to me that Omega could teach matrix calculus to someone with IQ 90 given reasonable time and motivation from the student. One of the things I’m willing to devote significant resources to in the coming years is making education into a proper science. Given the tools of that proper science I humbly submit that you could teach your former student a lot. Track the progress of the Khan Academy for some promising developments in the field.
humanity has hardly allowed itself to run any proper experiments in the field, and those that have been run seem usually to be generally ignored by professional practitioners
What are the experiments that are generally ignored?
I’d intended a different meaning of “hard”. On reflection your interpretation seems a very reasonable inference from what I wrote.
What I meant:
Teaching is hard enough that you shouldn’t expect to find it easy without having spent any time studying it. Even as a well educated westerner, the bits of teaching you can reasonably expect to pick up won’t take you far down the path to mastery.
No, I haven’t, and reading your explanation I now believe that there is a fair chance you are correct. However, one problem I have with it is that you’re describing a few points of frustration, some of which I assume you ended up overcoming. I am not entirely convinced that had she spent, say one hundred hours studying each skill that someone with adequate talent could fully understand in one, she would not eventually fully understand it.
In cases of extreme trouble, I can imagine her spending forty hours working through a thousand examples, until mechanically she can recognise every example reasonably well, and find the solution correctly, then another twenty working through applications, then another forty hours analysing applications in the real world until the process of seeing the application, formulating the correct problem, and solving it becomes internalised. Certainly, just because I can imagine it doesn’t make it true, but I’m not sure on what grounds I should prefer the “impossibility” hypothesis to the “very very slow learning” hypothesis.
What was your impression of her intelligence otherwise?
Suzette Haden Elgin (a science fiction author and linguist who was quite intelligent with and about words) described herself as intractably bad at math.
This anecdote gives very little information on its own. Can you describe your experience teaching math to other people—the audience, the investment, the methods, the outcome? Do you have any idea whether that one woman eventually succeeded in learning some of what you couldn’t teach her, and if so, how?
(ETA: I do agree with the general argument about people who are not good at math. I’m only saying this particular story doesn’t tell us much about that particular woman, because we don’t know how good you are at teaching, etc.)
I fear you’re committing the typical mind fallacy. The dyscalculic could simulate a Turing machine, but all of mathematics, including basic arithmetic, is whaargarbl to them. They’re often highly intelligent (though of course the diagnosis is “intelligent elsewhere, unintelligent at maths”), good at words and social things, but literally unable to calculate 17+17 more accurately than “somewhere in the twenties or thirties” or “I have no idea” without machine assistance. I didn’t believe it either until I saw it.
Well, I certainly don’t disbelieve in it now. I first saw it at eighteen, in first-year psychology, in the bit where they tried to beat basic statistics into our heads.
I can’t imagine how hard it is to learn to program if you don’t instinctively know how. Yet I know it is that hard for many people. Some succeed in learning, some don’t. Those who do still have big differences in ability, and ability at a young age seems to be a pretty good predictor of lifetime ability.
I realize I must have learned the basics at some point, although I don’t remember it. And I remember learning many more advanced concepts during the many years since. But for both the basics and the advanced subjects, I never experienced anything I can compare to what I’d call “learning” in other subjects I studied.
When programming, if I see/read something new, I may need some time (seconds or hours) to understand it, then once I do, I can use it. It is cognitively very similar to seeing a new room for the first time. It’s novel, but I understand it intuitively and in most cases quickly.
When I studied e.g. biology or math at university, I had to deliberately memorize, to solve exercises before understanding the “real thing”, to accept that some things I could describe I couldn’t duplicate by building them from scratch no matter how much time I had and what materials and tools. This never happened to me in programming. I may not fully understand the domain problem that the program is manipulating. But I always understand the program itself.
And yet I’ve seen people struggle to understand the most elementary concepts of programming, like, say, distinguishing between names and values. I’ve had to work with some pretty poor programmers, and had the official job of on-the-job mentoring newbies on two occasions. I know it can be very difficult to teach effectively, it can be very difficult to learn.
Given that I encountered a heavily preselected set of people, who were trying to make programming their main profession, it’s easy for me to believe that—at the extreme—for many people elementary programming is impossible to learn, period. And the same should apply to math and any other “abstract” subject for which biologically normal people don’t have dedicated thinking modules in their brains.
The belief that Turing-complete = understanding-complete is false. It just isn’t stupid.
I’m not sure what you mean by understanding-complete, but remember that the turing-complete system is both the operator and any machinery they are manipulating.
Obviously the man in the Chinese room lacks understanding, by most common definitions of understanding. It is the room as a system which understands Chinese. (Assuming lookup tables can understand. By functional definitions, they should be able to.)
But with a person it becomes a bit more complicated because it depends on what we are referring to when we say their name. I was trying to make an allusion to Blindsight.
Aren’t there people who can hear sounds but not music?
FWIW I’ve read a study that says about 50% of people can’t tell the difference between a major and a minor chord even when you label them happy/sad. [ETA: Happy/sad isn’t the relevant dimension, see the replies to this comment.] I have no idea how probable that is, but if true it would imply that half of the American population basically can’t hear music.
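For what it’s worth, the acoustic difference being tested is tiny: in equal temperament, C major and C minor triads differ only in the middle note, which drops by one semitone (about 6% in frequency). A quick sketch of the standard tuning arithmetic (my illustration, not from the study):

```python
# Equal-tempered frequency relative to A4 = 440 Hz:
# f = 440 * 2**((n - 9) / 12), where n = semitones above C4.
def freq(semitones_above_c4):
    return 440.0 * 2 ** ((semitones_above_c4 - 9) / 12)

c_major = [freq(n) for n in (0, 4, 7)]  # C4, E4, G4
c_minor = [freq(n) for n in (0, 3, 7)]  # C4, Eb4, G4

# Only the middle note differs: E4 (~329.6 Hz) vs Eb4 (~311.1 Hz).
print([round(f, 1) for f in c_major])  # -> [261.6, 329.6, 392.0]
print([round(f, 1) for f in c_minor])  # -> [261.6, 311.1, 392.0]
```

That the whole major/minor distinction hangs on one semitone in one voice makes the reported result a bit less implausible than it first sounds.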
This is weird. It is hard for me to hear the difference in the cadence, but it’s crystal clear otherwise. In the cadence, the problem for me is that the notes drag on, like when you press the pedal on a piano a bit, which makes it hard to discern the difference.
Maybe they lost something in retelling here? Made up new stimuli for which it doesn’t work because of harmonics or something?
Or maybe it’s just me and everyone on this thread? I have a lot of trouble hearing speech through noise (like that of flowing water); I always have to tell others, “I’m not hearing what you’re saying, I’m washing the dishes.” Though I’ve no idea how well other people can hear something when they are washing the dishes; maybe I care too much not to pretend to listen when I don’t hear.
The following recordings are played on an acoustic instrument by a human (me), and they have spaces in between the chords. The chord sequences are randomly generated (which means that the major-to-minor ratio is not necessarily 1:1, but all of them do have a mixture of major and minor chords).

Each of the following two recordings is a sequence of eight C major or C minor chords:

Edit 2 (2012-Apr-22): I added another recording that contains these chords:

F B♭ C F
F B♭ Cmi F

repeated over and over, while the balance between the voices is varied, from “all voices roughly equal” to “only the second voice from the top audible”. The second voice from the top is the only one that is different on the C minor chord. My idea is that hearing the changing voice foregrounded from its context like this might make it easier to pick it out when it’s not foregrounded.
Ditto for me—the difference between the two chords is crystal clear, but in the cadence I can barely hear it.
I’m not a professional, but I sang in school chorus for 6 years, was one of the more skilled singers there, I’ve studied a little musical theory, and I apparently have a lot of natural talent. And the first time I heard the version played in cadence I didn’t notice the difference at all. Freaky. I know how that post-doc felt when she couldn’t hear the difference in the chords.
Still, the notes drag on, the notes have harmonics, etc. It is not pure sine waves that abruptly stop and give time for the ear to ‘clear’ of afterimage-like sound.
I hear the difference in the cadence, it’s just that I totally can’t believe it can possibly be clearer than just the one chord then another chord. I can tell apart just the two chords at much lower volume level and/or paying much less attention.
I’ve had between a dozen and two dozen music students over the years. (Guitar and bass guitar.) Some of them started out having trouble telling the difference between ascending and descending intervals. (In other words, some of them had bad ears.) All of them improved, and all of them, with practice, were able to hear me play something and play it back by ear. I’m sure there are some people who are neurologically unable to do this, but in general, it is a learnable skill.

Edit: One disadvantage to that exercise/game for people who aren’t already familiar with the intervals is that it doesn’t have you differentiate between major and minor intervals. (So if you select e.g. 2 and 8 as your intervals, you’ll be hearing three different intervals, because some of the 2nds will be minor rather than major.) Sooner or later I’ll write my own interval game!
I was going to comment about how the individual chords were clearly different to my ear but the “stereotypical I-IV-V-I cadential sequences” were indistinguishable, precisely the reverse of the experience the Bell Labs post doc reportedly reported. Then I read the comments on the article and realized this is fairly common, so I deleted the comment. Then I decided to comment on it anyway. Now I have.
And me. I guess—as the most probable explanation—they just lost something crucial in the retelling. The notes drag on a fair bit in the second part. I can hear the difference if I really concentrate. But it’s like a typo in the text, if the text were blurred.
At first, I found it unbelievable. Then, I remembered that I have imperfect perfect pitch: I learned both piano and french horn; the latter of which is transposed up a perfect fourth. Especially when I’m practicing regularly, I can usually name a note or simple chord when I hear it; but I’m often off by a perfect fourth.
Introspecting on the difference between being right about a note and wrong about a note makes me believe people can confuse major and minor, but still enjoy music.
Might have something to do with the fact that happy/sad is neither an accurate nor an encompassing description of the uses of major/minor chords, unless you place a C major and a C or A minor directly next to each other. I for one find that when I try to tell the difference solely on that basis, I might as well flip a coin and my success rate would go down only slightly. When I come at it from other directions and ignore the emotive impact, my success rate is much higher.
In short: Your conclusion doesn’t follow from the evidence.
Yeah, I spotted that after making my comment, but after that I wasn’t sure whether you were citing the same source material or no. The actual evidence does say a lot more about how humans (don’t?) perceive musical sounds. Thanks for clarifying, though.
There’s the halting problem, so here you go. There’s also the thoughts that you’ll never arrive at because your arriver at the thoughts won’t reach them, even if you could think them if told of them.
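The halting problem’s diagonal argument can be sketched directly: any purported halting oracle is refuted by a program built to do the opposite of whatever the oracle predicts about that program itself. A Python sketch (illustrative only; the “oracle” here is a deliberately wrong stand-in, since a real one can’t exist):

```python
# Suppose halts(f, x) could always decide whether f(x) halts.
# Then this adversary does the opposite of the oracle's prediction.
def make_trouble(halts):
    def trouble(f):
        if halts(f, f):       # oracle claims f(f) halts...
            while True:       # ...so loop forever
                pass
        return "halted"       # oracle claims f(f) loops, so halt at once
    return trouble

# Any concrete "oracle" is refuted on its own diagonal. For instance,
# one that always answers "doesn't halt":
claims_never_halts = lambda f, x: False
trouble = make_trouble(claims_never_halts)
print(trouble(trouble))  # -> halted, contradicting the oracle's claim
```

(An oracle that always answered “halts” would be refuted the other way: trouble(trouble) would loop forever.)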
Wow. That’s really cool, thank you. Upvoted you, jeremysalwen and Nornagest. :)
Could you also explain why the HPMoR universe isn’t Turing computable? The time-travel involved seems simple enough to me.
Sounds rather like our own universe, really.
There’s also the problem of an infinite number of possible solutions.
The number of solutions is finite but (very, very, mind-bogglingly) large.
Ah. It’s math.
:) Thanks.
Right, I know that. Was just curious why the extra verbiage in a post meant to explain something.
Because it’s late and I’m long-winded. I’ll delete it.
https://en.wikipedia.org/wiki/Turing_completeness
What does that statement mean in the context of thoughts?
That is, when I think about human thoughts I think about information processing algorithms, which typically rely on hardware set up for that explicit purpose. So even though I might be able to repurpose my “verbal manipulation” module to do formal logic, that doesn’t mean I have a formal logic module.
Any defects in my ability to repurpose might be specific to me: I might able to think the thought “A-> B, ~A, therefore ~B” with the flavor of trueness, and another person can only think that thought with the flavor of falseness. If the truth flavor is as much a part of the thought as the textual content, then the second thinker cannot think the thought that the first thinker can.
Aren’t there people who can hear sounds but not music? Are their brains not Turing complete? Are musical thoughts ones they cannot think?
It means nothing, although Greg Egan is quite impressed by it. Sad but true: Someone with an IQ of, say, 90 can be trained to operate a Turing machine, but will in all probability never understand matrix calculus. The belief that Turing-complete = understanding-complete is false. It just isn’t stupid.
It doesn’t mean nothing; it means that people (like machines) can be taught to do things without understanding them.
(They can also be taught to understand, provided you reduce understanding to Turing-machine computations, which is harder. “Understanding that 1+1 = 2” is not the same thing as being able to output “2″ to the query “1+1=”.)
I would imagine that he can be taught matrix calculus, given sufficient desire (on his and the teachers’ parts), teaching skill, and time. I’m not sure if in practice it is possible to muster enough desire or time to do it, but I do think that understanding is something that can theoretically be taught to anyone who can perform the mechanical calculations.
Have you ever tried to teach math to anyone who is not good at math? In my youth I once tutored a woman who was poor, but motivated enough to pay $40/session. A major obstacle was teaching her how to calculate (a^b)^c and getting her to reliably notice that minus times minus equals plus. Despite my attempts at creative physical demonstrations of the notion of a balanced scale, I couldn’t get her to really understand the notion of doing the same things to both sides of a mathematical equation. I don’t think she would ever understand what was going on in matrix calculus, period, barring “teaching methods” that involve neural reprogramming or gain of additional hardware.
Your claim is too large for the evidence you present in support of it.
Teaching someone math who is not good at math is hard, but “will in all probability never understand matrix calculus”!? I don’t think you’re using the Try Harder.
Assume teaching is hard (list of weak evidence: it’s a three-year undergraduate degree; humanity has hardly allowed itself to run any proper experiments in the field, and those that have been run seem generally to be ignored by professional practitioners; it’s massively subject to the typical mind fallacy and most practitioners don’t know that fallacy exists). That you, “in your youth” (without having studied teaching), “once” tutored a woman whom you couldn’t teach very well… doesn’t support any very strong conclusion.
It seems very likely to me that Omega could teach matrix calculus to someone with IQ 90 given reasonable time and motivation from the student. One of the things I’m willing to devote significant resources to in the coming years is making education into a proper science. Given the tools of that proper science I humbly submit that you could teach your former student a lot. Track the progress of the Khan Academy for some promising developments in the field.
What are the experiments that are generally ignored?
Some of it is weak evidence for the hardness claim (the three-year degree), some against (all the rest). Does that match what you meant?
I’d intended a different meaning of “hard”. On reflection your interpretation seems a very reasonable inference from what I wrote.
What I meant: Teaching is hard enough that you shouldn’t expect to find it easy without having spent any time studying it. Even as a well educated westerner, the bits of teaching you can reasonably expect to pick up won’t take you far down the path to mastery.
(Thank you for your comment—it got me thinking.)
No, I haven’t, and reading your explanation I now believe that there is a fair chance you are correct. However, one problem I have with it is that you’re describing a few points of frustration, some of which I assume you ended up overcoming. I am not entirely convinced that had she spent, say one hundred hours studying each skill that someone with adequate talent could fully understand in one, she would not eventually fully understand it.
In cases of extreme trouble, I can imagine her spending forty hours working through a thousand examples, until mechanically she can recognise every example reasonably well, and find the solution correctly, then another twenty working through applications, then another forty hours analysing applications in the real world until the process of seeing the application, formulating the correct problem, and solving it becomes internalised. Certainly, just because I can imagine it doesn’t make it true, but I’m not sure on what grounds I should prefer the “impossibility” hypothesis to the “very very slow learning” hypothesis.
I can’t imagine how hard it would be to learn math without the concept of referential transparency.
Not all that hard if that’s the only sticking point. I acquired it quite late myself.
What was your impression of her intelligence otherwise?
Suzette Haden Elgin (a science fiction author and linguist who was quite intelligent with and about words) described herself as intractably bad at math.
This anecdote gives very little information on its own. Can you describe your experience teaching math to other people—the audience, the investment, the methods, the outcome? Do you have any idea whether that one woman eventually succeeded in learning some of what you couldn’t teach her, and if so, how?
(ETA: I do agree with the general argument about people who are not good at math. I’m only saying this particular story doesn’t tell us much about that particular woman, because we don’t know how good you are at teaching, etc.)
I fear you’re committing the typical mind fallacy. The dyscalculic could simulate a Turing machine, but all of mathematics, including basic arithmetic, is whaargarbl to them. They’re often highly intelligent (though of course the diagnosis is “intelligent elsewhere, unintelligent at maths”), good at words and social things, but literally unable to calculate 17+17 more accurately than “somewhere in the twenties or thirties” or “I have no idea” without machine assistance. I didn’t believe it either until I saw it.
Do you find this harder to believe than, say, aphasia? I’ve never seen it, but I have no difficulty believing it.
Well, I certainly don’t disbelieve in it now. I first saw it at eighteen, in first-year psychology, in the bit where they tried to beat basic statistics into our heads.
I can’t imagine how hard it is to learn to program if you don’t instinctively know how. Yet I know it is that hard for many people. Some succeed in learning, some don’t. Those who do still have big differences in ability, and ability at a young age seems to be a pretty good predictor of lifetime ability.
I realize I must have learned the basics at some point, although I don’t remember it. And I remember learning many more advanced concepts during the many years since. But for both the basics and the advanced subjects, I never experienced anything I can compare to what I’d call “learning” in other subjects I studied.
When programming, if I see/read something new, I may need some time (seconds or hours) to understand it, then once I do, I can use it. It is cognitively very similar to seeing a new room for the first time. It’s novel, but I understand it intuitively and in most cases quickly.
When I studied e.g. biology or math at university, I had to deliberately memorize, to solve exercises before understanding the “real thing”, to accept that some things I could describe I couldn’t duplicate by building them from scratch no matter how much time I had and what materials and tools. This never happened to me in programming. I may not fully understand the domain problem that the program is manipulating. But I always understand the program itself.
And yet I’ve seen people struggle to understand the most elementary concepts of programming, like, say, distinguishing between names and values. I’ve had to work with some pretty poor programmers, and had the official job of on-the-job mentoring newbies on two occasions. I know it can be very difficult to teach effectively, and it can be very difficult to learn.
Given that I encountered a heavily preselected set of people, who were trying to make programming their main profession, it’s easy for me to believe that—at the extreme—for many people elementary programming is impossible to learn, period. And the same should apply to math and any other “abstract” subject for which biologically normal people don’t have dedicated thinking modules in their brains.
I’m not sure what you mean by understanding-complete, but remember that the Turing-complete system is both the operator and any machinery they are manipulating.
So you are considering a man in a Chinese room to lack understanding?
Obviously the man in the Chinese room lacks understanding, by most common definitions of understanding. It is the room as a system which understands Chinese. (Assuming lookup tables can understand. By functional definitions, they should be able to.)
But with a person it becomes a bit more complicated because it depends on what we are referring to when we say their name. I was trying to make an allusion to Blindsight.
It means you could, in theory, run an AI on them (slowly).
FWIW I’ve read a study that says about 50% of people can’t tell the difference between a major and a minor chord even when you label them happy/sad. [ETA: Happy/sad isn’t the relevant dimension, see the replies to this comment.] I have no idea how probable that is, but if true it would imply that half of the American population basically can’t hear music.
http://languagelog.ldc.upenn.edu/nll/?p=2074
It shocked the hell out of me, too.
This is weird. It is hard for me to hear the difference in the cadence, but crystal clear otherwise. In the cadence, the problem for me is that the notes drag on, like when you press the pedal on a piano a bit, which makes it hard to discern the difference.
Maybe they lost something in retelling here? Made up new stimuli for which it doesn’t work because of harmonics or something?
Or maybe it’s just me and everyone on this thread? I have a lot of trouble hearing speech through noise (like that of flowing water); I always have to tell others, “I can’t hear what you’re saying, I’m washing the dishes.” Though I’ve no idea how well other people can hear something when they are washing the dishes; maybe I care too much not to pretend to listen when I don’t hear.
This needs proper study.
The following recordings are played on an acoustic instrument by a human (me), and they have spaces in between the chords. The chord sequences are randomly generated (which means that the major-to-minor ratio is not necessarily 1:1, but all of them do have a mixture of major and minor chords).
Each of the following two recordings is a sequence of eight C major or C minor chords:
major-minor-1.mp3
major-minor-2.mp3
Each of the following two recordings is a sequence of eight “cadences”: groups of four chords that are either
F B♭ C F
or
F B♭ C minor F
cadences-1.mp3
cadences-2.mp3
Edit: Here’s a listing of the chords in all four sound files.
Edit 2 (2012-Apr-22): I added another recording that contains these chords:
repeated over and over, while the balance between the voices is varied, from “all voices roughly equal” to “only the second voice from the top audible”. The second voice from the top is the only one that is different on the C minor chord. My idea is that hearing the changing voice foregrounded from its context like this might make it easier to pick it out when it’s not foregrounded.
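For anyone who wants to reason about (or regenerate) stimuli like the ones above, here is a minimal sketch of my own, assuming standard equal temperament with A4 = 440 Hz. It computes the pitches of the C major and C minor triads; note that the two chords differ only in the middle voice, by a single semitone.

```python
# Equal-temperament frequency: A4 = 440 Hz, each semitone multiplies by 2**(1/12).
def note_freq(semitones_from_a4):
    return 440.0 * 2 ** (semitones_from_a4 / 12)

# C4 is 9 semitones below A4. The major third (E4) is 4 semitones above C4,
# the minor third (E♭4) only 3 -- that one-semitone difference in the middle
# voice is the entire major/minor distinction being discussed.
C4  = note_freq(-9)   # ~261.63 Hz
Eb4 = note_freq(-6)   # ~311.13 Hz
E4  = note_freq(-5)   # ~329.63 Hz
G4  = note_freq(-2)   # ~392.00 Hz

c_major = (C4, E4, G4)
c_minor = (C4, Eb4, G4)
```

Feeding these frequencies into any sine-wave synthesizer would give the abruptly stopping, harmonic-free test tones some commenters below wish for.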
Ditto for me—the difference between the two chords is crystal clear, but in the cadence I can barely hear it.
I’m not a professional, but I sang in school chorus for 6 years, was one of the more skilled singers there, I’ve studied a little musical theory, and I apparently have a lot of natural talent. And the first time I heard the version played in cadence I didn’t notice the difference at all. Freaky. I know how that post-doc felt when she couldn’t hear the difference in the chords.
I added another recording. See “Edit 2” in this comment for an explanation.
Nope, the audio examples are all straightforward realizations of the corresponding music notation. (They are easy for me to tell apart.)
Still, the notes drag on, the notes have harmonics, etc. These are not pure sine waves that abruptly stop and give the ear time to ‘clear’ of the afterimage-like sound.
I hear the difference in the cadence, it’s just that I totally can’t believe it can possibly be clearer than just the one chord then another chord. I can tell apart just the two chords at much lower volume level and/or paying much less attention.
I am with you on easily telling the two apart in the original chords but being unable to reliably tell the difference in the cadence version.
I’ve had between a dozen and two dozen music students over the years. (Guitar and bass guitar.) Some of them started out having trouble telling the difference between ascending and descending intervals. (In other words, some of them had bad ears.) All of them improved, and all of them, with practice, were able to hear me play something and play it back by ear. I’m sure there are some people who are neurologically unable to do this, but in general, it is a learnable skill.
The cognitive fun! website has a musical interval exercise.
Edit: One disadvantage to that exercise/game for people who aren’t already familiar with the intervals is that it doesn’t have you differentiate between major and minor intervals. (So if you select e.g. 2 and 8 as your intervals, you’ll be hearing three different intervals, because some of the 2nds will be minor rather than major.) Sooner or later I’ll write my own interval game!
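The core of such an interval game is small enough to sketch. The following is my own illustration (names and structure are hypothetical; a real version would add audio playback and answer-checking in a front end), and it includes the major/minor distinction the existing exercise lacks.

```python
import random

# Semitone counts for the intervals, including the major/minor distinction
# that the exercise criticized above leaves out.
INTERVALS = {
    "minor 2nd": 1, "major 2nd": 2,
    "minor 3rd": 3, "major 3rd": 4,
    "perfect 4th": 5, "perfect 5th": 7,
    "minor 6th": 8, "major 6th": 9,
    "minor 7th": 10, "major 7th": 11,
    "octave": 12,
}

def interval_frequencies(base_freq, name):
    """Return (lower, upper) frequencies of a named interval in equal temperament."""
    return base_freq, base_freq * 2 ** (INTERVALS[name] / 12)

def random_question(rng=random):
    """Pick an interval at random; a front end would play both tones and quiz the user."""
    name = rng.choice(list(INTERVALS))
    return name, interval_frequencies(440.0, name)
```

Distinguishing "minor 3rd" from "major 3rd" here is exactly the major/minor-third discrimination discussed in the chord thread above.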
is this what you’re looking for?
http://www.musictheory.net/exercises/ear-interval
That’s pretty cool. Are there keybindings?
I don’t know, doesn’t look like it.
Likewise.
I was going to comment about how the individual chords were clearly different to my ear but the “stereotypical I-IV-V-I cadential sequences” were indistinguishable, precisely the reverse of the experience the Bell Labs post doc reportedly reported. Then I read the comments on the article and realized this is fairly common, so I deleted the comment. Then I decided to comment on it anyway. Now I have.
I had to listen to that second part several times before I could pick up the difference too. They sound equivalent unless I concentrate.
And me. I guess—as the most probable explanation—they just lost something crucial in retelling. The notes drag on a fair bit in the second part. I can hear the difference if I really concentrate. But it’s like a typo in the text, if the text were blurred.
The second sequence sounded jarringly wrong to me, FWIW.
At first, I found it unbelievable. Then, I remembered that I have imperfect perfect pitch: I learned both piano and french horn; the latter of which is transposed up a perfect fourth. Especially when I’m practicing regularly, I can usually name a note or simple chord when I hear it; but I’m often off by a perfect fourth.
Introspecting on the difference between being right about a note and wrong about a note makes me believe people can confuse major and minor, but still enjoy music.
Might have something to do with the fact that happy/sad is neither an accurate nor an encompassing description of the uses of major/minor chords, unless you place a C major and a C or A minor directly next to each other. I for one find that when I try to tell the difference solely on that basis, I might as well flip a coin and my success rate would go down only slightly. When I come at it from other directions and ignore the emotive impact, my success rate is much higher.
In short: Your conclusion doesn’t follow from the evidence.
I stated the evidence incorrectly, look at the uncle/aunt of your comment (if you haven’t already) for the actual evidence.
Yeah, I spotted that after making my comment, but after that I wasn’t sure whether you were citing the same source material or no. The actual evidence does say a lot more about how humans (don’t?) perceive musical sounds. Thanks for clarifying, though.
I’m curious; 50% of what sample? total human population or USians or what?
There’s the halting problem, so here you go. There are also the thoughts that you’ll never arrive at because your arriver at the thoughts won’t reach them, even if you could think them if told of them.
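The halting-problem point can be made concrete with the standard diagonal argument. The sketch below is my own illustration, modeling “programs” as zero-argument Python callables: for any candidate halting-checker you hand it, it constructs a program that the checker is guaranteed to misjudge.

```python
# Classic diagonal argument: for ANY candidate halting-checker we can build a
# program it gets wrong. A "checker" is any function mapping a zero-argument
# callable to True ("it halts") or False ("it loops forever").
def make_trouble(candidate_halts):
    def trouble():
        if candidate_halts(trouble):
            while True:          # checker said "halts" -> loop forever
                pass
        return None              # checker said "loops" -> halt immediately
    return trouble

def checker_is_wrong_about_its_trouble(candidate_halts):
    """Show the candidate misjudges its own trouble-maker, without looping forever."""
    trouble = make_trouble(candidate_halts)
    verdict = candidate_halts(trouble)
    if verdict:
        return True   # checker claims "halts", but trouble() would loop forever
    trouble()         # checker claims "loops", yet this call returns at once
    return True       # ...so the checker is wrong either way
```

Whatever the checker answers, `trouble` does the opposite, so no total, always-correct `halts` function can exist.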