I have just recalled an anecdote about the symptoms of trying to explain something incoherent. If (so I read) you hypnotize someone and suggest to them that they can see a square circle drawn on the wall, fully circular and fully a square, they have the experience of seeing a square circle. Now, I’m somewhat sceptical about the reality of hypnosis, but not at all sceptical about the physical ability of a brain to have that experience, despite the fact that there is no such thing as a square circle.
If you ask that person (the story goes on) to draw what they see, they start drawing, but keep on erasing and trying again, frustrated by the fact that what they draw always fails to capture the thing they are trying to draw.
Edit: the story is from Edward de Bono’s book “Lateral Thinking: An Introduction” (previously published as “The Use of Lateral Thinking”).
I think you might be reading a bit too much into things here. Eliezer is exceptional in his verbal communication abilities, whatever his other skills and flaws, so if even the most rationalist of rationalists set out to write the sequences without being on par with Eliezer’s verbal skills, they likely would not have been as successful: they would have gotten lost and left lots of dangling pointers to things to explain later. Chapman is facing the normal problems of trying to explain a complex thing when you’re closer to the mean.
The problem with this reasoning is: if you can’t explain it, just how exactly are you so sure that there is any “it” to explain?
If we’re talking about some alleged practical skill, or some alleged understanding of a physical phenomenon, etc., then this is no obstacle. Can’t explain it in words? Fine; simply demonstrate the skill, or make a correct prediction (or several) based on your understanding, and it’ll be clear at once that you really do have said skill, or really do possess said understanding.
For example, say I claim to know how to make a delicious pie. You are skeptical, and demand an explanation. I fumble and mumble, and eventually confess that I’m just no good at explaining. But I don’t have to explain! I can just bake you a pie. So I can be perfectly confident that I have pie-baking skills, because I have this pie; and you can be confident in same, for the same reason.
Similar logic applies to alleged understanding of the real world.
But with something like this—some deep philosophical issue—how can you demonstrate to me that you know something I don’t, or understand something I don’t, without explaining it to me? Now, don’t get me wrong; maybe you really do have some knowledge or understanding. Not all that is true can be easily demonstrated.
But without a clear verbal explanation, not only do I have no good reason at all to believe that “there’s a ‘there’ there”… but neither do you!
I may well have knowledge of things through experience that I cannot verbalize well or explain to myself in a systematized way. To suggest that I can only have such knowledge if I can explain it is to assume against the point I’m making in the original post. You are free to disagree with that point, but I want to make clear that I take this to be a difference of assumptions, not a difference of reasoning from a shared assumption.
So… you have experience you can’t verbalize or explain, to yourself or to others; and through this experience, you gain knowledge, which you also can’t adequately verbalize or explain, to yourself or to others? Nor can you in any meaningful way demonstrate this knowledge or its fruits?
Once again, I do not claim that this proves that you don’t have any special knowledge or understanding that you claim to have. But it seems clear to me that you have no good reason for believing that you have any such thing—and much less does anyone else have any reason to believe this.
And, with respect, whatever assumption you have made, which would lead you to conclude otherwise… I submit to you that this assumption has taken you beyond the bounds of sanity.
Improvements to subjective well-being can be extremely legible from the inside and fairly noisy from the outside.
Fair enough, that’s true. To be clear, then—is that all this is about? “Improvements to subjective well-being”?
I would imagine it’s also about: not having such an explanation right now, but being confident you will have one soon. For an extreme case with high confidence: I see a ‘proof’ that 1=2. I may be confident that there is at least one mistake in the proof before I find it. With less confidence, I may guess that the error is ‘dividing by zero’ before I see it.
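For reference, here is the classic fallacious derivation of this kind (assuming that is the sort of ‘proof’ meant), with the hidden division by zero marked:

$$
\begin{aligned}
a &= b && \text{assumption} \\
a^2 &= ab && \text{multiply both sides by } a \\
a^2 - b^2 &= ab - b^2 && \text{subtract } b^2 \\
(a+b)(a-b) &= b(a-b) && \text{factor} \\
a + b &= b && \text{divide by } a-b \text{ (the error: } a-b=0\text{)} \\
2b &= b && \text{substitute } a = b \\
2 &= 1 && \text{divide by } b
\end{aligned}
$$

Every step is valid except the division by $a-b$, which is zero precisely because $a=b$; being able to predict that one bad step before locating it is the analogue of being confident an explanation exists before you can give it.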