I have an unfortunate tendency to initially overestimate the intelligence of people I talk to. Stories and pointers that I consider trivially obvious seem to be too complex for most of the people here.
I’d say what you’re overestimating is how much like you other people are. (Perhaps you consider that statement redundant.)
When a particular koan is considered to be easy, and people don’t get it, my estimation of them drops.
When the meaning can easily be found with a quick Google search, and people don’t search for it yet demand to be told what it means, my estimation of them drops.
And when people talk about rationality and becoming more rational, but don’t make an effort to be so or to do so, guess what happens?
Perhaps I should just give up. I’m not certain what good I can do in a community that collectively never considered the possibility that certain ideas are communicated only indirectly for good reasons.
Very simple! Write a post explicitly explaining why certain ideas are communicated only indirectly for good reasons! And if you think that idea is itself communicated only indirectly for good reasons… sounds to me like a too-convenient coincidence.
“why certain ideas are communicated only indirectly for good reasons” by Pre.
Rather than use an obscure example like Zen, we’ll use a fairly simple idea: Learning how to catch a ball.
Now I can directly explain to you how a ball is caught. I can describe the simultaneous ballistic equations that govern the flight of the ball, instruct you on how to alter your idea of where the ball will land based on Bayesian reasoning given certain priors and measured weather conditions.
These things are almost certainly needed if you’re gonna program a computer to catch a ball.
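To make the “direct explanation” concrete, here is a minimal sketch of what it might look like as code: an ideal-projectile range formula, plus a precision-weighted Bayesian update of the catcher’s estimate of the landing point. Every number and function name here is invented for illustration; real ballistics (drag, spin, wind) is messier.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_distance(speed, angle_deg):
    """Range of an ideal projectile launched from ground level (no drag)."""
    angle = math.radians(angle_deg)
    return (speed ** 2) * math.sin(2 * angle) / G

def update_estimate(prior_mean, prior_var, obs, obs_var):
    """Fold one noisy observation into a Gaussian estimate,
    weighting prior and observation by their precisions (1/variance)."""
    precision = 1 / prior_var + 1 / obs_var
    mean = (prior_mean / prior_var + obs / obs_var) / precision
    return mean, 1 / precision

true_range = landing_distance(20.0, 45.0)  # ~40.8 m
mean, var = 35.0, 25.0                     # vague prior: "about 35 m"
for obs in (41.0, 40.5, 41.2):             # noisy sightings of the ball
    mean, var = update_estimate(mean, var, obs, 4.0)

print(round(true_range, 1), round(mean, 1))
```

Which, of course, is exactly the kind of thing you would hand to a computer and never to a human catcher.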
If you’re gonna teach a human to catch a ball though, you’re just going to have to throw a lot of balls at them and tell ’em to keep their eyes on it.
I suspect most Zen koans are just poor jokes, but if there’s a point to ’em it’s the same as the point of throwing those balls at a student catcher.
Just to get you to practice thinking in that way. Because you, as a human, will become better at the things you practice.
If the thing you are practising is spouting existential bullshit this may or may not be a good idea. ;)
If you say you’re teaching someone how to catch balls, and then provide them with sequences of equations, there’s a dangerous meta-message involved. You’re conveying the (unspoken, implicit) idea that the equations are what’s needed to make the student good at catching.
If the student then believes that because they’ve mastered the equations they’ve learned how to catch, they’ll go out into the world—and fail and fail and fail.
One real-life example of this may be people who attain high status in martial arts training schools and then get themselves slaughtered in actual fights, where the only rules are those of physics and people have chosen optimized strategies for reality.
These things are almost certainly needed if you’re gonna program a computer to catch a ball.
Incidentally, that’s not how we tend to program computers to do things like catch balls (successfully). We instead build a sort-of general learning system attached to grasping and visual systems, and then teach it through observation.
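As a toy illustration of “teach it through observation” (a deliberately crude sketch, not a description of any real robotics system): the program below never sees the ballistic equations; it just memorises example throws and predicts a new throw’s landing point from the most similar throw it has watched.

```python
import math

def simulate_throw(speed, angle_deg):
    """Stand-in for watching a real throw land (ideal projectile, no drag)."""
    a = math.radians(angle_deg)
    return (speed ** 2) * math.sin(2 * a) / 9.81

# "Observation" phase: watch a grid of example throws and remember
# (throw parameters, observed landing distance) pairs.
memory = [((v, a), simulate_throw(v, a))
          for v in (10.0, 15.0, 20.0, 25.0)
          for a in (20.0, 35.0, 50.0, 65.0)]

def predict(speed, angle_deg):
    """Predict by recalling the most similar remembered throw
    (nearest neighbour in parameter space; no physics involved)."""
    _, landing = min(memory,
                     key=lambda m: (m[0][0] - speed) ** 2
                                   + (m[0][1] - angle_deg) ** 2)
    return landing

# The learner's guess vs. what actually happens for an unseen throw:
print(round(predict(19.0, 48.0), 1), round(simulate_throw(19.0, 48.0), 1))
```

The prediction is rough, but it gets better the more throws you show it; that is the “practice” half of the analogy, and modern systems just replace the nearest-neighbour lookup with a better function approximator.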
I’d say what you’re overestimating is how much like you other people are. (Perhaps you consider that statement redundant.)
It could also straightforwardly result from all kinds of self-overestimation.
But first explaining how to catch a ball won’t keep the person from then learning how to catch it.
In fact, such an explanation can help to assure them that catching a ball is possible before they commit to practicing.
I would expect their real-life experience to be sufficient to convince them that it’s possible to catch a ball.
More importantly, if they’re not sure that’s possible, they shouldn’t be looking for someone to teach them how to do it. They should be trying to determine if it’s possible before they do anything else.