I just have a hard time believing that people who write essays like this could be so wrong. That’s why I allow for the possibility that they are right and that I simply do not understand the issue. Can you rule out that possibility? And if that were the case, what would it mean to spread it even further? You see, that’s my problem.
Indeed. On the other hand, humans frequently use intelligence to do much stupider things than they could have done without that degree of intelligence. Previous brilliance means that future strange ideas should be taken seriously, but not that the future ideas must be even more brilliant because they look so stupid. Ray Kurzweil is an excellent example—an undoubted genius of real achievements, but also now undoubtedly completely off the rails and well into pseudoscience. (Alkaline water!)
Ray on alkaline water:
http://glowing-health.com/alkaline-water/ray-kurzweil-alkaine-water.html
See, RationalWiki is a silly wiki full of rude people. But one thing we know a lot about is woo. That reads like a parody of woo.
Scary.
I don’t think that’s credible. Eliezer has focused much of his intelligence on avoiding “brilliant stupidity”, orders of magnitude more so than any Kurzweil-esque example.
So the thing to do in this situation is to ask them: “excuse me wtf are you doin?” And this has been done.
So far there’s been no explanation, nor even acknowledgement of how profoundly stupid this looks. This does nothing to make them look smarter.
Of course, as I noted, a truly amazing Xanatos retcon is indeed not impossible.
There is no problem.
If you observe an action (A) that you judge so absurd that it casts doubt on the agent’s (G) rationality, then your confidence (C1) in G’s rationality should decrease. If C1 was previously high, then your confidence (C2) in your judgment of A’s absurdity should decrease.
So if someone you strongly trust to be rational does something you strongly suspect to be absurd, the end result ought to be that your trust and your suspicions are both weakened. Then you can ask yourself whether, after that modification, you still trust G’s rationality enough to believe that there exist good reasons for A.
The only reason it feels like a problem is that human brains aren’t good at this. It sometimes helps to write it all down on paper, but mostly it’s just something to practice until it gets easier.
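Writing it down with made-up numbers might look something like the sketch below (a minimal Python example; the function name and every figure are my own assumptions, not anything about the actual case). Treat C1 as the prior that G’s action is actually fine, and treat your absurdity judgment as a fallible detector with its own error rates; conditioning on “it looks absurd to me” then weakens both the trust and the verdict relative to taking either at face value.

```python
# A minimal sketch of the update described above, with entirely made-up numbers.
# c1: prior trust that G's action is actually sensible.
# The "absurdity detector" is my own judgment, which is also fallible.

def posterior_good(c1, p_flag_given_good, p_flag_given_bad):
    """P(action is actually good | it looks absurd to me), by Bayes' rule."""
    numerator = c1 * p_flag_given_good
    denominator = numerator + (1 - c1) * p_flag_given_bad
    return numerator / denominator

c1 = 0.95                 # strong prior trust in G's rationality (hypothetical)
p_flag_given_good = 0.10  # my judgment flags a good action as "absurd" 10% of the time
p_flag_given_bad = 0.90   # and correctly flags a genuinely bad action 90% of the time

p_good = posterior_good(c1, p_flag_given_good, p_flag_given_bad)
print(f"P(A is actually fine | looks absurd) = {p_good:.2f}")      # ~0.68, down from 0.95
print(f"P(A really is absurd | looks absurd) = {1 - p_good:.2f}")  # ~0.32, down from 0.90
```

The exact numbers don’t matter; the point is that the posterior lands strictly between “G is almost certainly right” and “my absurdity detector is almost certainly right,” which is what “both are weakened” means here.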
In the meantime, what I would recommend is giving some careful thought to why you trust G, and why you think A is absurd, independent of each other. That is: what’s your evidence? Are C1 and C2 at all calibrated to observed events?
If you conclude at the end of it that one or the other is unjustified, your problem dissolves and you know which way to jump. No problem.
If you conclude that they are both justified, then your best bet is probably to assume the existence of either evidence or arguments that you’re unaware of (more or less as you’re doing now)… not because “you can’t rule out the possibility” but because it seems more likely than the alternatives. Again, no problem.
And the fact that other people don’t end up in the same place simply reflects the fact that their prior confidence was different, presumably because their experiences were different and they don’t have perfect trust in everyone’s perfect Bayesianness. Again, no problem… you simply disagree.
Working out where you stand can be a useful exercise. In my own experience, I find it significantly diminishes my impulse to argue the point past where anything new is being said, which generally makes me happier.
This comment is also relevant.
Another thing: rationality is best expressed as a percentage, not a binary. I might look at the virtues and say “wow, I bet this guy only makes mistakes 10% of the time! That’s fantastic!” But then when I see something that looks like a mistake, I’m not afraid to call it that. I just expect to see fewer of them.
What issue? The forbidden one? You are not even supposed to be thinking about that! For penance, go and say 30 “Hail Yudkowskys”!