a perfect world is difficult to achieve … most also would not have expected Wikipedia to work out as well as it does
A perfect world is, of course, impossible to achieve (not to mention that what’s perfect to you is probably not so for other people) and as to Wikipedia, there are longer lists than yours of its shortcomings and problems. Is it highly useful? Of course. Will it ever get close to perfect? Of course not.
I was fascinated by LW and thought it possible to make great leaps towards some form of truth. I now consider that unwarranted exuberance.
Sure. But this is an observation about your mind, not about LW.
High relevance to the reader who is an aspiring rationalist.
“Aspiring rationalist” is a content-free expression. It tells me nothing about what you consider “wrong” or “relevant”.
Discussions of AI mostly end where they become interesting.
Heed the typical mind fallacy. Other people are not you. What you find interesting is not necessarily what others find interesting. Your dilemmas or existential issues are not their dilemmas or existential issues.
For example, I don’t find the question of “shall we enforce a police state” interesting. The answer is “No”, case closed, we’re done. Notice that I’m speaking about myself—you, being a different person, might well be highly interested in extended discussion of the topic.
if you truly think there is an Animal Holocaust (which Singer does), the answer may not be donating $50 to some animal charity.
Yeah, sure, you go join an Animal Liberation Front of some sort, but what’s particularly interesting or rational about that? It’s a straightforward consequence of the values you hold.
Heed the typical mind fallacy. Other people are not you. What you find interesting is not necessarily what others find interesting. Your dilemmas or existential issues are not their dilemmas or existential issues.
For example, I don’t find the question of “shall we enforce a police state” interesting. The answer is “No”, case closed, we’re done. Notice that I’m speaking about myself—you, being a different person, might well be highly interested in extended discussion of the topic.
I strongly disagree and think it is unrelated to the typical mind fallacy. OK, the word “interesting” was too imprecise. However, the argument deserves a deeper look in my opinion. Let me rephrase it as: “Discussions of AI sometimes end where they have serious implications for real life.” Especially if you do not enjoy entertaining the thought of a police state and increased surveillance, you should be worried when respected rational essayists come to conclusions that include them as an option. Closing your case when confronted with possible results from a chain of argumentation won’t make them disappear. And a police state, to stay with the example, is either an issue for almost everybody (if it comes into existence) or for nobody. Hence, this is detached from, and not about, my personal values.
Let me rephrase it as: “Discussions of AI sometimes end where they have serious implications for real life.”
I agree, that would be a bad thing.
Closing your case when confronted with possible results from a chain of argumentation won’t make them disappear.
Of course not, but given my values and my estimates of how likely certain future scenarios are, I have already come to certain conclusions. For them to change, either the values or the probabilities have to change. I find it unlikely that my values will change as the result of eschatological discussions on the ’net, and the discussions about the probabilities of Skynet FOOMing can be had (and probably should be had) without throwing the police state into the mix.
In general, I don’t find talking about very specific scenarios in the presence of large Knightian uncertainty to be terribly useful.