I am not sure of the point here. I read it as “I can imagine a perfect world and LW is not it”. Well, duh.
No. I think all the points indicate that a perfect world is difficult to achieve, since rationalist forums are in part self-defeating (though maybe not impossible; most would also not have expected Wikipedia to work out as well as it has). At the moment, Less Wrong may be the worst form of forum, except for all the others. My point in other words: I was fascinated by LW and thought it possible to make great leaps towards some form of truth. I now consider that unwarranted exuberance. I met a few people whom I highly respect and whom I consider aspiring rationalists. They were not interested in forums, congresses, etc. I now suspect that many of our fellow rationalists are, and benefit from being, somewhat lone wolves, and that the ones we see are curious exceptions.
There are also a lot of words (like “wrong”) that the OP knows the meaning of, but I do not. For example, I have no idea what the “wrong opinions” are that rational discussions apparently have a tendency to support. Or what the “high relevancy” of missing articles is—relevancy to whom?
High relevancy to the reader who is an aspiring rationalist. The discussions of AI mostly end where they become interesting. Assuming that AI is an existential risk, shall we enforce a police state? Shall we invest in surveillance? Some may even suggest seeking a Terminator-like solution and trying to stop scientific research (which I did not say is feasible). Those are the kinds of questions that inevitably come up, and I have seen them discussed nowhere but in the last chapter of Superintelligence, in about three sentences, and somewhat in SSC’s Moloch (maybe you can find more sources, but it’s surely not mainstream). In summary: if Musk’s $10M constitute a significant share of humanity’s effort to reduce the risk of AI, some may view that as evidence of progress and some as evidence for the necessity of other, and maybe more radical, approaches. The same in EA: if you truly think there is an Animal Holocaust (which Singer does), the answer may not be donating $50 to some animal charity.
Wrong opinions: if, as just argued, not all the relevant evidence and conclusions are discussed, it follows that the resulting opinions are likely to be less than perfect. There are some examples in the article.
And, um, do you believe that your postings will be free from that laundry list of misfeatures you catalogued?
No. Nash probably wouldn’t cooperate, even though he understood game theory, and I wouldn’t blame him. I may simply stop posting (which sounds like a cop-out or a threat, but I just see it as one logical conclusion).
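To make the Nash aside concrete, here is a toy one-shot Prisoner’s Dilemma sketch in Python (the payoff numbers are my own illustrative choice, nothing canonical). It only shows why a player reasoning purely game-theoretically defects no matter what the other side does, which is roughly what I mean by “wouldn’t cooperate”.

```python
# Toy one-shot Prisoner's Dilemma (payoffs are illustrative, not from any source).
# Defection is the dominant strategy, so (defect, defect) is the unique Nash
# equilibrium, even though mutual cooperation would leave both players better off.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(their_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"), key=lambda mine: PAYOFFS[(mine, their_move)])

for their_move in ("cooperate", "defect"):
    print(f"If they {their_move}, my best response is to {best_response(their_move)}.")
# Both lines print "defect": defecting dominates regardless of the other player.
```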
a perfect world is difficult to achieve … most would also not have expected Wikipedia to work out as well as it has
A perfect world is, of course, impossible to achieve (not to mention that what’s perfect to you is probably not so for other people) and as to Wikipedia, there are longer lists than yours of its shortcomings and problems. Is it highly useful? Of course. Will it ever get close to perfect? Of course not.
I was fascinated by LW and thought it possible to make great leaps towards some form of truth. I now consider that unwarranted exuberance.
Sure. But this is an observation about your mind, not about LW.
High relevancy to the reader who is an aspiring rationalist.
“Aspiring rationalist” is a content-free expression. It tells me nothing about what you consider “wrong” or “relevant”.
The discussions of AI mostly end where they become interesting.
Heed the typical mind fallacy. Other people are not you. What you find interesting is not necessarily what others find interesting. Your dilemmas or existential issues are not their dilemmas or existential issues.
For example, I don’t find the question of “shall we enforce a police state” interesting. The answer is “No”, case closed, we’re done. Notice that I’m speaking about myself—you, being a different person, might well be highly interested in extended discussion of the topic.
if you truly think there is an Animal Holocaust (which Singer does), the answer may not be donating $50 to some animal charity.
Yeah, sure, you go join an Animal Liberation Front of some sort, but what’s particularly interesting or rational about it? It’s a straightforward consequence of the values you hold.
Heed the typical mind fallacy. Other people are not you. What you find interesting is not necessarily what others find interesting. Your dilemmas or existential issues are not their dilemmas or existential issues.
For example, I don’t find the question of “shall we enforce a police state” interesting. The answer is “No”, case closed, we’re done. Notice that I’m speaking about myself—you, being a different person, might well be highly interested in extended discussion of the topic.
I strongly disagree and think this is unrelated to the typical mind fallacy. OK, the word “interesting” was too imprecise; however, the argument deserves a deeper look in my opinion. Let me rephrase it as: “Discussions of AI sometimes end where they have serious implications for real life.” Especially if you do not enjoy entertaining the thought of a police state and increased surveillance, you should be worried if respected rational essayists come to conclusions that include them as an option. Closing your case when confronted with possible results from a chain of argumentation won’t make them disappear. And a police state, to stay with the example, is either an issue for almost everybody (if it comes into existence) or for nobody. Hence, this is detached from, and not about, my personal values.
Let me rephrase it as: “Discussions of AI sometimes end where they have serious implications for real life.”
I agree, that would be a bad thing.
Closing your case when confronted with possible results from a chain of argumentation won’t make them disappear.
Of course not, but given my values and my estimates of how likely certain future scenarios are, I have already come to certain conclusions. For them to change, either the values or the probabilities have to change. I find it unlikely that my values will change as a result of eschatological discussions on the ’net, and the discussions about the probabilities of Skynet FOOMing can be had (and probably should be had) without throwing the police state into the mix.
In general, I don’t find talking about very specific scenarios in the presence of large Knightian uncertainty to be terribly useful.
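To spell out what I mean by “either the values or the probabilities have to change”, here is a toy expected-utility sketch in Python. Every number in it is invented purely for illustration; the only point is that the ranking of options is a function of the utilities (values) and the scenario probabilities, so the conclusion can only move if one of those moves.

```python
# Toy expected-utility comparison (every number here is invented for illustration).
# Which option comes out on top depends only on the utilities (values) and the
# probabilities, so my conclusion can only change if one of those changes.

options = {
    # option: (P(catastrophe | option), utility if catastrophe, utility if not)
    "status quo + safety research": (0.05, -1000, 10),
    "global police state":          (0.02, -1000, -200),
}

def expected_utility(name):
    p, u_bad, u_good = options[name]
    return p * u_bad + (1 - p) * u_good

for name in options:
    print(f"{name}: {expected_utility(name):.1f}")
# status quo + safety research: 0.05*-1000 + 0.95*10   = -40.5
# global police state:          0.02*-1000 + 0.98*-200 = -216.0
# With my numbers the police state loses badly; someone with different values
# or different probability estimates could get the opposite ranking.
```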
None of us could “enforce a police state”. It’s barely possible even in principle, since it would need to include all industrialized nations (at a minimum) to have much payoff against AGI risk in particular. Worrying about “respected rational essayists” endorsing this plan also seems foolish.
“Surveillance” has similar problems, and your next sentence sounds like something we banned from the site for a reason. You do not seem competent for crime.
I’m trying to be charitable about your post as a whole to avoid anti-disjunction bias. While it’s common to reject conclusions if weak arguments are added in support of them, this isn’t actually fair. But I see nothing to justify your summary.