Got it. Believe it or not, I am trying to figure out the rules (which are radically different from a number of my initial assumptions), not just trying to be a pain in the ass.
I’ll cool it on the top-level posts.
Admittedly, a lot of my problem is that there is either a really huge double standard or I’m missing something critical. To illustrate: Kingfisher’s comment says, “Something is clear if it is easily understood by those with the necessary baseline knowledge.” Elsewhere, my posts are considered very clear by people with less baseline knowledge. If my post were logically incorrect to someone with more knowledge, then they should be able to dissect it and get to the root of the problem. Instead, what I’m seeing is a tremendous number of strawmen. The lesson seems to be “If you don’t go slow and you fail to rule out every single strawman that I can possibly raise, I will refuse to let you go further (and I will do it by insisting that you have actively embraced the strawman).” Am I starting to get it, or am I way off base?
Note: I am never trying to insult (except for one ill-chosen all-caps response). But the community seems to be acting against its own stated goals, as I perceive them. Would it be fair to say that your expectations (and apparently even your goals) are not clear to new posters? (Not newcomers: I have read and believe I grok all of the Sequences, to the point that virtually any link that gets pointed to, I’ve already seen.)
One last comment. At the top of discussion posts, it says “This part of the site is for the discussion of topics not yet ready or not suitable for normal top-level posts.” That is what led me to believe that posting a couple of posts that I obviously considered ready for normal prime time (i.e., venues other than LessWrong) wouldn’t be a problem. I am now being told that it is a problem, and I will abide. But can you offer any clarification? Thanks.
Kingfisher’s definition of clarity is actually not quite right. In order to be clear, you have to carve reality at the joints. That’s what the problem was with the Intelligence vs. Wisdom post; there wasn’t anything obviously false, at least that I noticed, but it seemed to be dividing up concept space in an unnatural way. Similarly with this post. For example, “selfish” is a natural concept for humans, who have a basic set of self-centered goals by default, which they balance against non-self-centered goals like improving their community. But if you take that definition and try to transfer it to AIs, you run into trouble, because they don’t have those self-centered goals, so if you want to make sense of it you have to come up with a new definition. Is an AI that optimizes the happiness of its creator, at the expense of other humans, being selfish? How about the happiness of its creator’s friends, at the expense of humanity in general? How about humanity’s happiness, at the expense of other terrestrial animals?
Using fuzzy words in places where they don’t belong hides a lot of complexity. One way people respond to that is by coming up with things the words could mean and presenting them as counterexamples. You seem to have misinterpreted that as presenting strawmen; the point is not that the best interpretation is wrong, but that the phrasing was vague enough to admit some bad interpretations.
I would also like to add that detecting confusion, both in our own thoughts and in things we read, is one of the main skills of rationality. People here are, on average, much more sensitive to confusion than most people.
At the top of discussion posts, it says “This part of the site is for the discussion of topics not yet ready or not suitable for normal top-level posts.” That is what led me to believe that posting a couple of posts that I obviously considered ready for normal prime time (i.e., venues other than LessWrong) wouldn’t be a problem.
Just because the standards are properly lower here than on main LW doesn’t mean that you can post an arbitrary volume of arbitrarily ill-received posts without being told to stop.