There is an important difference between “identifying this pill as not being ‘poison’ allows me to focus my uncertainty about what I’ll observe after administering the pill to a human (even if most possible minds have never seen a ‘human’ and would never waste cycles imagining administering the pill to one)” and “identifying this pill as not being ‘poison’, because if I publicly called it ‘poison’, then the manufacturer of the pill might sue me.”
What is that sentence supposed to tell me? It’s not clear whether that important difference is supposed to imply to the reader that one is better than the other. Given that there seems to be a clear value judgement in the others, maybe it does here?
Reading it leaves me, as a reader, having to construct an example of what you might be pointing at.
You might run standard tox tests and find that your mice are dead. Mice differ from humans, so you might want to avoid the term “poison”, in contrast to the general way people think about tox testing, because you don’t care about mice? Is a general critique of the way we do tox testing intended or not?
The part about most possible minds never having seen a human feels like a digression to me, made with words that are unnecessarily obscure (most people in society won’t understand what wasting cycles is about), when it would be quite easy to say that you care about humans more than mice.
Is the claim that it’s bad to use words in a way that conforms to the standards of a powerful institution that enforces certain expectations of what people can expect when they hear a certain word? Boo Brussels? Boo journals who refuse to publish papers that use words when community standards for when certain words should be used aren’t met?
To those people who proofread and apparently didn’t find an issue in that sentence: is it really necessary to mix all those different issues into a six-line sentence?
It’s not clear whether that important difference is supposed to imply to the reader that one is better than the other. Given that there seems to be a clear value judgement in the others, maybe it does here?
All three paragraphs starting with “There’s an important difference [...]” are trying to illustrate the distinction between choosing a model because it reflects value-relevant parts of reality (which I think is good), and choosing a model because of some non-reality-mapping consequences of the choice of model (which I think is generally bad).
words that are unnecessarily obscure (most people in society won’t understand what wasting cycles is about)
The primary audience of this post is longtime Less Wrong readers; as an author, I’m not concerned with trying to reach “most people in society” with this post. I expect Less Wrong readers to have trained up generalization instincts motivating the leap to thinking about AIs or minds-in-general even though this would seem weird or incomprehensible to the general public.
To those people who proofread and apparently didn’t find an issue in that sentence: is it really necessary to mix all those different issues into a six-line sentence?
It’s true that I tend to have a “dense” writing style (with lots of nested parentheticals and subordinate clauses), and that I should probably work on writing more simply in order to be easier to read. Sorry.
I do find myself somewhat confused about the hostility in this comment. It’s hard to write good things, and there will always be misunderstandings. Many posts on LessWrong are unnecessarily confusing, including many posts by Eliezer, usually just because it takes a lot of effort, time and skill to polish a post to the point where it’s completely clear to everyone on the site (and in many technical subjects achieving that bar is often impossible).
Recommendations for how to phrase things in a clear way seem good to me, and I appreciate them on my writing, but doing so in a way that implies some kind of major moral failing seems like it makes people overall less likely to post, and also overall less likely to react positively to feedback.
You seem to pose a model where a post is either saying good things or saying things unclearly in a way that’s easily misunderstood; a model whereby it’s not important to analyze which of the claims being made are wrong.
My first answer was pointing out statements in the post that I consider to be clearly wrong and important (claims that many people believe and that hold back intellectual progress on the topic). The response seemed to be along the lines of:
“I didn’t mean to imply that what I claimed is true (“Similarly, the primary thing when you take a word in your lips is your intention to reflect the territory, whatever the means”); I said it because it seems to send the right tribal signals, because it looks similar to what EY wrote.
Besides, the people in my tribe that I showed my draft to liked it.”
Defending the post as being tribally right instead of either allowing claims to be falsified or defending the claims on their merits feels to me like a violation of debate norms that raises emotional hostility.
I feel that it’s bad to assume by default that any disagreement is due to misunderstandings and not substance.
I do think that emotion is justified in the sense that if we get a lot of articles that are full of tribal signaling and attempts to look like EY posts but that endorse misconceptions, that would be problematic for LW in a way that posts that are simply low quality (because writing well is hard) wouldn’t be, and that wouldn’t trigger emotions.
After rereading the post a few times, I think you are just misunderstanding it?
Like, I can’t make sense of your top-level comment in my current interpretation of the post, and as such I interpreted your comment as asking for clarification in a weirdly hostile tone (which was supported by your first sentence being “What is that sentence supposed to tell me?”). I generally think it’s a bad idea to start substantive criticisms of a post with a rhetorical question that’s hard to distinguish from a genuine question (and probably would advise against rhetorical questions in general, but am less confident of that).
To me the section you quoted seems relatively clear, and makes a pretty straightforwardly true point, and from my current vantage point I fail to understand your criticism of it. I would be happy to try to explain my current interpretation, but would need a bit more help understanding what your current perspective is.
I have written multiple posts in this thread, and I wouldn’t expect you to make sense of the tone by treating this post in isolation.
In a way, it’s a straightforwardly true point to say that apples are significantly different from tomatoes. It’s defensibly true in a certain sense.
At the same time, if a reader wants to learn something from the statement and transfer the knowledge to another case, they need a model of what kind of significant difference is implied.
You might read the statement as being about how tomatoes are vegetables for tariff purposes or for cooking purposes, and how scientific taxonomy isn’t the only taxonomy that matters, but it’s very motte-and-bailey about that issue. The motte-and-bailey structure then makes it hard to falsify the claims.
Are you saying people should never casually make such claims about apples and tomatoes? I haven’t tried to parse your comments in detail; apologies if I’m misunderstanding. But they seem to imply a huge amount of friction on conversation that does not seem practical to me (i.e. only discuss things if you’re going to take the time to clarify the details of your model). The reason we have clusters and words and shorthand is that doing so is a lot of effort that most of the time isn’t worth it.
A model should generally be clear enough to be falsifiable. It might be okay for a paragraph not to expand an idea in enough detail for that, but when there’s a >3800-word essay about a model that avoids being falsifiable and is instead full of applause lights, I do consider that bad.