Just a small note: in a place where you said “instrumental truthseekers tend to,” I represent the countertrend. I’m an instrumental truthseeker, but I nevertheless still think it comes first and highest because I have a strong sense that all the other goods depend on it tremendously. This is in no way counter to your point but it seemed worth elevating to the level of explicitness—there are instrumental truthseekers who are ~as rabidly committed to truthseeking as the terminal ones.
Yeah. I think the most interesting thing I was trying to point at is that even if we’re all agreed on “rabid commitment to truthseeking”, figuring out what that means is still a bit hairy and context-dependent. (i.e. do you say the literal truth, or the words that’ll lead people to believe true things?)
A related issue (that I’m less sure of your take on) is that even if you have a clear set of injunctions against truth violations (e.g. never tell even a white lie, or [summarizing what I think of as your point from Models of Models], never tell a white lie without looking yourself in the mirror and clearly thinking through the ramifications of what you’re doing)...
...there’s still a long range of “how much you could go out of your way to be _extra_ full of integrity”. Do you list all your flaws up front when applying for a job? (Do you truthfully reveal flaws but sandwich them between a good first impression and a peak-end? Is the former more truthful than the latter, or is the choice random?)
That said, writing that last paragraph felt more like descending into a pointless rabbit hole than talking about something important.
I think my main thesis here was that if you point out things in a way that makes people feel criticized or otherwise causes them to disengage, you’re not necessarily upholding the Truth side of things, in addition to potentially imposing some cost on instrumental things like people’s motivation to get shit done.
What we do will become community norms, and I think each of those options favors a certain demographic.
If saying the literal truth is the social norm, it becomes easier for new people with new ideas to come in and challenge old ideas.
If they instead have to model what the community believes, they will have an uphill struggle: an immense amount of effort is required to understand the interests of everyone in the community and how to speak in a way that doesn’t raise any hackles. If they spend most of their time in a laboratory looking at results or hacking away at spreadsheets, they will not be optimised for this task. They may even be non-neurotypical and find it very hard to model other people sufficiently to convince them.
On the other hand, saying words that’ll lead people to believe true things will make people more effective in the real world; they can use more traditional structures and use social recognition as motivation. It favors the old guard, those with entrenched positions. Someone would need to remove things from the Sequences if the rationalist community were to go down this route, else new people would read things like the article on Lost Purposes and point out the hypocrisy.
Optimising for one or the other seems like a bad idea. Every successful young rabble-rousing contrarian becomes the old guard at some point (unless they take the Wittgenstein way out and become a teacher).
We need some way to break the cycle, I think. Something that makes the rationalist community different to the normal world.
In the case of communication on Less Wrong, I think it’s a (relatively) achievable goal to have people communicate their ideas clearly without having to filter them through “what will people understand?” (and if people don’t understand your ideas, you can resolve the issue by taking a step back and explaining things in more detail)
The sort of issue that prompted this post was more like “people in the EA community posting infographics or advertisements that are [probably true / true-ish but dependent on certain assumptions / not strictly true, where fleshing out all the details would dramatically reduce the effectiveness of the advertisement, not because of falsehoods but because then you’ve turned a short, punchy phrase into a meandering paragraph]”
i.e. Less Wrong is a place to talk about things in exhaustive depth, but as soon as your ideas start interfacing with the rest of the world you need to start thinking about how to handle that sort of thing
I think I’m worried that people who have to interface with the outside world can’t interact with a weird/blunt Less Wrong, as they need to keep up their normal/polite promotional behaviour. It’s not like the rest of the world can’t see in here, or that it will respect this as a place for weirdness/bluntness. There is a reputational price of “guilt by association” for those interfacing with the outside world.