This link was posted by someone else but initially downvoted; I opened it and read it, and in my view it has a lot of good commentary, even though it's too harsh for LessWrong. I'd suggest others read it thoughtfully. It isn't friendly, and people should go in understanding that it's kind of a thunderdome generated by people's emotive responses, but a lot of it is pretty reasonable thunderdome commentary. If you're the type who can strip thunderdome feedback of its emotive content, these criticisms contain some things I think OP could benefit from pondering seriously, even if they ultimately disagree. Keep in mind that most commenters operate on a mix of simulacrum levels 1, 2, and 3 per word, with very little 4: no comment is meant 100% literally, but neither is it meant entirely emotively/socially. All those warnings and framings aside, I somewhat agree with several posts there. I won't say which, and I don't mean to imply that I agree with the simulacrum-3 emotional message of rejectability, as I consider simulacrum 3 fundamentally invalid and always ignorable (though I often fail at ignoring it).
I'd also like to do something like this myself, so I'll be pondering how to integrate the critiques and avoid the pitfalls they see.
My impression was that the Reddit thread didn't bring up anything people don't already know, but one of the strengths of LessWrong is that it's socially permissible not to sneer at weird things if they actually make sense.
Like, obviously hiring someone to do nothing but sit behind you gives off bad vibes. (The post even acknowledged this briefly, sort of.) But it could also genuinely be a really good idea. So if we can't do it because of the vibes, that's a bad thing!
I think the hackernews comment section, though still somewhat emotionally charged, is of substantially better quality.
Also, I responded to some comments/questions there.
I clicked the link and, ex post, think that was a bad idea. My attempted charitable reading of the Reddit comments yielded significantly less constructive information than ChatGPT would have provided.
I suspect that rationalists engaging with this form of content harms the community a non-trivial amount.
Interesting. If the same analysis can be done with ChatGPT, I'd be curious to hear how you'd frame the question, and I'd do it that way consistently.
Can you say more about how it causes harm? I’d like to find a way to reduce that harm, because there’s a lot of good stuff in this sort of analysis, but you’re right that there’s a tendency to use extremely spiky words. A favorite internet poster of mine has some really interesting takes on how it’s important to use soft language and not demand people agree, which folks on that subreddit are in fact pretty bad at doing. It’s hard to avoid it at times, though, when one is impassioned.
You can give ChatGPT the job posting and a brief description of Simon's experiment, and then just ask it to provide critiques from a given perspective (e.g., "What are some potential moral problems with this plan?").
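For anyone who'd rather script that framing than paste it into the chat interface, here's a minimal sketch using OpenAI's Python client. The model name and the placeholder texts are my assumptions, not part of the original comment; you'd supply the actual job posting and experiment description yourself.

```python
# Minimal sketch: ask a chat model for perspective-framed critiques of the plan.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

job_posting = "..."          # paste the job posting here
experiment_summary = "..."   # brief description of Simon's experiment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model choice; any capable chat model works
    messages=[
        {"role": "system", "content": "You are a thoughtful, candid critic."},
        {
            "role": "user",
            "content": (
                f"Job posting:\n{job_posting}\n\n"
                f"Experiment description:\n{experiment_summary}\n\n"
                "What are some potential moral problems with this plan?"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

Swapping out the final question lets you pull critiques from other perspectives (practical, reputational, etc.) in the same way.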
Ah, I see. Yeah, that's solid and makes sense.
I have never clicked on a link to sneerclub and then been glad I did so, so I’ll pass.
Hmm. I'd like to change that by trying to build tools or skills for extracting value from high-conflict contexts like that subreddit. I'm consistently glad to have read it, though unhappy in the moment until I can get their attitude out of my head, and it often takes me a while of meditating to integrate what I think of their takes. E.g., I think their critique here is expressing worry about worker treatment.
They are consistently negative, in a way that means you can only rely on them for direction (edit: as in, the relative direction of their critique on a given post compared to their other critiques), not magnitude. But those directions very often contain insight that was missing from the LW perspective, and I think that's the case here.