Some top-level post topics that get much higher scrutiny:
1. Takes on AI
Simply because of the volume of such posts, the standards are higher. I recommend reading Scott Alexander’s Superintelligence FAQ as a primer to make sure you understand the basics, and make sure you’re familiar with the Orthogonality Thesis and Instrumental Convergence. I also recommend both Eliezer’s AGI Ruin: A List of Lethalities and Paul Christiano’s response post, so you understand what sort of difficulties the field is actually facing.
I suggest going to the most recent AI Open Questions thread, or looking into the FAQ at https://ui.stampy.ai/
2. Quantum Suicide/Immortality, Roko’s Basilisk and Acausal Extortion.
In theory, these are topics that have room for novel questions and contributions. In practice, they seem to attract people who are… looking for something to be anxious about? I don’t have great advice for these people, but my impression is that they’re almost always trapped in a loop: they try to think about the topic in enough detail that they don’t have to be anxious anymore, but that doesn’t work. They just keep finding new subthreads to be anxious about.
For Acausal Trade, I do think Critch’s Acausal normalcy might be a useful perspective that points your thoughts in a more useful direction. Alas, I don’t have a great primer that succinctly explains why quantum immortality isn’t a great frame, in a way that doesn’t have a ton of philosophical dependencies.
I mostly recommend… going outside, hanging out with friends and finding other more productive things to get intellectually engrossed in.
3. Needing help with depression or akrasia, or seeking medical advice for a confusing mystery illness.
This is pretty sad and I feel quite bad saying it. On one hand, I do think LessWrong has some useful stuff to offer here. On the other, too much focus on this has previously warped the community in weird ways – people with all kinds of problems come trying to get help, and we just don’t have the resources to help all of them.
For your first post on LessWrong, think of it more like you’re applying to a university. Yes, universities have mental health departments for students and faculty… but when we’re evaluating “does it make sense to let this person into this university”, the focus should be on “does this person have the ability to make useful intellectual contributions?” not “do they need help in a way we can help with?”
Maybe an FAQ for the intersection of #1, #2 and #3, “depressed/anxious because of AI”, might be a good thing to be able to link to, though?
Bit of a shame to see this one, but I understand it. It’s crunch time for AGI alignment and there’s a lot on the line. Maybe those of us interested in self-help can post our thoughts on some of the rationalsphere blogs, or start our own.
I got a lot of value out of the more self-help and theory of mind posts here, especially Kaj Sotala’s and Valentine’s work on multiagent models of mind, and it’d be cool to have another place to continue discussions around that.