Can you be more specific about which LW consensus you're referring to? Recursive self-improvement and pessimism about AI existential risk? Or something else?