It seems Russell does not agree with what is considered the LW consensus. From 'Architects of Intelligence: The Truth About AI from the People Building It':
When [the first AGI is created], it’s not going to be a single finishing line that we cross. It’s going to be along several dimensions.
[...]
I do think that I’m an optimist. I think there’s a long way to go. We are just scratching the surface of this control problem, but the first scratching seems to be productive, and so I’m reasonably optimistic that there is a path of AI development that leads us to what we might describe as “provably beneficial AI systems.”
Can you be more specific about the LW consensus you're referring to? Recursive self-improvement and pessimism about AI existential risk, or something else?