It seems intuitive to me why that would be the case. And I’ve seen Eliezer make the claim a few times. But I can’t find an article describing the idea. Does anyone have a link?
Eliezer talks about this in "Do Earths with slower economic growth have a better chance at FAI?", e.g.
Moved this to answers, because it’s a direct answer.
Wonderful, thank you!
For an alternative view, you may find this response from an 80,000 Hours podcast interesting. In it, Paul Christiano appears to reject the claim that AI safety research is less parallelizable.
Moved this to answers, because it seems like a very useful answer.