I know your question was probably just rhetorical, but to answer it regardless—I was confused in part because it would have made sense to me if he had said it would be “better” if AGI timelines were short.
Lots of people want short AGI timelines because they think the alignment problem will be easy, or otherwise aren’t concerned about it, and they want the perceived benefits of AGI for themselves/their family and friends/humanity (e.g. eliminating disease, eliminating involuntary death, abundance, etc.). And he could have just said “better” without really changing the rest of his argument.
At least the word “better” would make sense to me, even if, as you imply, it might be wrong and plenty of others would disagree with it.
So I expect I am missing something in his internal model that made him use the word “safer” instead of “better”. I can only guess at possibilities—like thinking that if AGI timelines are too long, the CCP might overtake the USA/the West in AI capabilities, and care even less about AGI safety when it matters most.
Good point.