If your hypothesis smears probability over a wider range of outcomes than mine, while my theory of how alignment works lets me predict events more sharply, then observing those events constitutes a Bayesian update towards my theory and away from yours. Right?
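As a minimal sketch of that point (with made-up numbers, purely for illustration): in odds form, Bayes' rule says the posterior odds are the prior odds times the likelihood ratio, so a theory that concentrates probability on the outcome we actually observe gets boosted relative to one that smears its probability thin.

```python
# Hypothetical numbers, not from this discussion: if theory A puts most of its
# probability on the outcome we actually observe, while theory B spreads its
# probability over many outcomes, the observation shifts the odds toward A.

def posterior_odds(prior_odds_a_to_b: float,
                   p_obs_given_a: float,
                   p_obs_given_b: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds_a_to_b * (p_obs_given_a / p_obs_given_b)

# Both theories start at even odds (1:1).
# "Sharp" theory A assigns 0.6 to the observed outcome; "smeared" theory B assigns 0.1.
odds = posterior_odds(1.0, 0.6, 0.1)
print(f"Posterior odds A:B = {odds:.1f}:1")  # 6.0:1, a sixfold update toward A
```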
He didn’t say “anything can happen before AI explodes”. He said “I expect AI to look pretty great until it explodes.” And he didn’t say that his model about AGI safety generated that prediction; maybe his model about AGI safety generates some long-run predictions and then he’s using other models to make the “look pretty great” prediction.