I don’t know whether the grim model in Eliezer’s interview is true or not.
…
If it’s false (alignment efforts are likely to work), then we need to know that.
I think this is a false dichotomy. Eliezer’s position is that AI alignment requires a “miracle” aka “positive model violation” aka “surprising positive development of unknown shape”. If that is false, it does not follow that alignment efforts are “likely to work”. They could still fail for ordinary, non-miraculous reasons. Civilization has failed to prevent many disasters that didn’t require miracles to prevent.