Calling both of these things a “catastrophe” seems to sweep that difference under the rug.
Sure, but just as it makes sense to say that a class of outcomes is “good” without every such outcome being maximally good, it makes sense to have a concept for catastrophes even if they’re not literally the worst things possible.
Which seems bound to happen even in the best case, where an FAI takes over.
Building a powerful agent that helps you get what you want doesn’t destroy your ability to get what you want. By my definition, that’s not a catastrophe.
This makes it sound as if a “catastrophe” is necessarily the worst thing possible and should be avoided at all costs. If an anti-aligned “evil” AI were about to be released with high probability, and you had a paperclip maximizer in a box, releasing the paperclip maximizer would be the best option, even though that moves the chance of catastrophe from high probability to near certainty.
Correct. Again, I don’t mean to say that any catastrophe is literally the worst outcome possible.