I think it makes sense to use the word “reasonable” to describe someone who is taking actions that minimize total risk, even if those actions aren’t what they’d take in a different situation, and even if various actors had made mistakes to get them into this situation.
(Also note that I’m not talking about making wildly superintelligent AI, I’m just talking about making AGI; my guess is that even when you’re pretty rushed you should try to avoid making galaxy-brained superintelligence.)
I agree it seems good to minimize total risk, even when the best available actions are awful; I think my reservation is mainly that in most such cases, it seems really important to say you’re in that position, so others don’t mistakenly conclude you have things handled. And I model AGI companies as being quite disincentivized from admitting this already—and humans generally as being unreasonably disinclined to update that weird things are happening—so I feel wary of frames/language that emphasize local relative tradeoffs, thereby making it even easier to conceal the absolute level of danger.
Yep, that’s very fair. I agree it’s very likely that AI companies will continue to be misleading about the absolute risk posed by their actions.