But your original comment was referring to a situation in which we didn't carefully control the AI in our lab (by letting it have an arbitrarily long horizon). If we have lead time on other projects, I think it's very plausible to have a situation where we couldn't protect ourselves from our own AI if we weren't carefully controlling the conditions, but we could protect ourselves from it if we were carefully controlling the situation, and then, given our lead time, we're not yet at a big risk from other projects.