As in, AIs boosting human productivity might/should let us figure out how to make stuff safe as it comes up, so no need to be concerned about us not having a solution to the endpoint of that process before we’ve made the first steps?
I don’t expect it to be helpful to block individually safe steps on this path, though it would probably be wise to figure out what unsafe steps down this path look like concretely (which you’re doing!).
But yeah. I don’t have any particular reason to expect “solve for the end state without dealing with any of the intermediate states” to work. It feels to me like someone starting a chat application and delaying the “obtain customers” step until they support every language, have a chat architecture that could scale up to serve everyone, and have found a moderation scheme that works without human input.
I don’t expect that team to ever ship. If they do ship, I expect their product will not work, because I think many of the problems they encounter in practice will not be the ones they expected to encounter.