Fwiw, I suspect that when Daniel says he thinks offsetting is useful and I say I want "the AI is able to do useful things" as a desideratum, we're drawing on similar intuitions, but this is entirely a guess that I haven't confirmed with Daniel.
Update: we discussed this, and came to the conclusion that these aren’t based on similar intuitions.