I think training exclusively on objective measures has a couple of other issues:
For sufficiently open-ended training, objective performance metrics could incentivize manipulating and deceiving humans to accomplish the objective. A simple example would be training an AI to make money, which might incentivize illegal/unethical behavior.
For less open-ended training, I basically just think you can only get so much done this way, and people will want to use fuzzier “approval” measures to get help from AIs with fuzzier goals (this seems to be how things are now with LLMs).
I think your point about the footprint is a good one and means we could potentially be very well-placed to track “escaped” AIs if a big effort were put in to do so. But I don’t see signs of that effort today and don’t feel at all confident that it will happen in time to stop an “escape.”