I’d like more discussion of the claim that alignment research is unhelpful-at-best for existential safety because it accelerates deployment. It seems to me that alignment research has a couple of paths to positive impact which might offset that risk:
First, tech companies will be incentivized to deploy AI with slipshod alignment, which might then take actions that no one wants and which pose existential risk. (Concretely, I’m thinking of “out with a whimper” and “out with a bang” scenarios.) But the existence of better alignment techniques might legitimize governance demands, i.e. demands that tech companies not make products that do things that literally no one wants.
Second, single/single alignment might be a prerequisite for certain computational social choice solutions. E.g., once we know how to build an agent that “does what [human] wants”, we can then build an agent that “helps [human 1] and [human 2] draw up incomplete contracts for mutual benefit subject to the constraints in the [policy] written by [human 3]”. And slipshod alignment might not be enough for this application.
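To gesture at what I mean by the second point, here’s a toy sketch. Everything in it — the names, the numbers, the `aligned_preference_model` stand-in — is hypothetical and mine, not something from the post I’m responding to; it’s just meant to show how a single/single-aligned component could sit underneath a crude multi-principal contract search:

```python
# Toy sketch (hypothetical): a single/single-aligned component used as a building
# block for a multi-principal "draw up a contract for mutual benefit" mechanism.
# `aligned_preference_model` stands in for an agent that "does what [human] wants",
# i.e. it scores candidate contracts by that human's values. On top of it we search
# for contracts that (a) satisfy a third party's policy and (b) strictly improve on
# the status quo for both principals.

from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass(frozen=True)
class Contract:
    """A candidate agreement, described here by a single toy term."""
    revenue_split_to_h1: float  # fraction of the joint surplus going to human 1


def aligned_preference_model(is_human_1: bool) -> Callable[[Contract], float]:
    """Hypothetical stand-in for a single/single-aligned agent reporting how much a
    given human values a contract. Here each human simply values their share of a
    fixed joint surplus of 100."""
    def utility(c: Contract) -> float:
        share = c.revenue_split_to_h1 if is_human_1 else 1.0 - c.revenue_split_to_h1
        return 100.0 * share
    return utility


def policy_constraint(c: Contract) -> bool:
    """Stand-in for the [policy] written by [human 3]: neither party may be left
    with less than a 20% share."""
    return 0.2 <= c.revenue_split_to_h1 <= 0.8


def mutually_beneficial_contracts(
    candidates: Iterable[Contract],
    u1: Callable[[Contract], float],
    u2: Callable[[Contract], float],
    status_quo_u1: float,
    status_quo_u2: float,
) -> List[Contract]:
    """Return candidates that satisfy the policy and strictly improve on the status
    quo for both principals (a minimal 'mutual benefit' criterion)."""
    return [
        c for c in candidates
        if policy_constraint(c) and u1(c) > status_quo_u1 and u2(c) > status_quo_u2
    ]


if __name__ == "__main__":
    u1 = aligned_preference_model(is_human_1=True)
    u2 = aligned_preference_model(is_human_1=False)
    candidates = [Contract(revenue_split_to_h1=x / 10) for x in range(0, 11)]
    # Status quo: no deal, and each human keeps 30 units of value on their own.
    for deal in mutually_beneficial_contracts(candidates, u1, u2, 30.0, 30.0):
        print(deal, "u1 =", u1(deal), "u2 =", u2(deal))
```

The point of the sketch is that all the hard work is hidden inside `aligned_preference_model`: if that component is only slipshod-aligned, the “mutual benefit” filter is screening contracts against the wrong preferences, which is why single/single alignment looks like a prerequisite here rather than something we get for free.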
I’d believe the claim if I thought alignment were easy enough that any AI product which passes internal product review and doesn’t immediately trigger lawsuits would also be aligned enough not to end the world through alignment failure. Unfortunately, I don’t think that’s the case.
It seems like we’ll have to put special effort into both single/single alignment and multi/single “alignment”, because the free market might not provide either of them on its own.