I think this is a good description of what agent foundations is and why it might be needed. But the binary of ‘either we get alignment by default or we need to find the True Name’ isn’t how I think about it.
Rather, there's some unknown parameter, something like 'how sharply does the pressure towards incorrigibility ramp up, at what capability level does it start, and how strong is it?'
Setting this at 0 means alignment by default. Setting it higher and higher means we need prosaic alignment strategies that are progressively better at keeping systems corrigible and at detecting bad behaviour. And setting it at 'infinity' means we need to find the True Names/foundational insights.
My rough model is that there's an unknown quantity about reality which is roughly 'how strong does the oversight process have to be before the trained model does what the oversight process intended for it to do'. p(doom) mainly depends on whether the actors training the powerful systems have sufficiently powerful oversight processes.
Rohin:
Maybe one way of getting at this is to look at ELK (Eliciting Latent Knowledge): if you think the simplest, dumbest ELK proposals probably work, that's Alignment by Default. The harder you think prosaic alignment is, the more complex an ELK solution you expect to need. And if you think we need agent foundations, you think we need a worst-case ELK solution.