What kinds of relationships to ‘utility functions’ do you think are most plausible in the first transformative AI?
How does the answer change conditioned on ‘we did it, all alignment desiderata got sufficiently resolved’ (whatever that means) and on ‘we failed, this is the point of no return’?
I’m talking about relationships like
or