I like this post, but I think Redwood has varied some on whether control is for getting alignment work out of AIs vs. getting generally good-for-humanity work out of them and then pushing for a pause once they reach some usefulness/danger threshold (e.g., well before superintelligence).
[based on my recollection of Buck seminar in MATS 6]