I … recommend aiming humanity’s first AGI systems at simple, limited goals that end the acute risk period, and then ceding stewardship of the future to some process that can reliably do the “aim minds towards the right thing” thing.
What could that process possibly be? A congress of humanity’s best and brightest? An AI sovereign governed by a “Proto-CEV” system of values?