We don’t know what we want from AI, beyond obvious goals like survival. Mostly I think in terms of a perfect tutor that would bring us to its own level of intelligence before turning itself off. But quite possibly we don’t want that at all. I recall some commenter here seemed to want a long-term ruler AI.
I am generally in favour of a long-term ruler AI; though I don’t think I’m the one you heard it from before. As you say, though, this is an area where we should have unusually low confidence that we know what we want.