The AI system should be aligned with all 7.8 billion humans, so that it doesn’t impose global catastrophic risks.
That’s the main one—once there is a superintelligent AI aligned with humanity as a whole, then /it/ can solve the lower-scale instances.
That said, there are a lot of caveats and contradictions to that too:
People have values about the values they ought to have that are distinct from the values they actually have—and we might want the AI to pay more attention to the former?
People’s values are often contradictory (a single person may do things they will later regret, some people explicitly value the suffering of others, and there are all kinds of other biases and inconsistencies)
It’s very unclear how the values should be generalized beyond the routine scenarios people encounter in life.
Should we weight everybody’s values equally (including little kids’)? Or should we assume that some people are better informed than others, have spent more time thinking about moral and ethical issues, and should be trusted more to represent the desired values?
and many more.