How does ignoring the Gordian knot problem solve it?
Avoiding, not ignoring. Ignoring the problem is what “dumb monkey status games etc.” points towards, and it almost invariably leads to expressing a wide variety of unexamined but basically partisan stances which are assumed to just be common sense (because, in the social context they come from, they are).
The failures following from this should be obvious.
Which problem is the construction of a super AI trying to solve then?
That rather depends on who’s building it, doesn’t it?
If you’re talking about Eliezer et al.’s FAI concept, I get the impression that they’re mostly concerned with issues that aren’t presently politicized among anyone except perhaps bioethicists. It does entail solving some political problems along the way, but how to do so is underspecified, and I don’t see a meaningful upside to viewing any of the relevant design problems through a partisan lens at this stage.
In any case, that’s (again) not what the OP is about.
I think I understand what you mean, but I maintain my hypothesis.