Is there anything other than politics to protect us against the emergence of an evil super AI?
Isn’t contemporary political cynicism the real “Mind-Killer”?
Yes.
No.
Seriously. I’m not trying to fall into the common nerd mistake of saying “politics is dumb monkey status games for normals” here; political processes serve an important role in solving coordination problems and we ignore them at our peril. But that’s really not what the OP is getting at. It’s saying that enormous and nearly intractable biases surround ideology; that we systematically overestimate the importance of conventional partisanship; and that there’s value in structuring our arguments and social spaces to skirt these issues, or at least not to light ourselves on fire and run toward them while shouting COME AT ME, BRO.
All of these statements are true.
How does ignoring the Gordian knot problem solve it?
Avoiding, not ignoring. Ignoring the problem is what “dumb monkey status games etc.” points towards, and it almost invariably leads to expressing a wide variety of unexamined but basically partisan stances which are assumed to just be common sense (because, in the social context they come from, they are).
The failures following from this should be obvious.
Which problem is the construction of a super AI trying to solve, then?
That rather depends on who’s building it, doesn’t it?
If you’re talking about Eliezer et al.’s FAI concept, I get the impression that they’re mostly concerned with issues that aren’t presently politicized among anyone except perhaps bioethicists. It does entail solving some political problems along the way, but how is underspecified, and I don’t see a meaningful upside to viewing any of the relevant design problems through a partisan lens at this stage.
In any case, that’s (again) not what the OP is about.
I think I understand what you mean. But I maintain my hypothesis.