It seems you agree with Viliam: see the second paragraph below.
For the obvious reasons I don’t think you can find selfless and competent human rulers to make this really work. But conditional on the possibility of creating a Friendly superintelligent AI… sure.
Although calling that “communism” is about as central an example as calling the paperclip maximizer scenario “capitalism”.
That is completely irrelevant to debates about AI.
But anyway, I object to the premise being realistic. Humans run on “corrupted hardware”, so even if they start out honest, competent, rational, and well-meaning, that usually changes very quickly. In the long term, they also get old and die, so what you would actually need is an honest and competent elite group, able to raise and filter a next generation that would be at least equally honest, competent, rational, well-meaning, and skilled at raising and filtering the next generation for the same qualities.
In other words, you would need a group of rulers enlightened enough to impartially and precisely judge whether their competitors are equally good or somewhat better on the relevant criteria, and in that case to voluntarily transfer their power to the competitors. -- Which goes completely against what evolution teaches us: that if your opponent is better than you, you should use your power to crush him, preferably immediately, while you still have the advantage of power, and before other tribe members notice his superiority and start offering to ally with him against you.
Oh, and this perfect group would also need to be able to overthrow the current power structures and get themselves into positions of power, without losing any of its qualities in the process. That is, they would have to be competent enough to overthrow an opponent with orders of magnitude more power (imagine someone who owns the media and police and army and secret service, and can also use illegal methods to kidnap your members, torture them to extract their secrets, and kill them afterwards), without having to compromise on their values. So, in addition, the members of this elite group must have perfect mental resistance against torture and blackmail, and be numerous enough that they can easily replace their fallen brethren and continue with the original plan.
Well… there doesn’t seem to be a law of physics that would literally prevent this, it just seems very unlikely.
With a less elite group, there are many things that can go wrong, and evolutionary pressures in favor of things going wrong as quickly as possible.
Fair enough; I just wanted to make it explicit that that question has basically nothing to do with anything else in the thread. I mean, Viliam was saying “so it might be a good idea to do such-and-such about superhumanly capable AI” and you came in and said “aha, that kinda pattern-matches to communism. Are you defending communism?” and then said oh, by the way, I’m only interested in communism in the case where there is no superhumanly capable AI.
But, well, trolls gonna troll, and you’ve already said trolling is your preferred mode of political debate.
Well, the kinda-sorta OP phrased the issue this way:
If the choice is between giving each human a 1⁄7000000000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve
...and that set the tone for the entire subthread :-P
It seems you agree with Viliam: see the second paragraph below.
Right, but I am specifically interested in Viliam’s views about the scenario where there is no AI, but we do have honest and competent rulers.