And what exactly happens if someone breaks the rule?
Will someone stronger punish them? And what exactly happens if someone stronger breaks the rule? Is the majority together stronger than them, so the majority will punish them? And what happens if the majority breaks the rule by deciding that some minority does not deserve any rights? It could be any kind of minority, even one that does not exist today. What if someone reproduces wildly? Exponential growth means that after a few generations even 1000 miles with nanobots won't be enough for them; and by that time, they will be the majority. Also, if someone uses nanobots to strategically prepare for war, they can be stronger than a majority with other preferences, and will have the first-strike advantage too. I'm not saying this situation is impossible, just that a mysterious answer is not enough. Maybe some solution will develop naturally; but maybe we really wouldn't like that solution.
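To make the exponential-growth worry concrete, here is a back-of-the-envelope sketch in Python. Every number in it is my own hypothetical assumption, not something the comment specifies: a single founding individual, a population that doubles each generation, and a total allotment of one cubic mile per person inside a 1000-mile cube.

```python
# Back-of-the-envelope sketch of the exponential-growth point above.
# All numbers are hypothetical assumptions, not from the comment:
# one founder, doubling each generation, and one cubic mile per
# person inside a 1000-mile cube.

CAPACITY = 1000 ** 3  # 10^9 slots: one per cubic mile (assumed)

population = 1
generation = 0
while population <= CAPACITY:
    population *= 2
    generation += 1

print(f"Capacity of {CAPACITY:,} exceeded after {generation} generations")
# Output: Capacity of 1,000,000,000 exceeded after 30 generations
```

At roughly 25 years per generation, 30 doublings take only about 750 years, so under these assumptions a fixed spatial allotment is overwhelmed quickly on historical timescales.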
I have a lot of ideas, but I am not very keen to share them here, where "FAI" and "CEV" are the answers to your questions.
I think they aren't good answers, but let's put that aside.
Not to mention that the whole AI business at SIAI and on LW is at about ZERO level. I don't know of ONE member of this community who can say: I did THIS in the AI field. (Myself excluded, but I am hardly a member here.)
What do you mean by the "zero" level? As in "you're such n00bs you haven't done anything"? (Because even I can say I did a "THIS" in the AI field, even though I don't consider that fact especially significant.)
What did you do? Tell us, because that IS significant.
(Will reply tomorrow if I get time. I will, of course, be in the position of arguing how utterly insignificant something I spent several years of my life doing actually is. I’m sure something is backwards here. This explains why I was never cut out to be an academic.)
Let me guess: The first person or corporation that develops a super-intelligent AI becomes the master of the universe. At least until the moment when a bug in the program removes them from control.
Your guess is a bit naive.