How many individuals will own those machines? How will their conflicts be settled?
Here is a million acres of land, 1000 miles deep, and here are some basic tools such as a nanofactory; now you are on your own. Farm and stay on good terms with your neighbors, or just ignore them! The "do no evil" rule still applies.
I suggest this one, for now.
And what exactly happens if someone breaks the rule?
Will someone stronger punish them? And what exactly happens if someone stronger breaks the rule? Is the majority together stronger than them, so the majority will punish them? And what happens if the majority breaks the rule by deciding that some minority does not deserve any rights? It could be any kind of minority, even one that does not exist today.

What if someone reproduces wildly? Exponential growth means that after a few generations even 1000 miles of land with nanobots won't be enough for them, and by that time they will be the majority. Also, if someone uses nanobots to strategically prepare for war, they can become stronger than the majority with other preferences, and they will also have the first-strike advantage. I'm not saying this situation is impossible, just that a mysterious answer is not enough. Maybe some solution will develop naturally, but maybe we really wouldn't like that solution.
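(A rough back-of-the-envelope sketch of the exponential-growth point. The doubling rate, the acres-per-person figure, and the founding population are my own illustrative assumptions, not claims from this thread; the only number taken from it is the million-acre allotment.)

    # Illustrative sketch: how quickly a fixed land allotment is exhausted
    # by a population that doubles every generation. All starting numbers
    # are assumptions chosen only to show the shape of the problem.

    allotment_acres = 1_000_000   # the "million acres" grant from the proposal
    acres_per_person = 1          # assume each descendant needs just one acre
    population = 2                # one founding couple
    generation = 0

    # the population doubles every generation (a deliberately crude assumption)
    while population * acres_per_person <= allotment_acres:
        population *= 2
        generation += 1

    print(f"Allotment exhausted after {generation} generations "
          f"({population:,} people).")
    # -> 19 generations, i.e. only a few centuries at ~25 years per generation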
I have a lot of ideas, but I am not very keen to share them here, where "FAI" and "CEV" are the answers to your questions.
I don't think they are good answers, but let's put that aside.
Not to mention that the whole AI effort at SIAI and on LW is at about ZERO level. I don't know of ONE member of this community who can say: I did THIS in the AI field. (Myself excluded, but I am hardly a member here.)
What do you mean by the 'zero' level? As in "you're such n00bs you haven't done anything"? (Because even I can say I did THIS in the AI field, even though I don't consider that fact especially significant.)
What did you do? Tell us, because that IS significant.
(Will reply tomorrow if I get time. I will, of course, be in the position of arguing how utterly insignificant something I spent several years of my life doing actually is. I’m sure something is backwards here. This explains why I was never cut out to be an academic.)
Let me guess: the first person or corporation that develops a super-intelligent AI becomes the master of the universe. At least until the moment a bug in the program removes them from control.
Your guess is a bit naive.
Where are you going to get all that land from?
The solar system and beyond.
Where are you going to get the energy and time to reach those places?
What is the mass of the smallest self-replicating unit that can be reconfigured after receiving instructions from another planet or solar system? I would venture to speculate that it doesn't take much.
In any case, after it arrives, wait a few years for the stuff to cook (self-replicate) and build infrastructure, and then just build the human bodies (or computers running ems) of the people you want to settle there.
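(A similarly crude sketch of the "wait a few years for the stuff to cook" step. Both the seed mass and the doubling time are hypothetical numbers I picked for illustration; nothing in the comment above specifies them.)

    # Illustrative sketch: how long a tiny self-replicating seed would take
    # to grow to infrastructure scale, given an assumed doubling time.
    # Neither the seed mass nor the doubling time comes from the discussion.
    import math

    seed_mass_kg = 1.0            # assumed mass of the seed replicator
    doubling_time_days = 30       # assumed time for the machinery to double itself
    target_mass_kg = 1e9          # ~a million tonnes of general-purpose industry

    doublings = math.log2(target_mass_kg / seed_mass_kg)
    years = doublings * doubling_time_days / 365

    print(f"{doublings:.0f} doublings, about {years:.1f} years")
    # -> ~30 doublings, roughly 2.5 years under these assumptions

Under assumptions anywhere near this range, the "few years to cook" figure is plausible; the hard part is the seed itself, not the waiting.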