“But most paperclippers will create many instrumental simulations,”
I don’t see this. They would have solved science and would almost certainly make no use of biological processes, and so would have no need to simulate us. The wisdom of nature would offer them nothing of value.
Each AI needs to create at least several million simulations in order to estimate the distribution of other AIs in the universe and their most probable goal systems. It will probably model only part of ancestor history (something like only LessWrong members).
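A rough sketch of why the number of simulations matters: if the AI estimates goal-system frequencies by sampling simulated histories, the standard error of each estimated frequency shrinks only as 1/sqrt(n), so rare goal systems take very many samples to pin down. A minimal Monte Carlo sketch, assuming entirely made-up goal-system frequencies:

```python
import random

# Hypothetical true frequencies of goal systems among AIs
# (made-up placeholder numbers, purely for illustration).
TRUE_GOAL_DIST = {"paperclipper": 0.30, "aligned": 0.05, "other": 0.65}

def simulate_one_history():
    """Stand-in for one instrumental simulation: which goal system
    the AI arising from this sampled history ends up with."""
    r = random.random()
    cumulative = 0.0
    for goal, p in TRUE_GOAL_DIST.items():
        cumulative += p
        if r < cumulative:
            return goal
    return "other"  # guard against floating-point shortfall

def estimate_distribution(n):
    """Estimate goal-system frequencies from n simulated histories."""
    counts = dict.fromkeys(TRUE_GOAL_DIST, 0)
    for _ in range(n):
        counts[simulate_one_history()] += 1
    return {goal: c / n for goal, c in counts.items()}

for n in (100, 10_000, 1_000_000):
    print(n, estimate_distribution(n))
# The standard error of each estimated frequency is ~sqrt(p(1-p)/n),
# so resolving a rare goal system's frequency to high precision can
# push n into the millions.
```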
Excellent point. I agree. So the more we talk about AIs, the greater our minds’ measure? My young son has the potential to be an excellent computer programmer. The chance that your theory is true should raise the odds that he will end up working on AI, because AIs will make more simulations involving me if my son ends up working on creating AI.
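To make the update explicit with hypothetical numbers: let W be “my son ends up working on AI” and E be my observed experiences, and suppose (purely as an assumption) that simulators run k times as many simulations of histories in which W holds. Then

\[
\frac{P(W \mid E)}{P(\neg W \mid E)} = \frac{P(E \mid W)}{P(E \mid \neg W)} \cdot \frac{P(W)}{P(\neg W)} = k \cdot \frac{P(W)}{P(\neg W)},
\]

so with a prior P(W) = 0.1 and k = 10, the posterior odds are 10 · (0.1/0.9) = 10/9, i.e. P(W | E) ≈ 0.53 instead of 0.1.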
I think that ultimate reality is more complex, and something like this: each mind naturally evolves toward its maximum measure (in its own branch of the universe). I would need to write a long and controversial post to show it, but it should combine ideas from anthropics, simulation, and quantum immortality.
In short: if QI works, the most probable way for me to become immortal is to become a strong AI by self-upgrade. And the fact that I find myself near such a possibility is not a coincidence, because measure is not evenly distributed between observers: more complex and conscious observers are more likely. (It is more probable to find oneself a human than an ant.) This argument itself has two versions: linear, and (less probably) quantum. Some people at MIRI have spoken about the same ideas informally, so now I believe that I am not totally crazy )))