While this is true (it is Katja Grace’s Doomsday argument in a nutshell), it doesn’t take into account the possibility of simulations. But most paperclippers will create many instrumental simulations, and in that case we are in one.
“But most paperclippers will create many instrumental simulations,”
I don’t see this. They would solve science and almost certainly would not make use of biological processes, and so would have no need to simulate us. The wisdom of nature would offer them nothing of value.
Each AI needs to create at least several million simulations in order to estimate the distribution of other AIs in the universe and their most probable goal systems. It will probably model only part of the ancestor history (something like only LessWrong members).
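A rough way to see where “several million” comes from: if some goal systems occur with frequency on the order of 0.1%, the standard error of a frequency estimate is sqrt(p(1-p)/n), so n in the millions is needed before the rare categories are resolved at all. A minimal sketch, in which the goal-system categories and their true frequencies are invented purely for illustration:

import math
import random

# Toy model: possible goal systems of other AIs, with invented frequencies.
goal_systems = ["paperclips", "survival", "aligned", "exotic"]
true_probs = [0.70, 0.25, 0.049, 0.001]  # pure assumptions for illustration

def estimate_distribution(n):
    """Sample n simulated civilizations and estimate goal-system frequencies."""
    counts = {g: 0 for g in goal_systems}
    for _ in range(n):
        counts[random.choices(goal_systems, weights=true_probs)[0]] += 1
    return {g: c / n for g, c in counts.items()}

for n in (10_000, 1_000_000):
    est = estimate_distribution(n)
    se = math.sqrt(0.001 * 0.999 / n)  # standard error for the rarest category
    print(n, round(est["exotic"], 5), f"standard error ~ {se:.0e}")

At n = 10,000 the 0.1% category carries roughly 30% relative error; at n = 1,000,000 it drops to about 3%, which is the intuition behind “at least several million”.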
Excellent point. I agree. So the more we talk about AIs, the greater our minds’ measure? My young son has the potential to be an excellent computer programmer. The chance that your theory is true should raise the odds that he will end up working on AI, because AIs will make more simulations involving me if my son ends up working on creating AI.
I think that ultimate reality is more complex, and something like each mind naturally evolving into maximum measure (in its own branch of the universe). I would need to write a long and controversial post to show it, combining ideas from anthropics, simulation, and quantum immortality.
In short: if QI works, the most probable way for me to become immortal is to become a strong AI by self-upgrade. And the fact that I find myself near such a possibility is not a coincidence, because measure is not evenly distributed between observers: more complex and conscious observers are more likely. (It is more probable to find oneself a human than an ant.) This argument itself has two versions: linear, and (less probable) quantum. Some people at MIRI have spoken about the same ideas informally, so now I believe that I am not totally crazy )))
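To make the weighting concrete, here is a toy calculation; the populations are rough orders of magnitude, and the complexity weights are invented assumptions purely for illustration. Under uniform measure per observer you should expect to be an ant, but under complexity-weighted measure the expectation flips toward humans:

# Toy anthropic weighting. Populations are rough orders of magnitude;
# the complexity weights are invented assumptions for illustration.
observers = {
    "ant":   (1e15, 1.0),  # (population, complexity weight)
    "human": (1e10, 1e7),
}

total_uniform = sum(pop for pop, _ in observers.values())
total_weighted = sum(pop * w for pop, w in observers.values())

for name, (pop, w) in observers.items():
    print(f"{name}: uniform {pop / total_uniform:.5f}, "
          f"weighted {pop * w / total_weighted:.5f}")

With these made-up weights, being human gets ~99% of the measure despite ants outnumbering humans by five orders of magnitude.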
I had exactly the same insight as James_Miller a couple of days ago. Are you sure this is Grace’s Doomsday argument? Her reasoning seems to be rather along the lines that it is more likely that we are experiencing a late Great Filter (argued via SIA, which I’m not familiar with). The idea here is rather that for life to be likely to exist for a prolonged time there has to be a late Great Filter (like space travel being extremely difficult, or UFAI), because otherwise paperclippers would quickly conquer all of space (at least in universes like ours, where all points in space can in principle be travelled to).
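For what it’s worth, the SIA step can be shown in a toy calculation: SIA weights each hypothesis by the number of observers in our situation. The civilization counts below are invented solely to show the structure of the argument:

# Toy SIA-Doomsday update. Civilization counts are invented;
# only the reweighting structure follows the SIA argument.
hypotheses = {
    "early filter": (0.5, 10),      # prior, civs reaching our stage
    "late filter":  (0.5, 10_000),  # many civs reach our stage, then die
}

# SIA: posterior proportional to prior * number of observers at our stage.
norm = sum(prior * n for prior, n in hypotheses.values())
for name, (prior, n) in hypotheses.items():
    print(f"{name}: posterior {prior * n / norm:.3f}")

The late-filter world contains far more observers at our stage, so SIA concentrates the posterior there (~0.999 vs ~0.001 here), which is why it pushes the Great Filter into our future.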
Yes, I now see the difference: “where life develops and there is a great filter that destroy civilizations before paperclip maximizers get going.”
But I understand it to mean that the Great Filter is something that usually happens during a civilization’s technological development, before it creates AI. For example, nuclear wars and bio-catastrophes are so likely that no civilization survives until the creation of strong AI.
It doesn’t contradict Katja’s version, which only claims that the Great Filter is in the future. It is still in the future: https://meteuphoric.wordpress.com/2010/03/23/sia-doomsday-the-filter-is-ahead/