Consider three types of universes: those where life never develops; those where life develops and there is no great filter, so paperclip maximizers quickly make it impossible for new life to develop; and those where life develops and there is a great filter that destroys civilizations before paperclip maximizers get going. Most observers like us will live in the third type of universe. And almost everyone who thinks about anthropics will live at a time close to when the great filter hits.
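To make the observer-counting explicit, here is a minimal toy sketch in Python; the universe counts and observers-per-universe figures are invented assumptions, chosen only to illustrate how the weighting works:

```python
# Toy observer-weighting across the three universe types.
# All counts below are invented assumptions, not estimates.

universe_types = {
    # name: (number of such universes, observers-like-us per universe)
    "life never develops": (1_000, 0),
    "no great filter (paperclippers spread fast)": (100, 1),
    "great filter before paperclippers": (100, 1_000),
}

total = sum(n * obs for n, obs in universe_types.values())
for name, (n, obs) in universe_types.items():
    print(f"{name}: {n * obs / total:.1%} of observers like us")

# Under these assumptions nearly all observers like us sit in the
# third type, living shortly before their filter hits.
```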
You are privileging your hypothesis: there are vastly more types of universes …
There are universes where life develops and civilizations are abundant, and all of our observations to date are compatible with the universe being filled with advanced civs (which probably become mostly invisible to us given current tech as they approach optimal physical configurations of near-zero temperature and tiny size).
There are universes like the above where advanced civs spawn new universes to gain god-like ‘magic’ anthropic powers, effectively manipulating/rewriting the laws of physics.
Universes in these latter two categories are more aggressive/capable replicators: they create new universes at a higher rate, so they tend to dominate any anthropic distribution.
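A quick sketch of why this matters anthropically: even a modest edge in universe-spawning rate compounds until the replicating lineage carries almost all of the measure. The rates below are arbitrary assumptions:

```python
# Compounding advantage of universe-spawning lineages.
# Both rates are arbitrary assumptions; only their ratio matters.

spawn_rate_baseline = 1.0    # non-spawning lineage, per generation
spawn_rate_replicator = 1.1  # civ-driven spawning lineage, per generation

baseline, replicator = 1.0, 1.0
for _ in range(100):
    baseline *= spawn_rate_baseline
    replicator *= spawn_rate_replicator

share = replicator / (baseline + replicator)
print(f"Replicator share of universes after 100 generations: {share:.6f}")
# A 10% per-generation edge leaves the replicators with ~99.99% of
# the universes, and hence of any universe-weighted anthropic measure.
```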
And finally, there are scenarios where the distribution over simulated observer-moments diverges significantly from the distribution over original observer-moments, which complicates these anthropic considerations.
For example, we could live in a universe with lots of civs, but ones that focus far more of their simulations on the origins of the first or earliest civs.
While this is true (it is Katja Grace’s Doomsday argument in a nutshell), it doesn’t take the possibility of simulations into account. But most paperclippers will create many instrumental simulations, and in that case we are probably in one.
“But most paperclippers will create many instrumental simulations,”
I don’t see this. They would have solved science, would almost certainly not make use of biological processes, and so would have no need to simulate us. The wisdom of nature would offer them nothing of value.
Each AI needs to create at least several million simulations in order to estimate the distribution of other AIs in the universe and their most probable goal systems. It will probably model only part of ancestor history (something like only LessWrong members).
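The force of this claim is the standard simulation-argument ratio: if each unsimulated history yields an AI that runs N instrumental simulations of it, simulated observer-moments outnumber original ones N to 1. A minimal sketch, taking the commenter’s several-million figure as the assumed N:

```python
# Bostrom-style ratio of simulated to original observer-moments.
# sims_per_ai is the commenter's assumed figure, not an estimate.

sims_per_ai = 5_000_000  # instrumental simulations each AI runs
originals = 1            # the one unsimulated history per AI

p_simulated = sims_per_ai / (sims_per_ai + originals)
print(f"P(this observer-moment is simulated) = {p_simulated:.7f}")
# With millions of simulations per AI, the chance that a given
# observer-moment is the original is vanishingly small.
```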
Excellent point. I agree. So the more we talk about AIs, the greater our minds’ measure? My young son has the potential to be an excellent computer programmer. The chance that your theory is true should raise the odds that he ends up working on AI, because AIs will make more simulations involving me if my son works on creating AI.
I think that ultimate reality is more complex: something like each mind naturally evolving toward maximum measure (in its own branch of the universe). I need to write a long and controversial post to show this; it would combine ideas from anthropics, simulation, and quantum immortality.
In short: if QI works, the most probable way for me to become immortal is to become a strong AI by self-upgrade. And the fact that I find myself near such a possibility is not a coincidence, because measure is not evenly distributed between observers: more complex and more conscious observers are more likely. (It is more probable to find oneself a human than an ant.) This argument itself has two versions: linear, and (less probably) quantum. Some people at MIRI have spoken about the same ideas informally, so now I believe that I am not totally crazy )))
I had exactly the same insight as James_Miller a couple of days ago. Are you sure this is Grace’s Doomsday argument? Her reasoning seems to be along the lines that we are more likely to be facing a late Great Filter (argued via SIA, which I’m not familiar with). The idea here is rather that for life to plausibly exist for a prolonged time there has to be a late Great Filter (such as space travel being extremely difficult, or UFAI), because otherwise paperclippers would quickly conquer all of space (at least in universes like ours, where every point in space can in principle be reached).
Yes, I now see the difference: “where life develops and there is a great filter that destroys civilizations before paperclip maximizers get going.”
But I understand it to mean that the great filter is something that usually happens during a civilization’s technological development, before it creates AI: for example, nuclear wars and bio-catastrophes are so likely that no civilization survives until the creation of strong AI.
That doesn’t contradict Katja’s version, which only claims that the Great Filter is in the future. It is still in the future either way. https://meteuphoric.wordpress.com/2010/03/23/sia-doomsday-the-filter-is-ahead/