As I said, there is no reason why all forms of the DA cannot be valid at the same time. They are just different methods of estimation, and their conclusions are approximate. But most of them agree that humanity will not survive for billions of years, and that conclusion is virtually certain to be true.
If we agree that the DA is valid, there may be ways to cheat it. For example, if the global population were reduced to one person, that person would escape the power of Carter's DA (but not Gott's version, since Carter's argument updates on birth rank, which would stop growing, while Gott's updates on elapsed time, which would not). This one person could be a superintelligent AI.
Another cheating option is a memory reset, or running a large number of simulations.
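To make Gott's version mentioned above concrete, here is a minimal sketch of his delta-t argument. The only assumption is that our observation point is uniformly distributed over the total lifetime of the phenomenon; the 200,000-year age of our species is just an illustrative input.

```python
def gott_interval(elapsed, confidence=0.95):
    """Return (min, max) future duration at the given confidence level,
    assuming our observation point is uniform over the total lifetime."""
    lo = elapsed * (1 - confidence) / (1 + confidence)
    hi = elapsed * (1 + confidence) / (1 - confidence)
    return lo, hi

# Homo sapiens is roughly 200,000 years old (illustrative figure):
print(gott_interval(200_000))  # ~ (5,128 years, 7,800,000 years)
```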
The difference is at the decision level. If extinction is inevitable, there is no reason to spend time on any prevention efforts, and it is better to start partying now, trying to extract as much utility as possible before the catastrophe.
But if there is a small probability of survival, such behavior would be maladaptive, as it would destroy that small chance for any civilization that could survive if it went all-in on survival.
So from the decision point of view, in this case we should ignore the almost inevitable extinction probability implied by the DA, and this is the point Stuart has been demonstrating in his post. But he didn't suggest this example, and it was not clear to me until now.
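Here is a toy sketch of that decision-level point. All the numbers are made up; the only thing that matters is that the utility of long-term survival is large enough that even a tiny residual survival probability dominates the expected value of partying.

```python
# Toy expected-value comparison, with illustrative (assumed) numbers.
p_survive = 0.001          # residual survival probability left by the DA
u_party = 1.0              # utility of partying until the catastrophe
u_survival = 1_000_000.0   # utility of long-term survival (astronomical)

ev_party = u_party                   # partying forfeits the small chance
ev_prevent = p_survive * u_survival  # prevention keeps it alive

print(ev_party, ev_prevent)  # 1.0 vs 1000.0: prevention dominates
```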
According to the argument here, we are at a nearly perfectly average point in the series of habitable planet-years, rather than early in the universe as it first appears. If so, this strongly suggests that the human race will go extinct on Earth, rather than ever moving anywhere else. I think this is probably what will happen.
And even if it does not, physics suggests even more strongly that the human race will go extinct sooner or later. I am fairly sure this will happen.
The DA just supports what we already know: there is no reason, except wishful thinking, to think that humans will not go extinct in the normal way.
Totally agree. But in my interpretation of ADT, the DA should not stop us from trying to survive, as there is still a small chance (in a comment above, Stuart said that this is not the DA but the "Presumptuous Philosopher" paradox).
I also use what I call the Meta-Doomsday Argument. It basically says that there is logical uncertainty about whether the DA, or any of its versions, is true, and thus we should assign some subjective probability Ps to the DA being true. Let's say it is 0.5.
Since the DA is itself a probabilistic argument, we should multiply Ps by the DA's probability shift, and we will still get a large update in the extinction probability as a result.
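One natural way to formalize this is a mixture over the two hypotheses "DA valid" and "DA invalid". The prior and DA-conditional extinction probabilities below are illustrative assumptions, not figures from the post:

```python
ps = 0.5            # subjective probability that the DA is valid
p_doom_prior = 0.1  # extinction probability ignoring the DA (assumed)
p_doom_da = 0.9     # extinction probability if the DA is valid (assumed)

# Mixture over "DA valid" / "DA invalid":
p_doom = ps * p_doom_da + (1 - ps) * p_doom_prior
print(p_doom)  # 0.5: still a large upward update from the 0.1 prior
```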
I agree with all this.