Seems to be a lot of buzz about Katja_Grace’s most recent input on the Doomsday problem. Background: see my points in this discussion a while back, about the problem of counting observers, which applies to her filling of the boxes.
Regarding this post and Katja Grace's argument, I think James Miller's point can be generalized even further, making the reductio clear:
“Out of all attempts at significant technological gain in a civilization, few will succeed. We’re a civilization. Therefore, ours probably won’t advance.”
As far as I can tell, it’s analogous in all relevant respects, but feel free to prove me wrong.
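To make the structure of the analogy concrete, here is a toy sketch of the SIA-style update the argument relies on. The numbers, the hypothesis labels, and the sia_posterior helper are mine, purely for illustration; none of them come from Katja Grace's post.

```python
# Toy sketch of an SIA-style update (all numbers are made up for illustration).
# Two rival hypotheses about why we see no advanced civilizations:
#   "late_filter":  most civilizations reach our stage and then fail to advance.
#   "early_filter": most civilizations die out long before reaching our stage.
# SIA weights each hypothesis by how many observers in our situation it predicts.

def sia_posterior(prior, observers_like_us):
    """Posterior proportional to prior probability times the number of observers like us."""
    weights = {h: prior[h] * observers_like_us[h] for h in prior}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

prior = {"late_filter": 0.5, "early_filter": 0.5}
# A late filter implies many civilizations (and hence many observers) at our stage;
# an early filter implies very few ever get this far.
observers_like_us = {"late_filter": 1000, "early_filter": 10}

print(sia_posterior(prior, observers_like_us))
# -> roughly {'late_filter': 0.99, 'early_filter': 0.01}: the update lands on
#    "the filter is ahead of us", i.e. "ours probably won't advance".
```

The quoted reductio just swaps "where the filter sits" for "whether an attempt at significant technological gain succeeds"; the observer-weighting step that does all the work is the same.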
You use anthropic reasoning to get the claim “few will succeed”, which is a conclusion, not a premise.
“Few will succeed” is an observation, not a premise, though perhaps I should have said, “Few have been observed to succeed”.
What is unjustified is the conclusion that ours will not have some success that makes up for all the other failures, which is why I think Katja_Grace's reasoning (and its reductios) fails.
It’s not an observation. It’s an inference for which you need the anthropic principle. “Few have succeeded so far” is an observation. You’d need to observe the future to observe “Few will succeed”.
Doesn’t that leave out a very significant term in the equation—the number of attempts at significant technological gain we get?
You mean succeed at? Yes, and that problem applies just the same to Katja_Grace's use of the SIA to predict a future filter.
Perhaps it's because I couldn't find "James Miller," but the tech gain argument seems comparatively underdetermined. I did mean "how many attempts we get," as in "most attempts will fail, but if we are allowed 3^^^3 attempts at a significant technological gain, we will expect to advance." I think you need some sort of prior distribution for the number of attempts to make it analogous to SIA doomsday.
And you need a similar prior distribution for Katja_Grace’s SIA argument, which makes it underdetermined as well. (My “no anthropic reasoning” paradigm is really starting to pan out!)
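To make the underdetermination concrete, here is a minimal sketch of how the conclusion turns on the prior over the number of attempts N. The per-attempt probability and both priors below are placeholders I made up; neither argument specifies them.

```python
# With per-attempt success probability p and N attempts, the chance of at least one
# success is 1 - (1 - p)**N. Averaging over a prior on N shows how much the prior matters.

def p_eventual_success(p, prior_on_n):
    """prior_on_n maps a possible number of attempts N to its prior probability."""
    return sum(prob * (1 - (1 - p) ** n) for n, prob in prior_on_n.items())

p = 0.001  # "few will succeed": any individual attempt almost always fails

few_attempts = {10: 0.9, 100: 0.1}           # pessimistic prior on N
many_attempts = {10_000: 0.5, 100_000: 0.5}  # optimistic prior on N

print(p_eventual_success(p, few_attempts))   # ~0.02: "ours probably won't advance"
print(p_eventual_success(p, many_attempts))  # ~1.0: we expect to advance after all
```

Same observation ("few attempts succeed"), opposite conclusions, depending entirely on a prior that neither the reductio nor the SIA doomsday argument pins down.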
James Miller is the author of this top-level post.
Thanks, that underscores my difficulty finding relevant details.
I think you're saying the relevant detail here is the applicability of anthropic reasoning to the universe we actually live in: actually using the island argument doesn't help us learn about the real world as much as looking up historical data about islands, and the SIA doomsday argument fails similarly in the face of real-world astronomy. Is this correct?