Doesn’t that leave out a very significant term in the equation—the number of attempts at significant technological gain we get?
You mean succeed at? Yes, and that problem applies just the same to Katja_Grace’s use of the SIA to predict a future filter.
Perhaps it’s because I couldn’t find “James Miller,” but the tech gain argument seems comparatively underdetermined. I did mean “how many attempts we get,” as in “most attempts will fail, but if we are allowed 3^^^3 attempts at a significant technological gain, we will expect to advance.” I think you need some sort of prior distribution over the number of attempts to make it analogous to SIA doomsday.
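To make that concrete, here is a minimal sketch of why the conclusion hinges on the prior over the number of attempts. The per-attempt success probability and the two toy priors below are my own illustrative assumptions, not anything from the original argument:

```python
# Minimal sketch: whether "we will expect to advance" depends on the
# prior over the number of attempts n, not just the per-attempt odds.
# Both p and the two priors below are illustrative assumptions.
p = 1e-6  # assumed probability that any single attempt succeeds

def p_at_least_one(n: int, p: float = p) -> float:
    """P(at least one success in n independent attempts)."""
    return 1 - (1 - p) ** n

# Two toy priors over n: most mass on "few" vs. "many" attempts.
priors = {
    "few attempts":  {10**3: 0.9, 10**9: 0.1},
    "many attempts": {10**3: 0.1, 10**9: 0.9},
}

for name, prior in priors.items():
    marginal = sum(w * p_at_least_one(n) for n, w in prior.items())
    print(f"{name}: P(advance) ≈ {marginal:.3f}")
```

With the same per-attempt odds, the marginal probability of advancing swings from roughly 0.1 to roughly 0.9 purely because the prior over the number of attempts changed, which is why the argument stays underdetermined without one.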
And you need a similar prior distribution for Katja_Grace’s SIA argument, which makes it underdetermined as well. (My “no anthropic reasoning” paradigm is really starting to pan out!)
James Miller is the author of this top-level post.
Thanks, that underscores my difficulty finding relevant details.
I think you’re saying the relevant detail here is the applicability of anthropic reasoning to the universe we actually live in: Actually using the island argument doesn’t help us learn about the real world as much as looking up historical data about islands, and the SIA doomsday argument fails similarly in the face of real-world astronomy. Is this correct?