The argument is simple. If the future is filled with artificial intelligence of human origin, and if that AI is conscious, then any given observer should expect to be one of those AIs.
That’s a doomsday argument fallacy.
Could you explain where the fallacy is? I searched ‘doomsday argument fallacy’ and was taken to a Wikipedia page which doesn’t mention fallacies, and describes it as ‘a probabilistic argument’ attributed to various philosophers.

The argument also seems true to me prima facie, so I’d like to know if I’m making a mistake.
On that Wikipedia page, the section “Rebuttals” briefly outlines numerous reasons not to believe it.
Anthropic reasoning is in general extremely weak. It is also much easier than usual to accidentally double-count evidence, make assumptions without evidence, privilege specific hypotheses, or make other errors of reasoning, without the usual means of checking such reasoning.
I’ll check out that section.
Was there anything which led you to believe this that I could read? (About the weakness of anthropic reasoning, not about the potential errors humans attempting to use it could make; I agree that those exist, and that they’re a good reason to be cautious and aware of one’s own fallibility, but I don’t really see them as arguments against the validity of the method when used properly.)
I guess it’s not universally considered a fallacy. But think of it as waiting at a bus stop without knowing the schedule. By a similar argument (your random arrival time has the mean and median in the middle between two buses), the time you should expect to wait for the bus to arrive is the same as the time you have already been waiting. This is not much of a prediction, since your expectation value changes constantly, and the eventual bus arrival, if any, will be a complete surprise. The most you can act on is giving up and leaving after waiting for a long time, which isn’t really a possible action in either the doomsday scenario or when talking about AGI consciousness.
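To make the claim concrete, here is a quick numerical check under one common formalization (Gott’s ‘delta t’ argument, which the reasoning above resembles). The scale-invariant prior and the minute values are my assumptions for illustration, not part of the comment:

```python
import random
import statistics

# Sketch: put a scale-invariant prior on the unknown interval I between
# buses, and assume you arrived at a uniformly random point within it.
# Conditioned on having already waited w with no bus, the posterior density
# of I is proportional to 1/I^2 on [w, infinity), which can be sampled
# directly as I = w / u with u uniform on (0, 1].

def median_remaining_wait(w, n=100_000):
    """Median remaining wait I - w under the posterior sketched above."""
    samples = [w / (1.0 - random.random()) - w for _ in range(n)]
    return statistics.median(samples)

for w in [5, 20, 60]:
    print(f"waited {w:>2} min -> median remaining wait ~ {median_remaining_wait(w):.1f} min")
```

This prints roughly 5, 20 and 60: the median remaining wait always matches the time already waited, while the mean is infinite under this prior, which is one way to cash out ‘your expectation value changes constantly’.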
I’m not sure I see the relevance of the bus example to anthropic reasoning. Below I explain why (maybe I spent too long on this; ended up hyperfocusing). Note that all uses of ‘average’ and ‘expectation’ are in the technical, mathematical sense.
By a similar argument (your random arrival time has the mean and median in the middle between two buses), the time you should expect to wait for the bus to arrive is the same as the time you have already been waiting

If the ‘random arrival time has the mean in the middle between two buses’, then one should expect a wait equal to the remaining wait time in that average situation, i.e. half the interval between buses, not a wait that tracks the time one has already been waiting.
One could respond that we don’t know the interval between buses, and thus don’t know the remaining wait time; but this does not seem to be a reason to expect the bus to arrive only after the time you’ve been waiting has doubled (from the view of either anthropic or non-anthropic reasoners).
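A toy calculation of the known-interval case (the T = 20 figure is mine, purely for illustration):

```python
import random

# Buses run every T = 20 minutes and you arrive at a uniformly random
# moment, so your wait on arrival is uniform on [0, T] with mean T/2.
# Conditioning on having already waited w without a bus leaves the
# remaining wait uniform on [0, T - w], with mean (T - w) / 2.

T = 20.0

def mean_remaining(w, n=200_000):
    """Mean remaining wait, given no bus in the first w minutes."""
    waits = [r - w for r in (random.uniform(0, T) for _ in range(n)) if r > w]
    return sum(waits) / len(waits)

for w in [0, 5, 10, 15]:
    print(f"waited {w:>2} min -> mean remaining ~ {mean_remaining(w):.1f} min")
# ~10.0, 7.5, 5.0, 2.5: the expectation shrinks as you wait; it never
# grows to match the time already waited.
```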
The bus example has some implicit assumptions about the interval between events (i.e., one assumes that buses operate on the timescales of human schedules). In the odd scenario where one was clueless about when an event would happen, and only knew (a) it would eventually happen, (b) it happened at least once before, and (c) it’s happening on an interval, then one would have no choice but to reason that the event could have happened most recently at any point in the past, and thus could happen next at any point in the future.
Or, more precisely, given that the event could not have first happened earlier than the starting point of the universe, an anthropic reasoner could reason that the average observer will exist at the mid-point between (a) the average expected time of the most recent event, half-way between when the universe began and now, and (b) when the event will next happen. Their ‘expectation’ would still shift forwards as time passes (correctly from their perspective; it would seem like they were updating on further evidence about the times at which the event has not happened), but it would not start out equal to the time since they first started waiting.
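A small Monte Carlo version of this scenario (the recurring-event model, the log-uniform period prior, and the 1–1000 year range are my additions, made only so the point can be checked numerically):

```python
import random

# Sketch of the 'clueless' scenario: the event recurs with an unknown
# period (log-uniform prior over 1..1000 years) and the observer starts
# watching at a uniformly random phase of the cycle. After watching for
# w years with no event, what is the mean remaining wait?

def mean_remaining(w, n=300_000):
    total, count = 0.0, 0
    for _ in range(n):
        period = 10 ** random.uniform(0, 3)      # log-uniform over [1, 1000]
        remaining = random.uniform(0, period)    # uniform phase at the start
        if remaining > w:                        # no event in the first w years
            total += remaining - w
            count += 1
    return total / count

for w in [0, 1, 10, 100]:
    print(f"watched {w:>3} yr -> mean remaining wait ~ {mean_remaining(w):6.1f} yr")
```

At w = 0 the mean remaining wait is already around 70 years (half the mean period under this prior) rather than zero, so the expectation does not start out equal to the time spent waiting, though it does drift upward as the watch continues without the event, matching the description above.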
This still wouldn’t seem quite analogous to real-world anthropic reasoning to me, though, because in reality an anthropic reasoner doesn’t expect the number of observers to be evenly distributed over their range. There are a few possible distributions of the number of observers over time which we consider plausible (some of which are mentioned in the original post), and in none of these distributions is the average in the center as it is with buses.
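For instance (illustrative numbers of mine, not from the post): if the number of observers grows tenfold over the relevant span instead of staying flat, the typical observer sits well past the temporal midpoint:

```python
# Compare where the 'average' observer lives under a uniform distribution
# of observers over time versus exponential (10x) growth.

STEPS = 10_000
times = [i / STEPS for i in range(STEPS + 1)]   # normalized timeline: 0 = start, 1 = end

for name, density in [("uniform", [1.0] * len(times)),
                      ("exponential (10x growth)", [10.0 ** t for t in times])]:
    total = sum(density)
    mean_pos = sum(t * d for t, d in zip(times, density)) / total
    running, median_pos = 0.0, None
    for t, d in zip(times, density):
        running += d
        if running >= total / 2:
            median_pos = t
            break
    print(f"{name:26s}: mean observer at t={mean_pos:.2f}, median at t={median_pos:.2f}")
# uniform: both at 0.50; exponential: mean ~0.68, median ~0.74.
```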