I’ve noticed that “already” seems to be a very important word in LW-related arguments and posts, e.g. if X were a good idea, people would already be doing it; if Y is a plausible end for the universe, it’s probably happened already.
I’m sure you already (heh) know this, but I figured I would say it lest passers-by conclude that these two arguments are analogous instances of the same argument. They are not.
“if X were a good idea, people would already be doing it” has a structure entirely different from that of “if Y is a plausible end for the universe, it’s probably happened already.” The former is reasoning about the optimization power of already-existing agents, while the latter uses intuitive anthropic reasoning based on the questionable premise that the universe tends to re-form after it is destroyed.
The latest SMBC is on the singularity, fun theory and simulations.
Liked it a lot.