Your first point is that the handbook is not likely to be useful for the purpose of helping reconstruction after a disaster, because the chance of a disaster being total enough to destroy technology, but not total enough to destroy humanity, is small. I agree completely—you have a very strong argument there.
However, you go on to argue that IF a technology-destroying-humanity-sparing disaster occurred, THEN technological societies would quickly establish contact, disperse knowledge, et cetera. In this after-the-disaster reasoning, you’re using our present notions of what is likely and unlikely to happen.
Reasoning like this past such a very, very unlikely occurrence seems fraught with danger. For such an unlikely occurrence to happen at all, there must be something significantly wrong with our current understanding of the world. If something like that happened, we would revise our understanding, not continue to rely on it. Anyone writing the handbook would have to plan for a wild array of possibilities.
Instead of focusing on the fact that the handbook is not likely to be used for its intended purpose, consider:
Might it have side benefits, spin-offs from its officially intended purpose?
Is it harmful?
Is it neat, cool, and fun?
If we assume that there is “something significantly wrong with our current understanding of the world” but don’t know anything more specific, we can’t come to any useful conclusions. There’s a huge number of things we could do that we think aren’t likely to be useful but where we might be wrong.
So is writing this book something we should do (as the original comment seemed to suggest)? No. But I agree it’s something we could do, that it’s very unlikely to be harmful, and that it’s neat and fun into the bargain.
With that said, I’m going back to working on my cool, neat, fun, non-humanity-saving project :-)