We can imagine a handbook that is written to be useful for a broad spectrum of possible disastrous situations.
The handbook could be written for post-disaster survivors finding themselves in many possible situations. For example, your first bullet “No technological human societies survive” could be expanded to “(No|Few|Distant|Hostile) technological human societies survive”. Indeed, given both a civilization-destroying disaster and some survivors, the survivors might well be uncertain about which of these possibilities actually holds.
To some extent, the Long Now’s Rosetta project (to build sturdy discs inscribed with examples of many languages) is an example of this sort of handbook.
http://rosettaproject.org/
I agree a knowledge repository would be very useful for survivors right after the disaster. But I don’t think it’s probable that any scenario would leave a society with a reasonably stable level of technology and food production both existing and profiting from such a book.
BTW, the Rosetta project seems to be purely about describing languages so future people can understand them.
If a few distant technological societies survive, even just one with some reasonable shipping and industry, then I expect they will quickly establish contact with most of the world, if only to exploit natural resources and farming. Most or all technological economies today rely on many imports of minerals, food, and so on. And knowledge and technology would be dispersed more quickly with the assistance of this society than by means of such a book.
If a ‘hostile’ society survives—well, hostile towards whom? Towards all other, non-high-tech survivors? I don’t see this as the default attitude of a surviving society that’s the most powerful country left on Earth, so without knowing more I hesitate to try to empower whoever they’re hostile towards. What did you have in mind here?
Your first point is that the handbook is not likely to be useful for the purpose of helping reconstruction after a disaster, because the chance of a disaster being total enough to destroy technology, but not total enough to destroy humanity, is small. I agree completely—you have a very strong argument there.
However, you go on to argue that IF a technology-destroying-but-humanity-sparing disaster occurred, THEN technological societies would quickly establish contact, disperse knowledge, et cetera. In this after-the-disaster reasoning, you’re using our present notions of what is likely and unlikely to happen.
Reasoning like this beyond such an extremely unlikely occurrence seems fraught with danger. For such an unlikely occurrence to take place at all, something about our current understanding of the world must be significantly wrong. If something like that happened, we would revise our understanding, not continue to use it. Anyone writing the handbook would have to plan for a wild array of possibilities.
Instead of focusing on the fact that the handbook is not likely to be used for its intended purpose, consider:
Might it have side benefits, spin-offs from its officially-intended purpose?
Is it harmful?
Is it neat, cool, and fun?
If we assume that there is “something significantly wrong with our current understanding of the world” but don’t know anything more specific, we can’t reach any useful conclusions. There is a huge number of things we could do that we think aren’t likely to be useful but where we might be wrong.
So is writing this book something we should do (as the original comment seemed to suggest)? No. But I agree it’s something we could do, is very unlikely to be harmful, and is neat and fun into the bargain.
With that said, I’m going back to working on my cool, neat, fun, non-humanity-saving project :-)