I’ve settled reasonably firmly on the questions I want to focus on—how to make decisions when gathering the very information that would inform those decisions carries a risk of immense cost, especially when the stakes are high—and I’ve just finished working out the technical details of what my protagonist will have been doing up to the point where he can’t put off making those decisions any longer.
My rule of thumb for science fiction is that knowing what a character /can’t/ do matters more than knowing what they /can/, so having worked out all the technical details up to that point, I now have a firm grasp of the limits my protagonist will be under when he has to face those choices. If I’m doing things right, the next part of the design process will be less focused on the technology and more on decision theory, AI risk, existential risk, the Fermi Paradox and the Great Filter, and all that good, juicy stuff.
I’ll await the next iteration for further comments, then. Be sure to post it here!