Just because I don’t currently know the details of the relevant bits of organizational science doesn’t mean somebody around here doesn’t already know them. Just because I can’t do the math as easily as I could for rocket science is no excuse to try to cheat how reality functions.
That objection is only valid for a story set in a time near our own. But, as you say:
However, by the time that happens, enough evidence about organizational science will have been gathered for the founding charter to have been amended into unrecognizability
and nobody can possibly know what their own field will look like hundreds of years in the future. I rest my case: handwave everything and concentrate on the story.
I believe I may have phrased that quoted part poorly. Perhaps: “… long before the copies diverge enough to want to split into completely separate groups, they would likely have learned enough about the current state of the art in organizational theory to have amended the charter from its initial, preliminary form into something quite different”. I didn’t mean to imply ‘hundreds of years’, just a set of individuals learning a field previously outside their area of expertise.
I still don’t understand why it has to be a charter instead of, say, an AI that can react to change and update itself on new information.
Because I am actually reasonably capable of creating some sort of charter that could actually exist, and of applying it to a scenario based on minor extrapolations of existing technologies that don’t require particularly fundamental breakthroughs (i.e., increased computing power; increased understanding of how neural cells work, such as is being fiddled with in the OpenWorm project; and increased resolution of certain scanning technologies). I wouldn’t know where to begin in even vaguely describing “an AI that can react to change and update itself on new information”, and if such a thing /could/ be written, it would nigh-certainly derail the entire scenario and make the multi-self charter completely irrelevant.
I’m just saying that a coordinating AI seems an obvious evolution; just yesterday one of my coworkers told me that machine-learning systems for automatically checking complex regulations are already used profitably. Anyway, if the charter itself is the focal point of the story, by all means delve into organizational science. Just don’t forget that, when writing science fiction, it’s very easy to descend into info-dumping.
I’ve been skimming some of my setting-idea notes, such as ‘algorithms replacing middle-managers’, and have realized that, at a certain point in the planned setting, you’ve highlighted an approach that is likely to be common among many other people. However, one of the main reasons for my protagonist’s choice to try relying on himselves is that AIs which optimize for easy-to-check metrics, such as profitability, tend not to take into account that human values are complex.
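To make that concrete with a deliberately silly toy example (all the names and numbers here are invented by me, and bear no resemblance to any real management AI): a system that ranks plans purely by the one column it can measure will happily pick the plan that scores worst on everything that column doesn’t capture.

```python
# Hypothetical business plans, scored on an easy-to-check metric (profit)
# and on an unmeasured "full complexity of human values" axis.
plans = [
    # (name, profit, unmeasured_human_value) -- made-up numbers
    ("ruthless-efficiency", 100, 10),
    ("balanced",             70, 70),
    ("values-first",         40, 95),
]

# The metric-optimizing manager sees only the profit column...
chosen = max(plans, key=lambda p: p[1])
print(chosen[0])  # -> "ruthless-efficiency"

# ...so the plan that best serves the unmeasured values loses by default.
best_for_values = max(plans, key=lambda p: p[2])
print(best_for_values[0])  # -> "values-first"
```

The point isn’t that the optimizer is malicious; it simply never sees the column it’s sacrificing.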
So there are likely going to be all manner of hyper-efficient, software-managed organizations that, in a straight fight, could out-organize my protagonist’s little personal co-op. Various copies of my protagonist, seeing the data, will conclude that the costs are worth the benefits, and leave the co-op to gain the benefits of said organizational methods. However, this will cause a sort of social ‘evaporative cooling’, so that the copies who remain in the co-op will tend to be the ones most dedicated to working towards the full complexity of their values. As long as they can avoid going completely bankrupt—in other words, as long as there’s enough income to pay for the hardware to run at least one copy that remains a member—then the co-op will be able to quietly chug along doing its own thing while wider society changes in various values-simplifying ways around it.
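That ‘evaporative cooling’ dynamic is easy to sketch as a toy simulation (entirely my own illustration; the dedication scores and the departure threshold are made up): when the members least dedicated to the complex values keep leaving for more efficient organizations, the average dedication of whoever remains climbs.

```python
import random

random.seed(42)

# Hypothetical "dedication to complex values" scores for 1000 copies,
# drawn uniformly from [0, 1).
copies = [random.random() for _ in range(1000)]

def evaporate(members, threshold):
    """Copies below the threshold judge the costs worth the benefits and leave."""
    return [d for d in members if d >= threshold]

before = sum(copies) / len(copies)
remaining = evaporate(copies, threshold=0.8)
after = sum(remaining) / len(remaining)

print(f"mean dedication before: {before:.2f}")  # roughly 0.5
print(f"mean dedication after:  {after:.2f}")   # roughly 0.9
```

The co-op shrinks, but what’s left is more concentrated around its founding values—which is exactly the quiet-survival mode described above.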
… That is, if I can do everything such a story needs done right.