Relevant excerpt from Iain M. Banks' new Culture novel, The Hydrogen Sonata:
The Simming Problem – in the circumstances, it was usually a bad sign when something was so singular and/or notorious it deserved to be capitalised – was of a moral nature, as the really meaty, chewy, most intractable problems generally were.
The Simming Problem boiled down to, How true to life was it morally justified to be?
Simulating the course of future events in a virtual environment to see what might happen back in reality, and tweaking one’s own actions accordingly in different runs of the simulated problem to see what difference these would make and to determine whether it was possible to refine those actions such that a desired outcome might be engineered, was hardly new; in a sense it long pre-dated AIs, computational matrices, substrates, computers and even the sort of mechanical or hydrological arrangements of ball-bearings, weights and springs or water, tubes and valves that enthusiastic optimists had once imagined might somehow model, say, an economy.
In a sense, indeed, such simulations first took place in the minds of only proto-sentient creatures, in the deep pre-historic age of any given species. If you weren’t being too strict about your definitions you could claim that the first simulations happened in the heads – or other appropriate body- or being-parts – of animals, or the equivalent, probably shortly after they developed a theory of mind and started to think about how to manipulate their peers to ensure access to food, shelter, mating opportunities or greater social standing.
Thoughts like, If I do this, then she does that … No; if I do that, making him do this … in creatures still mystified by fire, or unable to account for the existence of air, or ice, above their watery environment – or whatever – were arguably the start of the first simulations, no matter how dim, limited or blinded by ignorance and prejudice the whole process might be. They were, also, plausibly, the start of a line that led directly through discussions amongst village elders,
through collegiate essays, flow charts, war games and the first computer programs to the sort of ultra-detailed simulations that could be shown – objectively, statistically, scientifically – to work.
Long before most species made it to the stars, they would be entirely used to the idea that you never made any significant societal decision with large-scale or long-term consequences without running simulations of the future course of events, just to make sure you were doing the right thing. Simming problems at that stage were usually constrained by not having the calculational power to run a sufficiently detailed analysis, or disagreements regarding what the
initial conditions ought to be.
Later, usually round about the time when your society had developed the sort of processal tech you could call Artificial Intelligence without blushing, the true nature of the Simming Problem started to appear.
Once you could reliably model whole populations within your simulated environment, at the level of detail and complexity that meant individuals within that simulation had some sort of independent existence, the question became: how god-like, and how cruel, did you want to be?
Most problems, even seemingly really tricky ones, could be handled by simulations which happily modelled slippery concepts like public opinion or the likely reactions of alien societies by the appropriate use of some especially cunning and devious algorithms; whole populations of slightly different simulative processes could be bred, evolved and set to compete against each other to come up with the most reliable example employing the most decisive short-cuts to
accurately modelling, say, how a group of people would behave; nothing more processor-hungry than the right set of equations would – once you’d plugged the relevant data in – produce a reliable estimate of how that group of people would react to a given stimulus, whether the group represented a tiny ruling clique of the most powerful, or an entire civilisation.
But not always. Sometimes, if you were going to have any hope of getting useful answers, there really was no alternative to modelling the individuals themselves, at the sort of scale and level of complexity that meant they each had to exhibit some kind of discrete personality, and that was where the Problem kicked in.
Once you’d created your population of realistically reacting and – in a necessary sense – cogitating individuals, you had – also in a sense – created life. The particular parts of whatever computational substrate you’d devoted to the problem now held beings; virtual beings capable of reacting so much like the back-in-reality beings they were modelling – because how else were they to do so convincingly without also hoping, suffering, rejoicing, caring, loving and
dreaming? – that by most people’s estimation they had just as much right to be treated as fully recognised moral agents as did the originals in the Real, or you yourself.
If the prototypes had rights, so did the faithful copies, and by far the most fundamental right that any creature ever possessed or cared to claim was the right to life itself, on the not unreasonable grounds that without that initial right, all others were meaningless.
By this reasoning, then, you couldn’t just turn off your virtual environment and the living, thinking creatures it contained at the completion of a run or when a simulation had reached the end of its useful life; that amounted to genocide, and however much it might feel like serious promotion from one’s earlier primitive state to realise that you had, in effect, become the kind of cruel and pettily vengeful god you had once, in your ignorance, feared, it was still
hardly the sort of mature attitude or behaviour to be expected of a truly civilised society, or anything to be proud of.
Some civs, admittedly, simply weren’t having any of this, and routinely bred whole worlds, even whole galaxies, full of living beings which they blithely consigned to oblivion the instant they were done with them, sometimes, it seemed, just for the glorious fun of it, and to annoy their more ethically angst-tangled co-civilisationalists, but they – or at least those who admitted to the practice, rather than doing it but keeping quiet about it – were in a tiny minority, as well as being not entirely welcome at all the highest tables of the galactic community, which was usually precisely where the most ambitious and ruthless species/civs most desired to be.
Others reckoned that as long as the termination was instant, with no warning and therefore no chance that those about to be switched off could suffer, then it didn’t really matter. The wretches hadn’t existed, they’d been brought into existence for a specific, contributory purpose, and now they were nothing again; so what?
Most people, though, were uncomfortable with such moral brusqueness, and took their responsibilities in the matter more seriously. They either avoided creating virtual populations of genuinely living beings in the first place, or only used sims at that sophistication and level of detail on a sustainable basis, knowing from the start that they would be leaving them running indefinitely, with no intention of turning the environment and its inhabitants off at any point.
Whether these simulated beings were really really alive, and how justified it was to create entire populations of virtual creatures just for your own convenience under any circumstances, and whether or not – if/once you had done so – you were sort of duty-bound to be honest with your creations at some point and straight out tell them that they weren’t really real, and existed at the whim of another order of beings altogether – one with its metaphorical finger hovering over an Off switch capable of utterly and instantly obliterating their entire universe … well, these were all matters which by general and even relieved consent were best left to philosophers.
It seems to me that prohibitions on mistreating sims might be the only example of a reasonable moral stricture with no apparent up-side – it’s just avoiding a down-side.
Decent treatment of sentients at your own reality level increases opportunities for cooperation and avoids cycles of revenge, neither of which applies to sims … unless you also have an obligation to let them join your society.
Not necessarily. Think, for example, of the controversy over linking FPS games (the shooter variety, not frames per second) and real-life violence. Now, I’m not advocating such a link here at all, but it is conceivable that how you treat sims carries over, however minor the effect, to how you treat sentients at your own reality level – yielding a potential up-side.
At least in theory, this could be tested. We have the real-world example of people who torture sims (something which seems more psychologically indicative to me than first-person shooter games). It might be possible to find out whether they’re different from people who play Sim City but don’t torture sims, and also whether torturing sims for the fun of it changes people.
Yes, although it would be really, really strange if there were no effect whatsoever – if, in fact, there were any activity at all that you could engage in long-term without it in some way shaping your brain. This is anthropomorphizing, of course; who knows what will or won’t affect far-future individuals. Still, for current humans we could measure the effect size, and define some threshold above which we’d call the effect non-negligible; a sketch of such a test follows.
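For concreteness, such a comparison might look like the minimal sketch below: a standardized mean difference (Cohen's d) between two groups of players, plus a significance test. Everything here is illustrative – the scores, the group names, and the 0.2 "negligible" cut-off (Cohen's conventional "small" effect) are assumptions, not data from any actual study.

```python
# Minimal sketch of the proposed effect-size test, under the assumption that
# we had scores on some callousness/aggression instrument for two groups.
# All numbers below are invented for illustration.
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    pooled_var = ((len(a) - 1) * a.var(ddof=1) +
                  (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical survey scores (higher = more callous).
sim_torturers = [5.1, 6.3, 4.8, 5.9, 6.0, 5.5]
other_players = [4.9, 5.2, 4.7, 5.0, 5.3, 4.8]

d = cohens_d(sim_torturers, other_players)
t, p = stats.ttest_ind(sim_torturers, other_players)

THRESHOLD = 0.2  # Cohen's "small" effect; where to draw the line is itself a judgment call
print(f"d = {d:.2f}, t = {t:.2f}, p = {p:.3f}")
print("effect non-negligible" if abs(d) >= THRESHOLD else "effect negligible")
```

The hard part, of course, is not the arithmetic but the design: self-selection (who chooses to torture sims in the first place?) would confound any simple two-group comparison, which is why the second question above – whether the activity actually changes people – would need a longitudinal study rather than a snapshot.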
I haven’t read THS yet, but I’m surprised that even a civilization written by Banks didn’t think that the correct response to finding oneself as a “vengeful god” is to create an afterlife.
http://wiki.lesswrong.com/wiki/Nonperson_predicate (open problem!)