Do you imagine this to be doable in such a way that the model of the volunteer’s mind is not a morally relevant conscious person (or at least not one who is suffering)? I could be convinced either way.
Are you thinking that the model might suffer psychologically because it knows it will cease to exist after each run is finished? I guess you could minimize that danger by picking someone who thinks they won’t mind being put into that situation, and do a test run to verify this. Let me know if you have another concern in mind.
Mmm, it’s not so much that I think the mind-model is especially likely to suffer; I just want to make sure that possibility is being considered. The test run sounds like a good idea. Or you could inspect a random sampling and somehow see how they’re doing. Perhaps we need a tool along the lines of the nonperson predicate—something like an is-this-person-observer-moment-suffering function.
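To make that a bit more concrete, here is a purely hypothetical sketch of what such a check might look like—sampling observer-moments from a run and gating on a suffering predicate. Every name here (ObserverMoment, is_observer_moment_suffering, audit_run) is invented for illustration, and the predicate itself is left as a stub, since actually computing it is the whole open problem:

```python
from dataclasses import dataclass
import random

@dataclass
class ObserverMoment:
    """Hypothetical snapshot of the mind-model's state at one instant."""
    timestep: int
    state: dict  # whatever internal representation the simulation exposes

def is_observer_moment_suffering(moment: ObserverMoment) -> bool:
    """Hypothetical predicate: True if this observer-moment shows markers
    of suffering. How to compute this is the open problem; stubbed out."""
    raise NotImplementedError("no one currently knows how to write this")

def audit_run(moments: list[ObserverMoment], sample_size: int = 100) -> bool:
    """Inspect a random sample of observer-moments from a finished run.

    Returns True only if none of the sampled moments appear to be
    suffering; a False result would mean the run should not be repeated
    until the cause is understood.
    """
    sample = random.sample(moments, min(sample_size, len(moments)))
    return not any(is_observer_moment_suffering(m) for m in sample)
```

Even as a sketch, it makes the dependency explicit: the random-sampling audit is only as good as the suffering predicate it calls.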