There are two different senses of fidelity of simulation: how well a simulation resembles the original, and how detailed a simulacrum is in itself, regardless of its resemblance to some original.
I realised that the level of suffering and the fidelity of the simulation don’t need to be correlated, but I didn’t make an explicit distinction.
Most think that you need dedicated cognitive structures to generate a subjective "I"; if that's so, then there's no room for conscious simulacra that feel things the simulator doesn't.
I think it’s somewhat plausible that observer moments of mental models are close to those of their authors in moral significance, because they straightforwardly reuse the hardware. Language models can be controlled with a system message that merely tells them who they are, and that seems to be sufficient to install a particular consistent mask, very different from other possible masks.
If humans are themselves masks, that would place other masks right beside the original driver of the brain. The distinction would be having less experience and self-awareness as a mental model, or lacking the privileges to steer, which doesn't make such a mental model a fundamentally different kind of being.
Yeah, I find that plausible, although that doesn't have very much to do with the question of how much they suffer (as far as I can tell). Even if consciousness is cognitively just a form of awareness of your own perception of things (as in AST or HOT theories), you still at least need a bound locus of experience, and if that locus is the same as 'yours', then whatever the simulacra experience will be registered within your own experience.
I think the main problem here would be simulating beings that are suffering considerably. If you don't suffer much while simulating them (which is how most people experience these simulations, except perhaps the hyper-empathic or people with really detailed tulpas/personas), then it's not a problem.
It might be a problem in a case like this: you consciously create a persona that you then want to delete, and they are aware of it and feel bad about it (or, more generally, you know you'll create a persona that will suffer, e.g. from disliking certain aspects of the world). But you should notice those feelings just as you notice the feelings of any of the 'conflicting agents' you might have in your mind.