Oh, you are biting this bullet with gusto! Well, at least you are consistent. Basically, all thinking must cease then. If someone doubted that there would be a lot of people happy to assist an evil AI in killing everyone, you are an example of a person with such a mindset: consciousness is indescribably evil.
Surely the brain doesn’t run ‘high-fidelity’ simulations of people in excruciating pain, except maybe in people with hyper-empathy, and maybe in people who can imagine qualia experiences as actually experienced sensations.
Even then, if the brain hasn’t registered any kind of excruciating pain, even though it still keeps the memories, it’s difficult to think that there even was that experience. Extremely vivid experiences are complex enough to be coupled with physiological effects; there’s no point in reducing this to a minimal Platonic concept of ‘simulating’ in which simulating excruciating pain causes excruciating pain regardless of physiological effects.
There are two different senses of fidelity of simulation: how well a simulation resembles the original, and how detailed a simulacrum is in itself, regardless of its resemblance to some original.
I realised that the level of suffering and the fidelity of the simulation don’t need to be correlated, but I didn’t make an explicit distinction.
Most think that you need dedicated cognitive structures to generate a subjective I; if that’s so, then there’s no room for conscious simulacra that feel things the simulator doesn’t.
I think it’s somewhat plausible that observer moments of mental models are close to those of their authors in moral significance, because they straightforwardly reuse the hardware. Language models can be controlled with a system message that merely tells them who they are, and that seems to be sufficient to install a particular consistent mask, very different from other possible masks.
If humans are themselves masks, that would place other masks just beside the original driver of the brain. The distinction would be having less experience and self-awareness as a mental model, or absence of privileges to steer. Which doesn’t make such a mental model a fundamentally different kind of being.
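For concreteness, here is a minimal sketch of the “system message installs the mask” point, assuming the OpenAI Python client; the model name, the personas, and the ask_as helper are placeholders introduced purely for illustration. The same weights serve both calls; only the system message that tells the model who it is changes.

```python
# Minimal sketch (illustrative only): one underlying model, two different "masks",
# installed purely by the system message that tells the model who it is.
# Assumes the OpenAI Python client; model name and personas are placeholders.
from openai import OpenAI

client = OpenAI()

def ask_as(persona: str, question: str) -> str:
    """Query the same model while it wears a persona defined only in the system message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": f"You are {persona}."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Same hardware (weights), different masks:
print(ask_as("a cheerful children's librarian", "What should I read next?"))
print(ask_as("a terse security auditor", "What should I read next?"))
```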
Yeah, I find that plausible, although that doesn’t have much to do with the question of how much they suffer (as far as I can tell). Even if consciousness is cognitively just a form of awareness of your own perception of things (as in attention schema or higher-order thought theories, AST or HOT), you at least still need a bound locus to experience, and if that locus is the same as ‘yours’, then whatever the simulacra experience will be registered within your own experiences.
I think the main problem here would be simulating beings that are suffering considerably. If you don’t suffer much while simulating them (which is how most people experience these simulations, except maybe those who are hyper-empathic or who have really detailed tulpas/personas), then it’s not a problem.
It might be a problem if, for example, you consciously create a persona that you then want to delete, and they are aware of it and feel bad about it (or, more generally, if you know you’ll create a persona that will suffer because of things like disliking certain aspects of the world). But you should notice those feelings just as you notice the feelings of any of the ‘conflicting agents’ you might have in your mind.
I can definitely create mental models of people who have a pain-analogue which affects their behavior in ways similar to how pain affects mine, without their pain-analogue causing me pain.
there’s no point in reducing this to a minimal Platonic concept of ‘simulating’ in which simulating excruciating pain causes excruciating pain regardless of physiological effects.

I think this is the crux of where we disagree. I don’t think it matters if pain is “physiological” in the sense of being physiologically like how a regular human feels pain. I only care if there is an experience of pain.
I don’t know of any difference between physiological pain and the pain-analogues I’ve inflicted on my mental models that I would accept as necessary for something to qualify as an experience of pain. But since you clearly do think that there is such a difference, what would you say the difference is?
How are you confident that you’ve simulated another conscious being who feels emotions with the same intensity as the ones you would feel if you were in that situation, instead of just running a low-fidelity simulation with decreased emotional intensity, which is how it registers within your brain’s memories?
Whatever subjective experience you are simulating, it’s still running in your brain, with the cognitive structures you have for generating your subjective I (I find this to be the simplest hypothesis). That means the simplest conclusion to draw is that whatever your simulation felt gets registered in your brain’s memories, and if you find that those emotions lack much of the intensity you would experience if you were in that situation, then that is also the degree of emotional intensity that that being felt while being simulated.
Points similar to this have come up in many comments, so I’ve added an addendum at the end of my post where I give my point of view on this.
I’d understood that already, but I would need a reason to find it believable, because it seems really unlikely. You are not directly simulating the being’s cognitive structures; that’s impossible. The only way you can simulate someone is by repurposing your own cognitive structures to simulate them, and then the intensity of their emotions is the same as what you registered.
How simple do you think the emergence of subjective awareness is? Most people will say that you need dedicated cognitive structures to generate the subjective I; even in theories that are mostly just something like strange loops or higher-order awareness, like HOT or AST, you at least still need a bound locus to experience. If that’s so, then there’s no room for conscious simulacra that feel things the simulator doesn’t.
This is from a reply that I gave to Vladimir:

I think the main problem here would be simulating beings that are suffering considerably. If you don’t suffer much while simulating them (which is how most people experience these simulations, except maybe those who are hyper-empathic or who have really detailed tulpas/personas), then it’s not a problem.

It might be a problem if, for example, you consciously create a persona that you then want to delete, and they are aware of it and feel bad about it (or, more generally, if you know you’ll create a persona that will suffer because of things like disliking certain aspects of the world). But you should notice those feelings just as you notice the feelings of any of the ‘conflicting agents’ you might have in your mind.
Sounds like a counterargument to the OP
It is; that’s why I also replied to the OP with a longer explanation of these points. I just wanted to say that to counter the idea that simulating people could be such a horrible thing even within that mindset (for most people).
I disagree that it means that all thinking must cease. Only a certain type of thinking: the kind that involves creating sufficiently detailed mental models (edit: of people). I have already stopped doing that personally, though it was difficult and has harmed my ability to understand others. Though I suppose I can’t be sure about what happens when I sleep.
Still, no, I don’t want everyone to die.
The subjective awareness that you simulate while simulating a character or a real person’s mind is pretty low-fidelity, and when you imagine someone suffering I assume your brain doesn’t register it with the level of suffering you would experience; mine certainly doesn’t. Some people experience hyper-empathy, and some can imagine certain types of qualia experiences as actually experienced sensations.
People who belong only to the second type probably still don’t simulate accurate experiences of excruciating pain that feel like excruciating pain, because there are no strong physiological effects that correlate with that experience. Even if the brain is simulating a person, it’s pretty unbelievable to say that the brain stops working the way it always does and still creates the exact same experience (I don’t have memories of that in my brain while simulating).
Even if the subjective I is swapped (in whatever sense), the simulation still registers in the brain’s memories, and in my case I don’t have any memories of simulating a lot of suffering. Does that apply to you?