That is, it could surely make you suffer from depression, but since you are not the type of person who’d naturally suffer from depression, it wouldn’t really be you.
Either you really don’t understand depression, or your definition of identity revolves around some very transient chemical conditions in the body.
Good point. I should have picked up on that.
I’m a manic depressive. Does this mean I’m a different person at each level along the scale between mania and depression?
It wasn’t meant that literally. What I meant, rather, is that the AI could make you fear pink balloons and then expose you to a world full of pink balloons. But if you extrapolate this reasoning, the AI might as well just torture a torture-optimization device.
Here is where I’m coming from. All my life since abandoning religion, I have feared that something could happen to my brain that would make me fall for such delusions again. But I think that fear is unreasonable: if I came to want to be religious again, I wouldn’t mind, because that would then be my preference. In other words, I’m suffering from my imagination of an impossible being that is me but not really me, one that is dumb enough to strive to be religious yet fears being religious.
That means some AI could torture me, but not infinitely so while retaining anything that, on average, the pre-torture me would have cared about.
P.S. I’m just having some fun trying to figure out why some people here are so horrified by such scenarios. I can’t help but feel nothing about them.
I assign high negative utility to the torture of any entity. The scenario might be more salient if the entity in question is me (for whatever definition of identity you care to use), but I don’t care much more about myself than I do other intelligences.
The only reasons we care about other people are either to survive, i.e. to get what we want, or because it is part of our preferences to see other people happy. Accordingly, trying to maximize happiness for everybody can be seen as purely selfish: either as an effort to survive, by making everybody want to make everybody else happy (as insurance for the case where somebody other than you wins), or simply because it makes oneself happy.
You can reduce every possible motivation to selfishness if you like, but that makes the term kind of useless; if all choices are selfish, describing a particular choice as selfish has zero information content.
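To make the “zero information content” point concrete (this information-theoretic reading is my gloss, not part of the original exchange): if every possible choice counts as selfish, then the probability that any given choice is selfish is 1, so being told that a particular choice is selfish carries no surprisal:

$I(\text{“this choice is selfish”}) = -\log_2 P(\text{selfish}) = -\log_2 1 = 0$ bits.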
You should be more cautious about telling other people what their motivations are. I would die to save the world, and I don’t seem to be alone in this preference. And this neither helps me survive nor makes me momentarily happy enough to offset the whole dying thing.
That terminology is indeed useless. All it does is obfuscate matters.
What’s your point anyway?
You should be careful not to conflate “preference” and “things that make oneself happy”. Or make that a more clearly falsifiable component of your hypothesis.
Why would anyone have a preference detached from their personal happiness? I do what I do because it makes me feel good to do what I think is the right thing. Deliberately doing the wrong thing makes me unhappy.
I don’t care much more about myself than I care about other intelligences.
I care about other intelligences and myself to an almost equal extent.
I care about myself and other intelligences.
I care about myself. I care about other intelligences.
I care about my preferences.
What does it mean to care more about others? Who’s caring here? If you want other people to be happy, why do you want it if not for your own comfort?
I’m a vegetarian because I don’t like unnecessary suffering. That is, I care about not feeling bad myself: if others are unhappy, I’m also unhappy. If you’d rather die than cause a lot of suffering in others, that is not to say that you care more about others than about yourself; that is nonsense.