Tulpa creation is effectively the creation of a form of sentient AI that runs on the hardware of your brain instead of silicon.
That brings up a moral question. To what extent is it immoral to create a tulpa and have it be in pain?
Tulpas are supposed to suffer from not getting enough attention, so if you can't commit to giving one a lot of attention for the rest of your life, you might commit an immoral act by creating it.
Just some facts, without getting entangled in the argument: in anecdotes, tulpas seem to report more abstract and less intense types of suffering than humans. By far the dominant source of suffering in tulpas seems to be empathy with the host. The suffering from not getting enough attention is probably fully explainable by loneliness, and by sadness over fading away and losing the ability to think and do things.
This is very useful information if true. Could you link to some of the anecdotes which you draw this from?
Look around for yourself on http://www.reddit.com/r/Tulpas/ or ask around yourself in the various IRC rooms that can be reached from there. I only have vague memories built from threads buried months back on that subreddit.
No, I don’t think so. It’s notably missing the “artificial” part of AI.
I think of tulpa creation as splitting off a shard of your own mind. It’s still your own mind, only split now.
I think the really relevant ethical question is whether a tulpa has a separate consciousness from its host. From my own research in the area (which has been very casual, mind you), I consider it highly unlikely that they have a separate consciousness, but not so unlikely that I would be willing to create a tulpa and then let it die, for example.
In fact, my uncertainty on this issue is the main reason I am ambivalent about creating a tulpa. It seems like it would be very useful: I solve problems much better when working with other people, even if they don’t contribute much; a tulpa more virtuous than myself could be a potent tool for self-improvement; it could help ameliorate the “fear of social isolation” obstacle to potential ambitious projects; I would gain a better understanding of how tulpas work; I could practice dancing and shaking hands more often; etc. etc. But I worry about being responsible for what may be (even with only ~15% subjective probability) a conscious mind, which will then literally die if I don’t spend time with it regularly (ref).
Just to clarify this a little… how many separate consciousnesses do you estimate your brain currently hosts?
By my current (layman’s) understanding of consciousness, my brain currently hosts exactly one.
OK, thanks.
It's not your normal mind, so it's artificial for ethical considerations.
As far as I can tell from what's written by people with tulpas, they treat them as entities whose desires matter.
This might be a stupid question, but what ethical considerations are different for an “artificial” mind?
When talking about AGI, few people label it as murder to shut down the AI that's in the box. At least it's worth discussing whether it is.
Only if it’s not sapient, which is a non-trivial question.
Wow, I had forgotten about that non-person predicates post. I definitely never thought it would have any bearing on a decision I personally would have to make. I was wrong.
Really? I was under the impression that there was a strong consensus, at least here on LW, that a sufficiently accurate simulation of consciousness is the moral equivalent of consciousness.
"Sufficiently accurate simulation of consciousness" is a subset of the set of things that are artificial minds. You might have a consensus for that class. I don't think you have a consensus that all minds have the same moral value, or even that all minds with a certain level of intelligence do.
At least for me, personally, the relevant property for moral status is whether it has consciousness.
That's my understanding as well… though I would say, rather, that being artificial is not a particularly important attribute when evaluating the moral status of a consciousness. IOW, an artificial consciousness is a consciousness, and the same moral considerations apply to it as to other consciousnesses with the same properties. That said, I also think this whole "a tulpa {is,isn't} an artificial intelligence" discussion is an excellent example of losing track of referents in favor of manipulating symbols, so I don't think it matters much in context.
I don’t find this argument convincing.
Yes, and..?
Let me quote William Gibson here:
Addictions … started out like magical pets, pocket monsters. They did extraordinary tricks, showed you things you hadn’t seen, were fun. But came, through some gradual dire alchemy, to make decisions for you. Eventually, they were making your most crucial life-decisions. And they were … less intelligent than goldfish.
There's a good chance that you will also hold that belief when you interact with the tulpa on a daily basis. As such, it makes sense to think about the implications of the whole affair before creating one.
I still don't see what you are getting at. If I treat a tulpa as a shard of my own mind, of course its desires matter; they're the desires of my own mind.
Think of having an internal dialogue with yourself. I think of tulpas as a boosted/uplifted version of a party in that internal dialogue.