How conscious are our models of other people? For example, in dreams it seems like I am talking and interacting with other people. Their behavior is sometimes surprising and unpredictable. They use language, express emotion, appear to have goals, etc. It could just be that I, being less conscious while dreaming, see dream-people as more conscious than they really are.
I can somewhat predict what other people in the real world will do or say, including what they might say about experiencing consciousness.
Authors can create realistic characters, plan their actions and internal thoughts, and explore the logical (or illogical) results. My guess is that the more intelligent/introspective an author is, the closer the characters floating around in his or her mind are to being conscious.
Many religions encourage people to have a personal relationship with a supernatural entity which involves modeling the supernatural agency as an (anthropomorphic) being, which partially instantiates a maybe-conscious being in their minds...
Maybe imaginary friends are real.
Since this is a crazy ideas thread, I’ll tag on the following thought. If you believe that, should we be able to make ems in the future, we ought to include them in our moral calculus, should we also be careful not to imagine people in bad situations? By doing so, we may be making a very low-level simulation of that person in our own mind, one that may or may not have some consciousness. If you don’t believe that is the case now, how does that scale if we start augmenting our minds with ever-more-powerful computer interfaces? Is there ever a point where it becomes immoral just to think of something?
God kind of ran into the same problem. “What if The Universe? Oh, whoops, intelligent life, can’t just forget about that now, can I? What a mess… I guess I better plan some amazing future utility for those poor guys to balance all that shit out… It has to be an infinite future? With their little meat bodies how is that going to work? Man, I am never going to think about things again. Hey, that’s a catchy word for intelligent meat agents.”
So, in short, if we ever start thinking truly immoral things, we just need to out-moral them with longer, better thoughts. Forgetting about our mental creations is probably the most immoral thing we could do.
In e.g. Christianity it’s immoral to think of a lot of things :-/
Not exactly. If I ask you “what if you robbed a bank?” you will think of robbing a bank; you cannot prevent yourself from thinking about it. And yes, you just lost the Game.
What makes such a “thinking of a lot of things” immoral is not the thinking itself, but whether it is coupled with a desire.
But you think you can prevent desire from sneaking into your thinking about sinful things..? ;-)
In the interest of steel-manning the Christian view: there’s a difference between thinking briefly and abstractly of the idea of something and indulging in fantasy about it.
If you spend hours imagining the feel of the gun in your hand, the sound of the money sliding smoothly into the bag, the power and control, the danger and excitement, it would be fair to say that there’s a point where you could have made the choice to stop.
Yes, of course, there is a whole range of, let’s say, involvement in these thoughts. But if I understand mainstream Catholicism correctly, even a brief lustful glance at the neighbor’s wife is a sin. Granted, a lesser sin than constructing a whole porn movie in your head, but still a sin.
Yes, and in Yudkowskian rationality, lying to oneself is a sin.
What’s wrong with having a conception of sin that includes thoughts?
Well, that’s why I called it steel-manning; I can’t promise anything about the reasonableness of the common interpretation.
That depends on how strongly a person is suggestible.
It doesn’t. Just by parsing that sentence, if you understood it, it means you thought of it.
No, it’s quite possible to parse the sentence without actually going along with it. Just because you can’t doesn’t mean that other people can’t.
In this case we should define “going along” and “thinking of”, because otherwise this will just be empty arguing about semantics.
My point was that parsing and understanding that sentence means you are thinking of it, even if only for a moment, and that this is different from having even the slightest desire to actually do it. Where does your definition of “going along” fit into that?
You worded this badly, but I agree.
It is possible to read “you robbed a bank” without imagining robbing a bank. It’s just very hard, and maybe impossible if you haven’t prepared yourself.
So George R. R. Martin is a very evil man.
“A tulpa could be described as an imaginary friend that has its own thoughts and emotions, and that you can interact with. You could think of them as hallucinations that can think and act on their own.” https://www.reddit.com/r/tulpas/
The reason I posted originally was that I was thinking about how some Protestant sects instruct people to “let Jesus into your heart to live inside you” or similar. So implementing a deity via distributed tulpas is...not impossible. If that distributed tulpa can reproduce into new humans, it becomes almost immortal. If it has access to most people’s minds, it is almost omniscient. Attributing power to it and doing what it says gives it some form of omnipotence relative to humans.
Some authors say that their characters will resist plot elements they (the characters) don’t like.
Some would say that this is their imagination.
I resist plot elements that my empathy doesn’t like, to the point that I will imagine alternate endings to particularly unfortunate stories.
This does not weird me out. They use parts of their brain to simulate the characters’ brains. They use another part of their brain to write a plot they themselves like. Why should the simulated character necessarily like it? If they have good simulation skills—and if they don’t, they will never write memorable characters—this is perfectly expected...
What exactly does “consciousness” even mean here, though?
I’ve written fiction before, and at my best, my model of my characters included:
Complex emotions (as in multiple emotions at once with varying intensities)
Intelligent behavior (by borrowing my own intelligence, my characters could react intelligently to the same range of situations as I can)
Preferences (they liked/disliked, loved/hated, desired/feared, etc)
Self-awareness (a well-written character’s model includes a model of itself, and can do introspection using the intelligent behavior above)
What else is necessary to have consciousness? There are plenty of other things that could be important, but they don’t seem necessary to me upon reflection. For example:
Continuity of self: Time skips in stories involve coming up with approximately what happens over a period of time, and simply updating the character model based on the expected results of that. But if it turned out that significant parts of my life were skipped and fake memories of them were added, I would still value myself for the moments that weren’t skipped over, so I don’t think this is necessary for consciousness.
Independence: I can technically make a character do whatever I want, but that often breaks their characterization, and if someone were a god and could make me think or do anything, I’d want to get free, but I would still value myself.
Consistency: At my best my characters are mostly consistent in characterization, but I’m not often at my best. But mood swings and forgetting things happen to real people too, so I don’t think it’s a deal-breaker on consciousness.
Subconscious: A really good author almost certainly uses their own subconscious to model the character and their behavior, so it’s not clear that a character doesn’t have a subconscious. It’s not quite the same as a real person’s, but as long as it still results in the big 4 at the top, I don’t think this matters much.
Advanced senses: Visualization is hard, so none of my characters have vision as good as mine, and even then, I usually just include basic awareness of their surroundings in their models rather than visualize every scene from each character’s perspective. But then, blind people are people, so that’s still not necessary for consciousness.
Maybe missing any one of these alone isn’t enough to stop something from being a person, but missing all of them combined is? But I don’t see a reason why that would be the case.
I suppose you could argue that the character is just the author playing a role. That’s true, but if a subset of you is practically a complete person in its own right, with emotions, preferences, and behaviors different from your own, then saying it doesn’t matter because it’s just a subset of you doesn’t sit well with me.
So where did I go wrong with this long chain of reasoning? What am I missing?
This is a great blog post on a similar idea: http://www.meltingasphalt.com/neurons-gone-wild/