Hitler’s evil actions were determined by the physical structure of his brain. His brain was built by genes (which he didn’t choose), and modified by his environment (which he didn’t choose), and then certain environmental inputs (which he didn’t choose) caused his brain to output genocide. If you had Hitler’s genes and Hitler’s environment, you would have Hitler’s brain and so you would do as Hitler did.
Punishing someone, or in this case withholding high-resolution paradise, can only be useful and good insofar as it changes behaviour or acts as a deterrent to others, ultimately reducing suffering. If you have infinite power, there is no longer a need to punish anyone, since you can just end all suffering directly by giving everyone their own high-resolution paradise, or whatever the ideal heaven is. Punishment becomes nothing but pointless, evil cruelty the second we achieve the ability to prevent people from hurting each other without it.
So there are two different facets of the hypothetical ancestor simulation response I came up with.
A) deliberately not being a paradise
B) not connecting it to some broader network of simulation paradises.
I can totally buy coming to believe the first part is pointlessly cruel. The second part feels more like it’s… actually enforcing boundaries for the safety of others.
The ‘infinite energy’ clause is a bit weird here. If ‘you’ have total control over not just infinite energy but also the entire posthuman world, then yeah, you can do things like let Hitler wander around making new allies and… somehow intervene if this starts to go awry. But I have an easier time imagining being confident in ‘not letting Hitler out of the box until he’s trustworthy’ than in the latter. (I.e. there can be infinite energy ‘around’ but not actually under uniform control.)
Also, it’s not obvious to me which is more cruel. (I think it depends on Hitler’s own values)
Also, while I said ‘infinite energy’ in the hypothetical, I do think in most optimistic worlds we still end up with only ‘very large finite energy’, and I don’t even know that I’d get around to doing any kind of ancestor sim at all for him, let alone optimizing it fully for him. I think I love Hitler, but I also think I love everyone else, and it just seems reasonable to prioritize both the safety and well-being of people who didn’t go out of their way to create horrific death camps and manipulate their way into power.
You raise two very valid concerns: that Hitler might hurt others if you allow him to interact with them, and that Hitler might find a way to escape the box.
Even if Hitler was willing to reflect on his actions and change, his presence in the network (B) would likely make other people unhappy.
So while I think (A) is ethically mandatory if you can contain him, (B) comes with a lot of complex problems that might not be solvable.
Hitler’s evil actions were determined by the physical structure of his brain. [...] certain environmental inputs (which he didn’t choose) caused his brain to output genocide.
I can’t speak for you, but I personally can choose to stop thinking thoughts if they are causing suffering, and instead think a different thought. For example, if I notice that I’m replaying a stressful memory, I might choose to pick up the guitar and focus on those sounds and feelings instead. This trains neural pathways that make me less and less susceptible to compulsively “output genocide.”
Sure, “I” am as much a part of the environment as anything else, as is “my” decision-making process. So you could say that it’s the environment choosing a brain-training input, not me. But “I” am what the environment feels like in the model of reality simulated by a particular brain. And there is a decision-making process happening within the model, led by its intentions.
Hitler had a choice. He could make an effort to train certain neural pathways of the brain, or he could train others by default. He chose to write divisive propaganda when he should have painted.
The bad outcomes that followed were not compelled by the environment. They are attributable to particular minds. We who have capacity for decision-making are all accountable for our own moral deeds.
The bit of your brain that chooses to think nice thoughts (“I”/“me”) is just as much a product of your genes and environment as the bit of your brain that wants to think bad thoughts.
You didn’t choose to have a brain that tries not to think bad thoughts, and Hitler didn’t choose to have a brain that outputs genocide when given some specific environmental conditions. The only way Hitler could have realised that his actions were bad and chosen to be good would be if his genes and environment built a brain that would do so given some environmental input.
The only way Hitler could have realised that his actions were bad and chosen to be good would be if his genes and environment built a brain that would do so given some environmental input.
The brain is an ongoing process, not a fixed thing that is given at birth. Hitler was part of the environment that built his brain. Many crucial developmental inputs came from the part of the environment we call Hitler.
You didn’t choose to have a brain that tries not to think bad thoughts
I did and do choose my intentions deliberately, repeatedly, with focused effort. That’s a major reason the brain develops the way it does. It generates inputs for itself, through conscious modeling. It doesn’t just process information passively and automatically based solely on genes and sensory input. That’s the Chinese Room thought experiment—information processing devoid of any understanding. The human mind reflects and practices ways of relating to itself and the environment.
You never get a pass to say, “Sorry I’m killing you! I’m not happy about it either. It’s just that my genes and the environment require this to happen. Some crazy ride we’re on together here, huh?” That’s more like how a mouse trap processes information. With the human level of awareness, you can actually make an effort and choose to stop killing.
We help create the world—discover the unknown future—by resolving uncertainty through this lived process. The fact that decision-making and choosing occur within reality (or “the environment”) rather than outside of it is logical and necessary. It doesn’t mean that there is no choosing. Choosing is merely real, another step in the causal chain of events.
So why do some people choose to do good while others choose to do evil? I think genes and environment are fully sufficient to explain why people make different choices, but if you have an alternate hypothesis I’d be interested to hear it. But the answer can’t be something like “because some people choose different intentions” because then you’d have to explain why some people have different intentions.
To put it another way, you may choose your intentions deliberately, but did you make the choice to be the kind of person who chooses intentions deliberately? And if so, did you make the choice to be the kind of person who made that choice? (and so on…). If you go far enough back in the causal chain, it all goes back to the genes and environment that built a brain that does all those other things.
I can kind of see what you’re getting at with the self-modification thing. I self-modified my own thought patterns to become a nicer person. But as for why I did that: my genetics gave me high trait openness, and I was given a book that encouraged self-modification toward niceness when I was a child. So in this way, I chose to be a nicer person, and so I choose to do nice things, but factors outside of my control caused my original choice to become nicer.
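This regress can be sketched as a toy program (a deliberately crude illustration, not a model of real brains; every name and event in it is invented for the example): an agent whose disposition can rewrite itself, but whose initial openness to self-modification is supplied from outside.

```python
# Toy illustration only: a deterministic agent whose disposition can
# modify itself, but whose *initial* willingness to self-modify is
# handed to it from outside ("genes and environment").

def run_agent(initial_openness: bool, inputs: list) -> list:
    niceness = 0                        # current disposition
    open_to_change = initial_openness   # fixed from outside the agent
    actions = []
    for event in inputs:
        if event == "book_on_kindness" and open_to_change:
            niceness += 1               # the "self-modification" step
        actions.append("nice" if niceness > 0 else "default")
    return actions

# Identical inputs, different externally given starting point:
print(run_agent(True,  ["book_on_kindness", "provocation"]))   # ['nice', 'nice']
print(run_agent(False, ["book_on_kindness", "provocation"]))   # ['default', 'default']
```

The self-modification step genuinely happens inside the agent, yet whether it happens at all traces back to the one parameter the agent never chose.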
Can you make precise predictions of behaviour, given that information...?

No, but only because I lack the computing power to do so. A very powerful AI could.

How do you know that computational power is the only limitation?

We can simulate the brain of C. elegans; I see no reason why it couldn’t theoretically be scaled up to a human brain. I guess technically you need computation AND a full map of the human brain for that, not just computation.

How do you know that indeterminism isn’t also a limitation to prediction?
You do have some computing power, though. You compute choices according to processes that are interconnected with all other processes, including genetic evolution and the broader environment.
These choosing-algorithms operate according to causes (“inputs”), which means they are not random. Rather, they can result in the creation of information instead of entropy.
The environment is not something that happens to us. We are part of it. We are informed by it and also inform it in turn, as an output of energy expenditure.
Omega hasn’t run the calculation that you’re running right now. Until you decide, the future is literally undecided.
I think the atoms in my brain will follow the laws of physics until a choice is made. And to me that process feels like I’m deciding something, because that’s what computation feels like from the inside. But actually the outcome is predetermined.
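The “that’s what computation feels like from the inside” point can also be put as a toy sketch (purely illustrative; the function and its scoring scheme are invented for the example): deliberation really occurs as a process, yet identical state and identical inputs always yield the identical choice.

```python
# Crude analogy only: a deterministic "decision" procedure. The weighing
# of options really takes place, but the same brain-state plus the same
# inputs always produce the same choice.

def decide(brain_state: dict, options: list) -> str:
    # "Deliberation": score each option against the current values.
    scores = {opt: sum(brain_state.get(word, 0) for word in opt.split("_"))
              for opt in options}
    return max(scores, key=scores.get)  # outcome fixed entirely by the inputs

state = {"kindness": 2, "anger": 1}
print(decide(state, ["act_kindness", "act_anger"]))  # 'act_kindness'
print(decide(state, ["act_kindness", "act_anger"]))  # same inputs, same output
```

Nothing in the loop feels “predetermined” from inside the computation, but rerunning it can never turn out differently.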
So why do some people choose to do good while others choose to do evil?
Intentions depend on beliefs, i.e. the views a person holds, their model of reality. A bad choice follows from a lack of understanding: confusion, delusion, or ignorance about the causal laws of this world.
A “choice to do evil” in the extreme could be understood as a choice stemming from a worldview such as “harm leads to happiness.” (In reality, harm leads to suffering.)
How could someone become so deluded? They succumbed to evolved default behaviors like anger, instead of using their freedom of thought to cultivate more accurate beliefs about what does and does not lead to suffering.
People like Hitler made a long series of such errors, causing massive suffering. They failed to use innumerable opportunities, moment by moment, to allow their model to investigate itself and strive to learn the truth. Not because they were externally compelled, but because they chose wrongly.
I think there are lots of specific internal reasons why people make bad choices: sometimes it’s just pure selfishness or sadism.
But as for why some people are delusional, selfish, or sadistic, as for why some people “succumb to evolved default behaviors like anger, instead of using their freedom of thought”: I’m not really seeing an alternate explanation here other than that some people were unlucky enough to have genes and an environment that built a brain that followed the laws of physics until they did something bad. And from an internal perspective, maybe the people who did good things had a self-modification step where the environment that is their brain modified their brain to have better intentions. But that doesn’t really matter from the perspective of judging someone, because all the factors that made a brain that would do self-modification in the first place were outside of that person’s control.
And that doesn’t mean that you shouldn’t punish people where it will change their behaviour or act as a deterrent, or keep others safe.
But it does mean that there is no justice in retributive punishment. And it means there’s no point in hating people and wanting them to suffer. And it means that if you have infinite energy and resurrect Hitler, then you should give him paradise rather than punishment.