First, a few minor things I would like to get out there:
According to consensus, which I do not dispute (since it is well founded), we are slowly approaching heat death. If I recall correctly, we are supposed to approach maximum entropy asymptotically. Can we, with our current knowledge, completely rule out the possibility of some kind of computational machinery existing and waking up every now and then (at longer and longer intervals) in the wasteland universe to churn through a few cycles of a simulated universe?
I don’t quite see the difference between real and simulated torture in the context of a civilization as advanced as the one you are arguing we should not let develop. So I’m not sure what you are getting at by mentioning them as separate things.
You need to read up on fun theory. And if you disregard it, let me just point out that worrying about people not having fun is a different concern from assuming they will experience mental anguish at the prospect of suicide or an inevitable death. Actually, not having fun can be neatly solved by suicide once you have exhausted all other options, as long as you aren’t built to find committing to it stressful.
Now, assuming your overall argument has merit:
My value function says it’s better to have loved and lost than not to have loved at all.
Humans may have radically different values once they are blown up to scale. Unless you get your fingers into the first AI’s values first, there will always be a nonzero fraction of agents who would wish to carry on even knowing it will increase total suffering, because they feel their values are worth suffering for. I am basically talking about practicality now: so what if you are right? The only way to do anything about it is to make sure your AI eliminates anything, be it human or alien AI, that could paperclip anything like beings capable of suffering. To do this properly in the long run (and not just kill or sterilize all humans, which is easy), you need to understand friendliness much better than we do now.
If you want to learn about friendliness, you had better try to learn to deceive the agents with whom you might be able to work together to figure out more about it, especially concerning your motives. ;)
According to consensus, which I do not dispute (since it is well founded), we are slowly approaching heat death. If I recall correctly, we are supposed to approach maximum entropy asymptotically. Can we, with our current knowledge, completely rule out the possibility of some kind of computational machinery existing and waking up every now and then (at longer and longer intervals) in the wasteland universe to churn through a few cycles of a simulated universe?
Dyson’s eternal intelligence. Unfortunately I know next to nothing about physics so I have no idea how this is related to what we know about the universe.
It runs into edge conditions we know little about, like whether protons are stable or not. (The answer appears to be no, by the way.)
At this point in time I would not expect to be able to do infinite computation in the future. The future has a way of surprising us, though; I’d prefer to wait and see.
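To make the intuition behind Dyson’s proposal concrete (a toy illustration only, not Dyson’s actual derivation, and the scaling is assumed purely for the sake of example): the scheme works if the energy dissipated per burst of computation falls off fast enough that the total energy budget stays finite while the number of bursts, and hence the subjective time accumulated, grows without bound. Suppose, for illustration, that burst $k$ dissipated energy $E_k = E_1 / k^2$. Then

\[
  \underbrace{\sum_{k=1}^{\infty} 1}_{\text{bursts of computation}} = \infty,
  \qquad
  \underbrace{\sum_{k=1}^{\infty} \frac{E_1}{k^2}}_{\text{total energy dissipated}} = \frac{\pi^2}{6}\,E_1 < \infty .
\]

So an ever-sparser schedule of wake-ups is at least arithmetically compatible with a finite energy budget; whether the actual physics (e.g. the proton stability question raised above) permits it is exactly the open issue.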
I don’t quite see the difference between real and simulated torture...
I tried to highlight the increased period of time you have to take into account. It allows for even more suffering than the already huge time span, as seen from a human perspective, would imply.
You need to read up on fun theory.
Indeed, but I felt this additional post was required, as many people were questioning this point under the other post. Also, I came across a post by a physicist which triggered this one. I simply have my doubts that the sequence you mention has resolved this issue, but I will read it, of course.
My value function says it’s better to have loved and lost than not to have loved at all.
Mine too. I would never recommend giving up. I want to see the last light shine. But I perceive many people here to be focused on the amount of possible suffering, so I thought I would inquire what they would recommend if it is more likely that overall suffering will increase. Would they rather pull the plug?