Maybe the right question here is: is it possible to create ever-stronger qualia of pain, or is the level of pain bounded?
If the maximum level of pain is bounded at, say, 10 out of 10, then an evil AI has to create complex worlds, as in the story “I Have No Mouth, and I Must Scream”, trying to affect many of our values in the most unpleasant combination: playing anti-music, as it were, by pressing on different values.
If there is no limit to the possible intensity of pain, an evil AI will instead invest in upgrading the human brain so that it can feel more and more pain. In that case there is no complexity, just growing intensity. One can see this type of hell in the ending of von Trier’s latest movie, “The House That Jack Built”. This type of hell is more disturbing to me.
In the Middle Ages the art of torture existed, and this distinction existed too: some tortures were sophisticated, while others were simple but infinitely intense, like testicle torture.
But you seem to have described these hells quite well—enough for us to clearly rule them out.
I don’t understand why you rule them out completely: at least at the personal level, long intense suffering does exist and has happened en masse in the past (cancer patients, concentration camps, witch hunts).
I have suggested two different arguments against s-risks:
1) Anthropic: s-risks are not the dominant type of experience in the universe, or we would already be in one.
2) A larger AI could “save” minds from smaller but evil AIs by creating many copies of those minds and thus creating indexical uncertainty (detailed explanation here), as well as punish copies of the evil AI, thereby discouraging any AI from implementing s-risks.
The question of this post is whether there exist indescribable hellworlds—worlds that are bad, but where it cannot be explained to humans how/why they are bad.
Yes, I probably understood “indescribable” as a synonym for “very intense”, not as literally “can’t be described”.
But now I have one more idea about a really “indescribable hellworld”: imagine that there is a quale of suffering which is infinitely worse than anything any living being has ever felt on Earth, and it appears in some hellworld, but only in animals or in humans who can’t speak (young children, patients just before death), or its intensity paralyses the ability to speak and it can’t be remembered (I have read historical cases of pain so intense that the person was unable to provide very important information).
So this hellworld would look almost like our normal world: animals live and die, people live normal and (on time-average) happy lives and also die. But a counterfactual observer able to feel the qualia of any living being would find it infinitely more hellish than our world.
We could also be living in such a hellworld right now and not know it.
The main reason it can’t be described is that most people don’t believe in qualia, and the observable characteristics of this world are not hellish. Beings in such a world could also be called reverse p-zombies, as they have a much stronger capacity for “experiencing” than ordinary humans.
Indeed. But you’ve just described it to us ^_^
What I’m mainly asking is “if we end up in world W, and no honest AI can describe to us how this might be a hellworld, is it automatically not a hellworld?”
It looks like examples don’t work here, as any example is an explanation, so it doesn’t count :)
But in some sense it could be similar to Gödel’s theorem: there are true propositions which can’t be proved by the AI (and an explanation could be counted as a type of proof).
Ok, another example: there are bad pieces of art; I know they are bad, but I can’t explain why in formal language.
That’s what I’m fearing, so I’m trying to see if the concept makes sense.