I think the idea is that most areas in the space contain barely any conscious experience. If you have some rogue AI optimizing all matter for some criterion x, there’s no reason why the resulting structures should be conscious. (To what extent the AI itself would be conscious is actually discussed in this other comment thread.)
But the objection is a good one: I think “how well values are satisfied” was not the right description of the axes. Probably something more like: if one of the axes is a quantity y you care about, such as physical temperature (to pick something mundane), then y can take many different values, but only a tiny subset of those will be to your liking; most would mean you die immediately. (Note that I’m only trying to paraphrase; this is not my model.) If most values work like this, you get the above picture.
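To make the “tiny subset” point concrete, here’s a rough back-of-the-envelope sketch. The specific bounds are my own illustrative assumptions, not part of the original argument:

```python
# Illustrative sketch (my own rough numbers): how small a slice of
# "possible temperatures" is compatible with unprotected human survival.

survivable_low_k = 273.0   # ~0 °C, rough lower bound (assumption)
survivable_high_k = 323.0  # ~50 °C, rough upper bound (assumption)

# Arbitrary range of physically attainable temperatures, from near
# absolute zero up to roughly the Sun's surface (~5,800 K).
possible_low_k = 0.0
possible_high_k = 5800.0

fraction = (survivable_high_k - survivable_low_k) / (possible_high_k - possible_low_k)
print(f"Survivable fraction of the range: {fraction:.3%}")  # ~0.862%
```

Even with these generous bounds, well under 1% of the range is survivable; narrower ranges or additional independent values shrink the hospitable region further.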
See also Value is Fragile.