Perhaps, but she’s far from alone. I’m mostly with her on this one; letting people live in ignorance we can cure just so they can appreciate knowledge more when it’s eventually obtained makes about as much sense to me as letting them suffer from illness we can cure just so they can appreciate health.
It seems to me that Eliezer’s post was a list of things that typically seem, in the real world, to be components of people’s happiness, but are commonly left out when people propose putative (fictional or futuristic) utopias.
It seemed to me that Eliezer was saying “If you propose a utopia without any challenge, humans will not find it satisfying”, not “It’s possible to artificially provide challenge in a utopia”.
Sure, at that level of abstraction, we’re all in agreement: challenge is better than the absence of challenge.
The question is whether this particular form of challenge is better than the absence of this particular form of challenge.
Just to make the difference between those two levels of abstraction clear: were I to argue, from the general claim that challenge is good, that creating a world where people experience suffering and death so that we can all have the challenge of defeating suffering and death is therefore good, I would fully expect that the vast majority of LW would immediately reject that argument. They would point out, rightly, that just because a general category is good, does not mean that every instance of that category is good, and they would, rightly, refocus the conversation on the pros and cons, not of challenge in general, but of suffering and death in particular.
Similarly, the discussion in this comment thread is not about the pros and cons of challenge in general, but of ignorance in particular.
I agree with you (and Alicorn), but “AAAAAAAAAAAAAAAAAAAAAAAH” doesn’t make for a very strong argument.
In this particular context, “that sounds like something I wouldn’t enjoy at all” is a reasonable argument, since the whole point is to set up a world that’s optimally enjoyable. “AAAAAAAAAAAAH” is just the extreme form of that argument.
Yeah, I get that, but I was under the impression that Alicorn was saying not merely, “I personally wouldn’t enjoy that at all, YMMV”, but “I wouldn’t enjoy that at all and neither would most other people”. I could’ve been reading too much into her statement, though.
I’m pretty sure the more accurate actual-words form of the argument is more like “that would be torture for me” than “I think most people would prefer that not to happen”. Sufficiently many dust specks might be > torture, but unlike dust specks, torture never belongs in a utopia.
Alicorn hates surprises, and I’ve never known her to assume that this means everyone else, or a lot of other people, must also hate surprises.
Fair enough, I apologize for reading too much into her words.
Taboo “surprise”, perhaps? I wouldn’t like to already know all the sensory inputs I’m going to receive in the next month, but maybe Alicorn is interpreting “surprise” according to a narrower definition. (Though Eliezer does seem to value surprise more than some people here; see, e.g., his aversion to non-rot13’d spoilers.)
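(For anyone unfamiliar with the rot13 convention mentioned above: it’s the trivial 13-letter rotation cipher LW uses to hide spoilers in plain sight. A minimal Python sketch, purely for illustration, using the standard library’s built-in rot_13 codec:)

    import codecs

    # rot13 shifts each letter 13 places, so encoding and decoding are
    # the same operation: applying it twice round-trips to the original.
    hidden = codecs.encode("This sentence is a spoiler.", "rot_13")
    print(hidden)                           # Guvf fragrapr vf n fcbvyre.
    print(codecs.encode(hidden, "rot_13"))  # This sentence is a spoiler.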
I also would not like to know all of my sensory inputs in advance. I don’t actually believe that condition is coherent. That said, I would also not like to know that an accurate prediction of all my sensory inputs for the next month is sitting in a file somewhere that I am not permitted to see, even though in that case all of those inputs would come as a surprise.