Peterdjones, too, if I’m not mistaken.
In what sense ‘should’ individuals be motivated by their CEV rather than by their non-CEV preferences? Wouldn’t breaking down the word ‘should’ in that previous sentence give you “Individuals want to achieve a state whereby they want to achieve what a perfect version of themselves would want to achieve rather than what they want to achieve”? Isn’t that vaguely self-defeating?
You may have to be a bit more specific. What in the FAI’s code would look different between world 1 and world 2?
Could you please point me in the direction of some discussion about ‘extrapolated morality’ (unless you mean CEV, in which case there’s no need)?
Game 1: I take the second option. I want 1000 years of exquisite bliss much more than I don’t want to have a box of hornets in my hand.
Game 2: First option. I don’t value perfect simulations of myself at all, and a billion dollars is pretty sick.
I have no preference regarding what perfect simulations of me would choose, since I don’t care about them at all, though I would assume they make the same choices I would, since they have the same values.
How does increasing the amount or length of time change the question?
Social consequences aside, is it morally correct to kill one person to create a million people who would not have otherwise existed?
How would a world in which it is morally correct to kill one person in order to create a million people look different than a world in which this is not the case?
General Anti-Leftism: People are not equally competent, nor virtuous, nor do they deserve equal social power as compensation for their lack of ability at accruing institutional power (though starting positions on such capital may best be equalized).
Could I ask you to taboo ‘deserve’ in this context?
Nope, that’s a pretty accurate description of my sensory memory of the experience. :p
You’ve never licked a doorknob just to see what it tastes like?
Am I cheating by already having a list of things to study and a large collection of papers to read?
Not really, but only because the example you gave was Astronomy. If we’re talking specifically about Existentialism (although I guess the conversation has progressed a bit past that), I’m not entirely sure how one would come up with a list of readings and concepts without turning to the writings of the Central Figures. (I’m not even sure it’s legitimate to call Camus an ‘early’ thinker, since the Golden Age of Existentialism was definitely when he and Sartre were publishing.)
I would very much agree with your assessment for many if not most scientific fields, but in this particular instance, I think disregarding the Central Figures would hurt your knowledge and understanding of the topic.
Yeah, the third one I linked to isn’t really existentialism either, now that I think about it...
I would want you to change the world so that what I want is actualized, yes. If you wouldn’t endorse an alteration of the world towards your current values, in what sense do you really ‘value’ said values?
I’m going to need to taboo ‘value’, aren’t I?
Then again, there’s no underlying reason for me to accept that I should accept my current collection of habits and surface-level judgments and so forth as the best implementation of my values, either.
Isn’t this begging the question? By ‘my values’ I’m pretty sure I literally mean ‘my current collection of habits and surface-level judgements and so forth’.
Could I have terminal values of which I am completely unaware in any way, shape, or form? How would I even recognize such things, and what reason do I have to prefer them over ‘my values’?
Did I just go in a circle?
More generally, it seems unlikely to me that the system which best implements my values would feel comfortable or even acceptable to me, any more than the diet that best addresses my nutritional needs will necessarily conform to my aesthetic preferences about food.
At first I thought this comparison was absolutely perfect, but I’m not really sure about that anymore. With a diet, you have other values to fall back on which might still make adopting an aesthetically displeasing regimen something you should do. With CEV, it’s not entirely clear to me why I would want to prefer CEV values over my own current ones, so there’s no underlying reason for me to accept that I should accept CEV as the best implementation of my values.
That got a little complicated, and I’m not sure it’s exactly what I meant to say. Basically, I’m trying to say that while you may not be entirely comfortable with a better diet, you would still implement it for yourself since it’s a rational thing to do, whereas if you aren’t comfortable with implementing your own CEV, there’s no rational reason to compel you to do so.
How so?
Sucky. As in: “That movie was really sucky.”
Ah, I see. I’m pretty sure you’ve run up against the “ought implies can” issue, not the issue of demandingness. IIRC, this is a contested principle, but I don’t really know much about it other than Kant originally endorsing it. I think the first part of Larks’ answer gives you a good idea of what consequentialists would say in response to this issue.
For example, the best thing to do in situation X might be A. A may be so difficult or require so much sacrifice that B might be preferable, even if the overall outcome is not as good.
Maybe I’m reading this wrong, but it seems like A is the “commonsense” interpretation of what ‘morality’ means. I honestly don’t know what you mean by B, though. If the overall outcome of B is not as good as A, in what way does it make sense to say we should prefer B?
Further, plenty of contemporary Moral Philosophers deny that “applicability” (I believe the phil-jargon word is “demandingness”) has any relevance to morality. See Singer, or better yet, Shelly Kagan’s book The Limits of Morality for a more in-depth discussion of this.
How is Searle’s actual response to the accusation that he has just dressed up the Other Minds Problem at all satisfactory? Does anyone find it convincing?