Why should I care about G695? In particular, why should I prefer it over G696, which is the CEV of all humans with volition alive in 2010, or over G697, which is the CEV of myself?
So your point is that there is no point in caring about anything. Do you call yourself a nihilist?
I then investigate the two unrelated phenomena individually and eventually come to the conclusion that there is one reality shared by all humans, but a separate morality for each human.
Would you call yourself a naive realist? What about people on LSD, schizophrenics, and religious people who see their Almighty Lord Spaghetti Monster in what you would call clouds? You surely mean that there is one reality shared by all humans who are “sane”.
Suppose you’re getting into a car, and you’re wondering whether you will get into a crash. The optimistic view is that you will definitely not crash. The pessimistic view is that you will definitely crash. Neither of these is right.
I would say the optimistic view is saying “There is probably/hopefully no crash”. But let’s not fight over words.
You’re constructing a universal CEV. It’s not an already-existing ontologically fundamental entity. It’s not a thing that actually exists.
So your point is that there is no point in caring about anything. Do you call yourself a nihilist?
No, I care about things. It’s just that I don’t think that G695 (assuming it’s defined—see below) would be particularly humane or good or desirable, any more than (say) Babyeater morality.
Would you call yourself a naive realist?
Certainly not—hence “eventually”. Science requires interpreting data.
Edit: oh, sorry, forgot to address your actual point.
At a certain point, the working model of reality begins to predict what the insane will claim to perceive and how those errors come about.
I would say the optimistic view is saying “There is probably/hopefully no crash”. But let’s not fight over words.
Very well. Let us assume that (warning: numbers just made up) one in every 100,000 car trips results in a crash. The G698 view says “The chances of a crash are low.” The G699 view says “The chances of a crash are high.” The G700 view says “The chances of a crash are 1/100,000.” I advocate the G700 view, and assert that believing G698 or G699 interferes with believing G700.
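To make the G700 claim concrete, here is a minimal sketch in Python (not something either participant wrote; the probability matches the made-up 1/100,000 figure above, and the utilities are equally made-up stand-ins). The point is only that a decision rule can consume the number, while the labels “low” and “high” give it nothing to work with:

```python
# A minimal sketch of why the numeric G700 view can feed a decision rule
# while the qualitative G698/G699 labels cannot. All numbers are made up,
# like the figure quoted above.

P_CRASH = 1 / 100_000    # descriptive belief: probability of a crash per trip
U_CRASH = -100_000       # normative input: how bad a crash is to me
U_ARRIVE = 10            # normative input: how good arriving is to me

def expected_utility_of_driving(p_crash: float) -> float:
    """Combine the descriptive part (a probability) with the normative
    part (utilities) into a single score for the action 'drive'."""
    return p_crash * U_CRASH + (1 - p_crash) * U_ARRIVE

def decide(p_crash: float) -> str:
    # Drive iff driving beats staying home, whose utility we fix at 0.
    return "drive" if expected_utility_of_driving(p_crash) > 0 else "stay home"

print(decide(P_CRASH))   # -> "drive" (expected utility is about -1 + 10 = +9)
```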
Does the CEV of humankind exist?
I personally don’t think the extrapolated volition of humanity coheres, but I have the impression that others disagree with me.
I would be very surprised, however, if the extrapolated volition of all volitional entities cohered and the extrapolated volition of all volitional humans did not.
At a certain point, the working model of reality begins to predict what the insane will claim to perceive and how those errors come about.
What if you can’t predict?
I advocate the G700 view, and assert that believing G698 or G699 interferes with believing G700.
That is not how your brain works (a rough guess). Your brain thinks either G698 or G699 and then comes out with a decision about whether or not to drive. This heuristic process is called optimism or pessimism.
I like gensyms.
G101: Pavitra (me) cares about something.
What is the point in caring for G101?
Since I’m Pavitra, it doesn’t really matter to me if G101 has a point; I care about it anyway.
So there is no normative rule that Pavitra (you) should care about G101. It just happens; it could also be different, and it does not matter. That is what I call (moral) nihilism.
Don’t you ever ask why you should care (about anything, including your own caring about things)? (I am not suggesting you become suicidal, but on the other hand, there is no normative rule against it, so… hm… I still won’t.)
Their claims are basically noise. If a large group of crazies started agreeing with each other, that might be worth looking into more carefully.
A large group of crazies agreeing: Ever heard of religion, homeopathy, TCM et cetera?
Not natively, no. That’s why it requires advocacy.
You care about things. I assume you care about your health. In that case, you don’t want to be in a crash. So you’ll evaluate whether you should get into a car. If you get into the car, you are an optimist; if not, you are a pessimist.
Again, why is it important to advocate anything? -- Because you care about it. -- So what?
So there is no normative rule that Pavitra (you) should care about G101. It just happens; it could also be different, and it does not matter. That is what I call (moral) nihilism.
Don’t you ever ask why you should care (about anything, including your own caring about things)? (I am not suggesting you become suicidal, but on the other hand, there is no normative rule against it, so… hm… I still won’t.)
Again, it’s not that I don’t care about anything. I just happen to have a few core axioms, things that I care about for no reason. They don’t feel arbitrary to me (after all, I care about them a great deal!), but I didn’t choose to care about them. I just do.
A large group of crazies agreeing: Ever heard of religion, homeopathy, TCM et cetera?
Sure, and those are the claims I take the time to evaluate and debunk.
If you get into the car, you are a G701; if not, you are a G702.
Please explain the relationship between G701-702 and G698-700.
Again, it’s not that I don’t care about anything. I just happen to have a few core axioms, things that I care about for no reason. They don’t feel arbitrary to me (after all, I care about them a great deal!), but I didn’t choose to care about them. I just do.
And you believe that other minds have different core beliefs?
Sure, and those are the claims I take the time to evaluate and debunk.
I think we should close the discussion and take some time to think.
Please explain the relationship between G701-702 and G698-700.
“Chance is low” or “chance is high” are not merely descriptive; they also contain values. “Chance is low” --> probably safe to drive; “chance is high” --> probably not, based on the more fundamental axiom that surviving is good. And “surviving is good” is not descriptive; it is normative, because “good” is a value. You could also say instead “you should survive”, which is a normative rule.
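In standard decision-theoretic terms (a framing neither participant spells out; it is added here only for illustration), the two parts separate cleanly. The probability is the descriptive component, and the utilities carry the values:

$$\mathrm{EU}(\text{drive}) = p \cdot U(\text{crash}) + (1 - p) \cdot U(\text{arrive}), \qquad p = 1/100{,}000.$$

The G700 statement fixes only $p$, which is purely descriptive; “probably safe to drive” follows only once $U(\text{crash})$ and $U(\text{arrive})$, which encode “surviving is good”, are supplied and the sign of $\mathrm{EU}(\text{drive})$ is checked.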
And you believe that other minds have different core beliefs?
“Belief” isn’t quite right; it’s not an anticipation of how the world will turn out, but a preference for how the world will turn out. But yes, I anticipate that other minds will have different core preferences.
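This anticipation/preference split can be sketched in code as well (an illustration under assumed numbers, not anything from the thread): two agents can share the same anticipations, one reality, while holding different core preferences, a separate morality each.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    p_crash: float    # anticipation: what the agent expects (shared reality)
    u_crash: float    # core preference: how bad a crash is to this mind
    u_arrive: float   # core preference: how good arriving is to this mind

    def expected_utility_of_driving(self) -> float:
        # Same formula for every agent; only the inputs differ.
        return self.p_crash * self.u_crash + (1 - self.p_crash) * self.u_arrive

# Identical anticipations, different core preferences (names are made up):
cautious = Agent(p_crash=1/100_000, u_crash=-100_000, u_arrive=10)
daredevil = Agent(p_crash=1/100_000, u_crash=-5, u_arrive=10)

print(cautious.expected_utility_of_driving())   # ~9.0
print(daredevil.expected_utility_of_driving())  # ~10.0
```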
I think we should close the discussion and take some time to think.
Yes, okay.