Reminds me of some discussions I had long ago.
The general principle was: Take some human idea or concept and drive it to its logical conclusion. To its extreme. The result: It either became trivial or stopped making sense.
The reason for this is that sometimes you can’t separate out components of a complex system and try to optimize them in isolation without losing important details.
You can’t just optimize happiness. It leads to wireheading, and that’s not what you want.
You can’t just optimize religious following. It leads to crusades and witch hunts.
You can’t just maximize empathy, harmless as that sounds. Humans can empathize with anything. There are monks who try to avoid hurting bugs (good that they didn’t know of bacteria). Empathy driven to the extreme leads to empathy with other beings (here: simulated, even lied-about, life forms) that doesn’t match up with your complex values.
Don’t hurt your complex values.
Somewhere on the way from
your future self
your kin
your livestock
children
innocents
strangers
foreigners
enemies
the biosphere
higher animals
lower animals
plants
bacteria
hypothetical life forms
imaginary life forms
you should consider that your empathy has diverged from being well integrated with your other values.
(yes, it’s not linear but a DAG)
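To make that parenthetical concrete: the categories above need not form a single chain; some of them (say, livestock and strangers) just aren’t comparable. Below is a minimal Python sketch of such a partial order. The node names are taken from the list, but the edges are purely illustrative assumptions, not a claim about how anyone actually ranks them.

    # Hypothetical partial order over targets of empathy (names from the list above).
    # An edge A -> B means "empathy with A comes more readily than empathy with B".
    # The edges are illustrative assumptions only.
    empathy_dag = {
        "your future self": ["your kin"],
        "your kin": ["your livestock", "children"],
        "children": ["innocents"],
        "innocents": ["strangers"],
        "strangers": ["foreigners", "the biosphere"],
        "foreigners": ["enemies"],
        "the biosphere": ["higher animals"],
        "higher animals": ["lower animals"],
        "lower animals": ["plants", "bacteria"],
        "bacteria": ["hypothetical life forms"],
        "hypothetical life forms": ["imaginary life forms"],
    }

    def further_out(dag, start):
        """Every category reachable from `start`, i.e. 'further out' along the edges."""
        seen, stack = set(), [start]
        while stack:
            for nxt in dag.get(stack.pop(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    # "your livestock" and "enemies" sit on separate branches: neither is reachable
    # from the other, which is what makes this a DAG rather than a line.
    print(further_out(empathy_dag, "strangers"))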
These monks still exist. They have been informed.
Sure they know now.
But when the monks’ belief structure formed, they didn’t know. Now that it is a given, they incorporate it into a suitable compartment, as the rest is obviously stable.
They (not just the monks—many of the ordinary practitioners) put filters on water faucets to drink as few bacteria as possible; don’t eat root crops because harvesting them disturbs the soil too much; may refuse antibiotics; and in general, do everything they think is within reason to minimize killing.
There’s probably some compartmentalization, but not a huge amount in that area.
I take this as a piece of factual information. I do not assume that you want to imply some lesson with it.
Yeah, I’m just providing additional information because your model of the Jains seemed incomplete.
Jainism
Very incomplete. I just relayed the anecdote about the monks to add color.
Interesting. Jain texts should be read by vegetarians to put them into perspective, I guess.
Coscott’s philosophy is more or less orthogonal to the choice of empathy. E.g., a “Coscottist” might only care about her kin, but she will care more about versions of her kin embedded in simple universes.
Would you mind elaborating on why this reminded you of those discussions?
Is it because in the first half of the dialogue I follow a chain where I start with a belief, tweak it slightly little by little, and end up with a similar belief in a different circumstance?
No. Because you explored the extremes of empathy toward beings.
You don’t follow a path going from less to more extreme as I outlined, but you explore an extreme in a different aspect nonetheless.
Oh, ok, I get the connection now, thanks.
My main point was not that I care about non-existing things in any way that conflicts with caring about existing things. My point was that believing in the concept of existence is not necessary for having preferences. I suppose I would agree with
A: Hmm, I guess I want [happiness of non-existing agents] too. However, that is negligible compared to my preferences about things that really do exist.
If not for the fact that I don’t think existence has any meaning.