That’s difficult because I don’t really believe in ‘terminal values’, so everything looks like a “new fact” that changes how my “ethics” should be applied. (ETA: Like, falling in love with a new girl or a new piece of music can look like learning a new fact about the world. This perspective makes more sense after reading the rest of my comment.) Once you change your ‘terminal values’ enough, they stop looking so terminal, and you start to develop a profound respect for moral uncertainty and the epistemic nature of shouldness. My morality is largely directed at understanding itself. So you could say that one of my ‘terminal values’ is ‘thinking things through from first principles’, but once you’re that abstract and that meta, it’s unclear what would count as that value changing, rather than, say, just a shift in emphasis relative to something else like ‘going meta’ or ‘justifications for values must be even better supported than justifications for beliefs’ or ‘arbitrariness is bad’. So it’s not obvious at which level of abstraction I should answer your question.
Like, your beliefs get updated constantly, whereas your methods only get changed during paradigm shifts. The thing is, once you move that pattern up a few levels of abstraction, to where your simple belief update is equivalent to another person’s paradigm shift, it gets hard to communicate in a natural way. Take the ‘levels of organization’ flavor of levels of abstraction: consider the difference between “I love Jane more than any other woman and would trade the world for her” and “I love humanity more than any other memeplex instantiation and would trade the multiverse for it”. It is hard for those two values to communicate with each other intelligibly; if they entered into an economy with each other, they’d be making completely different kinds of deals. Communication is difficult, and the inferential distance here is way too big.
To be honest, I think that though efforts like this post are well-intentioned, and thus should be promoted to the extent that they don’t give people an excuse to not notice confusion, Less Wrong really doesn’t have the necessary set of skills or knowledge to think about morality (ethics, meta-ethics) in a particularly insightful manner. Unfortunately, I don’t think this is ever going to change. But maybe five years’ worth of posts like this, at many levels of abstraction and drawing on many different sciences and perspectives, would lead somewhere? But people won’t even do that. Ahem.