Ok, I will rework it for improved clarity; but not all the options I chose have deep philosophical justifications. As I said, I was aiming for an adequate resolution, with people’s internal meta-values working as philosophical justifications for their own resolution.
As for the specific case that tripped you up: I wanted to distinguish between endorsing a reward or value, endorsing its negative, and endorsing not having it. “I want to be thin” vs “I want to be fat” vs “I don’t want to care about my weight”. The first one I track as a positive endorsement of R, the second as a positive endorsement of -R, the third as a negative endorsement of R (and of -R).
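For concreteness, here is a minimal sketch of one way those three cases could be encoded; the `RewardAttitude` structure and every name in it are illustrative assumptions, not notation from the agenda itself:

```python
from dataclasses import dataclass
from enum import Enum


class Endorsement(Enum):
    POSITIVE = 1   # endorsing having this reward
    NEGATIVE = -1  # endorsing not having this reward


@dataclass(frozen=True)
class RewardAttitude:
    reward: str       # label for the underlying reward/value R
    sign: int         # +1 for R itself, -1 for its negative -R
    endorsement: Endorsement


# "I want to be thin": positive endorsement of R
thin = RewardAttitude("weight", +1, Endorsement.POSITIVE)

# "I want to be fat": positive endorsement of -R
fat = RewardAttitude("weight", -1, Endorsement.POSITIVE)

# "I don't want to care about my weight": negative endorsement
# of R (and of -R)
indifferent = [
    RewardAttitude("weight", +1, Endorsement.NEGATIVE),
    RewardAttitude("weight", -1, Endorsement.NEGATIVE),
]
```

The point of the separate `sign` field is that endorsing -R is a different attitude from withdrawing endorsement of R, which is what the third case expresses.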
But I’ll work on it more.
Thanks!
> not all the options I chose have deep philosophical justifications.

Just to be clear, when I said that each section would be served by having a philosophical justification, I didn’t mean that it would necessarily need to be super-deep; just something like “this seems to make sense because X”, which e.g. sections 2.4 and 2.5 already have.