Romeo and I ended up chatting. Some takeaways I could remember a few days after the fact:
There are a few levels of abstraction I could have engaged with when thinking about this question:
One is the ‘object-level’ of ‘is this particular variant of utilitarianism cogent?’
A higher-level question is: "Notice that I am the sort of process that generated this question. Why did I do that? What elements of the ancestral environment generated these intuitions?"
In the ancestral environment, there *is* something of an upper bound on how good things can possibly get – I can be prosperous and have lots of children. But my adaptation-executor-process sees that as way less good than getting killed forever is bad. Some of my intuitions about the OP might be a carryover from that. How should I think about that?