It sounds like I might have skipped a few inferential steps in this post and/or chosen a bad title. Yes, I’m assuming that if we are selfish, then evolution made us that way. The post starts at the follow-up question “if we are selfish, how might that selfishness be implemented as a decision procedure?” (i.e., how would you program selfishness into an AI?) and then considers “what implications does that have for what our values actually are or should be?”
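To make the implementation question concrete, here is one naive way the decision procedure could be written down. This is purely a hypothetical sketch: `Agent`, `welfare`, and the dict-shaped world state are all invented placeholders, not anything from the post.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str

def welfare(world: dict, agent: Agent) -> float:
    # Stand-in for however the world assigns well-being to an agent.
    return world.get(agent.name, 0.0)

def selfish_utility(world: dict, me: Agent) -> float:
    # The index pointing at "me" is the whole trick: utility depends
    # only on my own slot in the world state, ignoring everyone else.
    return welfare(world, me)

me = Agent("me")
print(selfish_utility({"me": 3.0, "you": 100.0}, me))  # 3.0 -- "you" is ignored
```

The question the post is asking is where that hard-coded index comes from, since nothing in a generic utility-maximizing framework supplies it for free.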
The point of my post is that if you start with agents holding random preferences, the ones we would designate as selfish are the ones that survive. So what we intuitively think of as selfishness, a me-first utility function whose index points at oneself, arises naturally from non-indexical starting points (evolving agents with random preferences).
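A minimal toy simulation of that selection story, with every detail (the two-goal model, population size, mutation rate) invented for illustration: agents start with a random weight split between a self-directed goal and an arbitrary non-indexical goal, and fitness-proportional selection drives the population toward the me-first end.

```python
import random

# Toy sketch, not a real model: each agent holds a random preference
# split between a self-directed goal (its own survival) and an
# arbitrary non-indexical goal. Effort spent on the self-directed
# goal is what determines reproductive fitness.

POP, GENS = 200, 50

# Initial population: a random weight in [0, 1) on the "self" goal.
population = [random.random() for _ in range(POP)]

for _ in range(GENS):
    # Fitness-proportional selection: in this toy model, fitness is
    # simply the effort the agent spent on its own survival.
    population = random.choices(population, weights=population, k=POP)
    # Small mutation keeps preferences drifting.
    population = [min(1.0, max(0.0, w + random.gauss(0, 0.02)))
                  for w in population]

print("mean weight on the self-directed goal:", sum(population) / POP)
# Starts near 0.5 and climbs toward 1: the preferences we would call
# "selfish" are the ones that survive.
```

Nothing hinges on these particular numbers; any setup where effort spent on self-preservation feeds back into reproduction produces the same drift.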
If selfishness arose this way, then it is less mysterious what it actually is, and we could start looking at evolutionarily stable decision theories or the like. You don’t even need actual evolution; it’s enough to ask “which preferences would be advantageous should the AI be subject to evolutionary pressure?”