That’s one element in what started my line of thought. I was imagining situations where I would consider the exchange of human lives for non-human objects. How many people’s lives would be a fair exchange for a pod of bottlenose dolphins? A West Virginia mountaintop? An entire species of snail?
I think what I’m getting at is that there’s a difference between human preferences and human preference for other humans. And by human preferences, I mean my own.
That is one objection to Coherent Extrapolated Volition (CEV): that human values are too diverse.
Though the space of possible futures an AGI could spit out is VERY large compared to the space of futures people would want, even taking the diversity of human values into consideration.
We’re humans, so we build AIs to maximize human utility. If squirrels were building AIs, those AIs ought to maximize what’s best for squirrels.
There’s nothing inherently better about people vs paperclips vs squirrels. But since humans are making the AI, we might as well make it prefer people.