I don’t assign any negative utility to ending a life. It’s not like it’s something you can experience. I suppose it might be, since all your experiences are about a change in brain state, but still, it lasts an instant. It can’t be that bad. As such, of course I’d want an AI that would kill us all.
In general, I would consider it odd for someone to find killing that bad. At that point, it would probably be better to just design an AI to destroy the world, because if you don’t, a large number of people would be born and die in the intervening time while waiting for FAI.
I don’t assign any negative utility to ending a life.
Um … most people don’t want to die. That in and of itself would seem to suggest you may possibly have gone wrong somewhere in this line of reasoning.
More generally, are you a strict hedonic utilitarian? Because I can see how focusing solely on pleasure etc. could lead to that conclusion, but I think most LWers are closer to preference utilitarians.
That in and of itself would seem to suggest you may possibly have gone wrong somewhere in this line of reasoning.
It would suggest it, but it’s not very strong evidence. Most people are okay with factory farming. Most people put little value on things they don’t consider themselves responsible for.
More generally, are you a strict hedonic utilitarian?
I am, but I could understand valuing anything you can experience. Valuing things that can’t be experienced just seems silly. Would you value ice cream independent of your ability to taste it?
It would suggest it, but it’s not very strong evidence. Most people are okay with factory farming. Most people put little value on things they don’t consider themselves responsible for.
OTOH, in my experience at least, people become a lot less biased when it comes to themselves. Few people would want to be factory farmed ;)
Would you value ice cream independent of your ability to taste it?
Personally? No. But I can imagine a paperclipper that would gladly sacrifice its life to save the paperclip collection.
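To make the paperclipper point concrete, here’s a minimal toy sketch (my own illustration; the outcomes, numbers, and function names are all made up): an agent whose utility is defined over world-states can prefer an outcome it won’t be around to experience, while a purely hedonic agent only counts experiences it will actually have.

    # Toy illustration: utility over world-states vs. utility over
    # one's own experiences. All numbers are made up for the example.

    # Each outcome: (paperclips left in the world, agent's remaining hedonic experience)
    outcomes = {
        "sacrifice_self": (1_000_000, 0),   # agent dies, collection saved
        "save_self":      (0,         50),  # agent lives, collection destroyed
    }

    def paperclipper_utility(outcome):
        """Defined over the world-state; the agent's own survival is irrelevant."""
        paperclips, _ = outcome
        return paperclips

    def hedonic_utility(outcome):
        """Defined only over experiences the agent will actually have."""
        _, experience = outcome
        return experience

    best_for_paperclipper = max(outcomes, key=lambda a: paperclipper_utility(outcomes[a]))
    best_for_hedonist = max(outcomes, key=lambda a: hedonic_utility(outcomes[a]))

    print(best_for_paperclipper)  # sacrifice_self
    print(best_for_hedonist)      # save_self

The only difference is which part of the outcome each utility function reads: nothing in the paperclipper’s function references its own experience stream, which is exactly what lets it endorse an outcome it will never experience.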
OTOH, in my experience at least, people become a lot less biased when it comes to themselves. Few people would want to be factory farmed ;)
Do they, or does their bias just change?
In my experience, people value themselves vastly more than they value other people. Ergo, if you replace them with someone else, they consider it a huge loss in utility.
It’s possible that rationality tends to shift you away from your natural indifference to large groups and other “far” situations. There’s a post on this, “Shut up and divide”. But there do seem to be genuine biases leading people to underestimate the animals’ suffering, rather than just knowing about it and not caring.
I don’t assign any negative utility to ending a life. It’s not like it’s something you can experience.
What? Are you talking about decision theoretic utility or hedonic utility? You can’t experience any decision-utility, and you can’t make decisions based on hedonic utility (directly). What are you trying to do?
What? Are you talking about decision theoretic utility or hedonic utility? You can’t experience any decision-utility, and you can’t make decisions based on hedonic utility (directly). What are you trying to do?
I don’t assign decision theoretic utility to it. It doesn’t seem like something could actually matter unless someone experiences it.