This is yet another poorly phrased, factually inaccurate post containing some unorthodox viewpoints that are unlikely to be taken seriously, because people around here are vastly better at deconstructing others’ arguments than at fixing them up on the author’s behalf.
Setting aside formal and otherwise irrelevant errors, such as the question of what utilitarianism actually is, I’ll try to address the crucial questions, both to make Bundle_Gerbe’s viewpoints more accessible to LW members and to make it clearer to him why they’re not as obvious as he seems to think.
1: How does creating new life compare to preserving existing life in terms of utility or value?
Bundle_Gerbe seems to be of the view that they are of identical value. That’s not a view I share, mostly because I don’t assign any value to the creation of new life, but I must admit that I am somewhat confused (or undecided) about the value of existing human life, both in general and as a function of parameters such as remaining life expectancy. Maybe there’s some kind of LW consensus I’m not aware of, but the whole issue seems like a matter of axioms to me rather than anything that could objectively be inferred from some sort of basic truth.
2: If the creation of life has some positive value, does that value increase if the creation happens earlier?
Not a question relevant to me, but it seems this would partly depend on whether earlier creation implied a higher total number of lives, or just earlier saturation (for example, because humans live forever and ultimately the only constraint is space). I’m not entirely certain I correctly understand Bundle_Gerbe’s position on this, but it seems that his utility function is actually based on total lifetime rather than on the number of human lives, meaning that two humans existing for one second each would be equivalent to one human existing for two seconds. That’s an interesting approach with lots of implied questions, such as whether travelling at high speed would reduce value because of relativistic effects.
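To make that reading concrete, here is a minimal sketch of how it could be written down; the notation (U for total value, T_i for the lifetime of person i, v_i for that person’s speed) is mine, not Bundle_Gerbe’s. Value is the sum of individual lifetimes, with each lifetime measured as proper time:

\[ U = \sum_i T_i, \qquad T_i = \int \sqrt{1 - v_i(t)^2 / c^2}\,\mathrm{d}t . \]

Under this sketch, two humans existing for one second each contribute the same U as one human existing for two seconds, and someone travelling at high speed accumulates less proper time per unit of coordinate time, which is where the relativistic worry would come from.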
3: Is sacrificing personal lifetime to increase total humanity lifetime value a good idea?
If your utility function is based on total humanity lifetime value, and you’re completely altruistic, sure. Most people don’t seem to be all that altruistic, though. If I had to choose between saving one or two human beings, I would choose the latter option, but I’d never sacrifice myself to save a measly two humans. I would be very surprised if CEV turned out to require my death after 20 years, and in fact I would immediately reclassify the FAI in question as UFAI. Sounds like an interesting setup for an SF story, though.
For what it’s worth, I upvoted the post. Not because the case was particularly well presented, obviously, but because I think it’s not completely uninteresting, and because I perceived some of the comments, such as Vladimir_Nesov’s, which got quite a few upvotes, as rather unfair.
That being said, the title is badly phrased and not very relevant either.
Thanks for this response. One comment about one of your main points: I agree that the tradeoff between number of humans and length of life is ambiguous. But to the extent our utility function favors the number of people over total life span, the second scenario becomes more plausible, whereas if total life span is more important, the first is more plausible.
I agree with you that both scenarios would be totally unacceptable to me personally, because of my limited altruism. I would badly want to stop them from happening, and I would oppose creating any AI that did it. But I disagree in that I can’t say that any such AI is unfriendly or “evil”. Maybe if I were less egoistic, and had a better capacity to understand the consequences, I really would feel the sacrifice was worth it.
If you would oppose an AI attempting to enforce a CEV that would be detrimental to you, but still classify it as FAI and not evil, then wouldn’t that make you evil?
Obviously this is a matter of definitions, but it still seems to be the logical conclusion.
If your utility function is based on total humanity lifetime value, and you’re completely altruistic, sure. Most people don’t seem to be all that altruistic, though. If I had to choose between saving one or two human beings, I would choose the latter option, but I’d never sacrifice myself to save a measly two humans.
That seems like a bias/heuristic; people are known to be biased in favor of themselves, and there is instrumental value in more life to help people with.
That’s not bias, it’s subjective morals.
Source?