The second is that I challenge you to define “pleasure,” “happiness,” or “lack of suffering.”
Can you explain why you’re giving me this challenge? Because I don’t understand: if I can’t define them except vaguely, how does that strengthen your case that we should care about technology and not about these values?
As far as I understand him, he is saying that technological progress can be quantified, while all your ideas of how to rate world states either can’t be quantified, and therefore can’t be used for rating, or run into problems and contradictions.
He further seems to believe that technological progress leads to “complexity,” which in turn leads to other kinds of values. Even if those values are completely alien to us humans and to our values, they will still be intrinsically valuable.
His view of a universe where an “unfriendly” AI takes over is a universe in which there will be a society of paperclip maximizers and their offspring. Those AIs will not only diverge from maximizing paperclips and evolve complex values, but will also pursue various instrumental goals, as exploration will never cease. And pursuing those goals will satisfy their own concept of pleasure.
And he believes that having such a culture of paperclip maximizers having fun while pursuing their goals is no less valuable than having our current volition extrapolated, which might end up being similarly alien to our current values.
In other words, there is one thing that we can rate, and that is complexity. If we can increase it, then we should do so; never mind the outcome, it will be good.
Correct me if I misinterpreted anything.
I couldn’t have said it better myself.
Would you change your mind if I could give a precise definition of, say, “suffering,” and show you two paths to the future that end up with similar levels of technology but different amounts of suffering? I’ll assume the answer is yes, because otherwise why would you have given me that challenge?
What if I said that I don’t know how to define it now, but I think if you made me a bit (or a lot) smarter and gave me a few decades of subjective time to work on the problem, I could probably give you such a definition and tell you how to achieve the “less suffering, same tech” outcome? Would you be willing to give me that chance (assuming it was in your power to do so)? Or are you pretty sure that “suffering” is not just hard to define, but actually impossible, and/or that it’s impossible to reduce suffering to any significant extent below the default outcome, while keeping technology at the same level? If you are pretty sure about this, are you equally sure about every other value that I could cite instead of suffering?
Or are you pretty sure that “suffering” is not just hard to define, but actually impossible, and/or that it’s impossible to reduce suffering to any significant extent below the default outcome, while keeping technology at the same level?
Masochist: Please hurt me!
Sadist: No.
If you are pretty sure about this, are you equally sure about every other value that I could cite instead of suffering?
Not sure, but it might be impossible.
What if I said that I don’t know how to define it now, but I think if you made me a bit (or a lot) smarter...
If you were to uplift a chimpanzee to the human level and tell it to figure out how to reduce suffering for chimpanzees, it would probably come up with ideas like democracy, health insurance, and supermarkets. The problem is that chimpanzees wouldn’t appreciate those ideas...
XiXiDu, I’m aware that I’m hardly making a watertight case that I can definitely do better than davidad’s plan (from the perspective of his current apparent values). I’m merely trying to introduce some doubt. (Note how Eliezer used to be a technophile like David, and said things like “But if it comes down to Us or Them, I’m with Them,” but then changed his mind.)