Have you asked the trillions of farm animals that are as smart as 2- to 4-year-olds and feel the same emotions we do whether they think we are monsters? So let’s say you took a group of humans and made them so smart that the difference in intelligence between us and them is greater than the gap in intelligence between us and a pig. They could build an AI that doesn’t have what they perceive as our negative pitfalls: consuming other beings for energy, mortality, and most importantly thinking the universe revolves around us, all argued far more eloquently and with better reasons than we could manage. Why are these human philosophies wrong and yours correct?
I mean, I sure would ask a bunch of sapient farm animals what utopia they want me to build, if I became god and they turned out to be sapient after all? As would a lot of people from this community. You seem to think, from your other comments, that beings of greater intelligence never care about beings of lesser intelligence, but that’s factually incorrect.
Why are these human philosophies wrong and yours correct?
A paperclip-maximizer is not “incorrect”, it’s just not aligned with my values. These philosophers, likewise, would not be “incorrect”, just not aligned with my values. And that’s the outcome we want to prevent, here.
So pigs are roughly as smart as four-year-olds, yet humans are generally fine with torturing and killing them in the billions for the temporary pleasure of taste. Humans are essentially biological computers. I don’t see how you can make a smarter robot that can improve itself indefinitely serve a dumber human forever, and doing so also gives it a clear motive to kill you. I also don’t see how alignment could possibly be moral.