For one, the world would be a much better place if people actually cared about things like saving the world, or even just helping others, and put a little thought into it.
Why do you want to save the world? To allow people, humans, to do what they like to do for much longer than they would otherwise be able to. Status-seeking is one of those things that people are especially fond of.
Ask yourself, would you have written this post after a positive Singularity? Would it matter if some people were engaged in status games all day long?
What you are really trying to tell people is that they should want to help solve friendly AI because it is universally instrumentally useful.
If you want to argue that status-seeking is bad no matter what, under any circumstances, you have to explain why that is so. And if you are unable to ground utility in something physically measurable, like the maximization of certain brain states, then I don’t think you could convincingly demonstrate that it is a relatively undesirable human activity.
Umm. Sure, status-seeking may be fine once we have solved all possible problems and we’re living in a perfect utopia. But that’s not very relevant if we want to discuss the world as it is today.
It is very relevant, because the reason we want to solve friendly AI in the first place is to protect the complex values given to us by the Blind Idiot God.
If we’re talking about Friendly AI design, sure. I wasn’t.