For one, status-seeking is a zero-sum game and only indirectly causes overall gains. The world would be a much better place if people actually cared about things like saving the world or even helping others, and put a little thought into it.
Also, mismatches between our consciously held goals and our behavior cause plenty of frustration and unhappiness, as in the case of the person who keeps stressing out because their studies don’t progress.
If I actually cared about saving the world and about conserving my resources, it seems like I would choose some rate of world-saving A.
If I actually cared about saving the world, about conserving my resources, and about the opinion of my peers, it seems like I would choose some rate of world-saving B. For reasonable scenarios, B would be greater than A, because I can also get respect from my peers: when demand rises and the supply curve stays fixed, the quantity supplied increases.
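To make the supply-and-demand point concrete, here is a toy sketch, purely illustrative, with assumed linear curves and made-up numbers: shifting the demand curve outward while holding the supply curve fixed raises the equilibrium quantity.

```python
# Toy illustration (assumed linear curves, made-up numbers): when the demand
# curve shifts out and the supply curve stays fixed, the equilibrium quantity
# supplied increases.

def equilibrium(a, b, c, d):
    """Demand: Q = a - b * P; supply: Q = c + d * P. Returns (price, quantity)."""
    price = (a - c) / (b + d)
    return price, c + d * price

# "Demand" for world-saving work: intrinsic caring only vs. caring plus peer respect.
p_a, rate_a = equilibrium(a=10, b=1, c=2, d=1)  # rate A: intrinsic demand only
p_b, rate_b = equilibrium(a=14, b=1, c=2, d=1)  # rate B: demand shifted out by status rewards

print(rate_a, rate_b)  # 6.0 8.0 -> B > A on the same supply curve
```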
That is, I understand that status-seeking causes faking behavior, which is a drain. (Status conflicts also lower supply, but it’s not clear by how much.) I don’t think it’s clear that the mechanism of status-seeking conflicts with actually caring about other goals, or detracts from them on net.
I’m sure you’ve considered that “X is a zero-sum game” doesn’t always mean that you should unilaterally avoid playing that game entirely. It does mean you’ll want to engineer environments where X taxes at a lower rate.
Why do you want to save the world? To allow people, humans, to do what they like to do for much longer than they would otherwise be able to. Status-seeking is one of those things that people are especially fond of.
Ask yourself: would you have written this post after a positive Singularity? Would it matter if some people were engaged in status games all day long?
What you are really trying to tell people is that they want to help solve Friendly AI because it is universally instrumentally useful.
If you want to argue that status-seeking is bad no matter what, under any circumstances, then you have to explain why that is so. And if you are unable to ground utility in something that is physically measurable, like the maximization of certain brain states, then I don’t think you could convincingly demonstrate it to be a relatively undesirable human activity.
Umm. Sure, status-seeking may be fine once we have solved all possible problems and are living in a perfect utopia. But that’s not very relevant if we want to discuss the world as it is today.
It is very relevant, because the reason we want to solve Friendly AI in the first place is to protect our complex values, given to us by the Blind Idiot God.
If we’re talking about Friendly AI design, sure. I wasn’t.
But if status-seeking is what you really want, as evidenced by your decisions, how can you say it’s bad that you do it? Can’t I just go and claim any goal you’re not optimizing for as your “real” goal you “should” have? Alternatively, can’t I claim that you only want us to drop status-seeking to get rid of the competition? Where’s your explanatory power?
By the suffering it causes, and also by the fact that whenever I have realized I’m doing it, I’ve stopped doing (that particular form of) it.