My current model of this centers on status, similar to your last paragraph. I’ll flesh it out a bit more.
Suppose I build a net-power-generating fusion reactor in my garage. In terms of status, this reflects very badly on an awful lot of high-status physicists and engineers who’ve sunk massive amounts of effort and resources into the same goal and completely failed to achieve it. This applies even without actually building the thing: if I claim that I can build a net-power-generating fusion reactor in my garage, then that’s a status grab (whether I intend it that way or not); I’m claiming that I can run circles around all those people who’ve tried and failed. People react to that the way people usually react to status grabs: they slap it down. After all, if they don’t slap it down, then the status-grab “succeeds”—it becomes more plausible in the eyes of third parties that I actually could build the thing, which in turn lowers the status of all the people who failed to do so.
Now, flip this back around: if I want to avoid being perceived as making a status grab (and therefore being slapped down), then I need to avoid being perceived as claiming to be able to do anything really big-sounding. And, as you mention, the easiest way to avoid the perception of a grand claim is to honestly believe that I can’t do the grand thing.
From the inside, this means that we try to predict what we’ll be able to do via an algorithm like:
How much social status would I have if I did X?
How much social status do I have?
If the first number is much larger than the second, then I probably can’t do the thing.
Presumably this is not an especially accurate algorithm, but it is a great algorithm for avoiding conflict. It avoids making claims (even unintentionally) for which we will be slapped down.
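For concreteness, here's a minimal sketch of that heuristic in Python. Everything in it is made up for illustration: the status numbers, the threshold, and the function are hypothetical, and the point is only the comparison described above, not a claim about how the underlying cognition actually works.

```python
def feels_doable(my_status: float, status_if_i_did_x: float,
                 threshold: float = 1.5) -> bool:
    """Toy model of the 'can I do X?' intuition sketched above.

    Note what's missing: the actual difficulty of X never enters.
    The only input is the status gap, which is why this is a good
    algorithm for avoiding conflict and a bad one for forecasting.
    """
    # If doing X would put my status far above where it is now,
    # the prediction comes back "I probably can't do that."
    return status_if_i_did_x <= threshold * my_status

# A garage fusion reactor implies a huge status jump, so it feels impossible:
print(feels_doable(my_status=10, status_if_i_did_x=1000))  # False
# A modest project implies no such jump, so it feels doable:
print(feels_doable(my_status=10, status_if_i_did_x=12))    # True
```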
I’m pretty sure Yudkowsky sketched a model like this in Inadequate Equilibria, which is probably where I got it from.
But why doesn’t the all-out attack work against status?
This model, when we’re only talking about status, seems like another reflection of the “I can’t” view, so no commitment to make the effort is ever made.
I assume your “slap down” is not merely those with status ridiculing the idea or attempting to point out flaws in the theory or design, but rather applying economic, political, and perhaps even raw force to stop you. In that case the issue doesn’t seem to be status (though clearly status might indicate a level or location of risk). The issue is the ability of others with an interest in stopping you from achieving that goal. It seems to me that the decision process there would be performing a calculation on a different set of inputs than status.
I think it often does. All-out attacks do actually work quite often.
What are some examples of this algorithm being inaccurate? It seems awfully like the efficient market hypothesis to me. (I don’t particularly believe in EMH, but it’s an accurate enough heuristic.)
In principle I agree with Villiam, though often these situations are sufficiently unlike markets that thinking of them in EMH terms will lead intuitions astray. So I’ll emphasize some other aspects (though it’s still useful to consider how the aspects below generalize to other EMH arguments).
Situations where all-out attacks work are usually situations where people nominally trying to do the thing are not actually trying to do the thing. This is often for typical Inadequate Equilibria reasons—i.e. people are rewarded for looking like they’re making effort, rather than for success, because it’s often a lot easier to verify that people look-like-they’re-making-effort than that they’re actually making progress.
I think this happens a lot more in everyday life than people realize/care to admit: employers in many areas will continue to employ employees without complaint as long as it looks like they’re trying to do The Thing, even if The Thing doesn’t get done very quickly/very well—there just needs to be a plausible-sounding argument that The Thing is more difficult than it looks. (I’ve worked in several tech startups, and this incentive structure applied to basically everyone.) Whether consciously or unconsciously, a natural result is that employees don’t really put forth their full effort to finish things as quickly and perfectly as possible; there’s no way for the employer to know that The Thing could have been done faster/better.
(Thought experiment: would you do your job differently if you were going to capture the value from the product for yourself, and wouldn’t get paid anything besides that?)
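To make that incentive structure concrete, here's a toy sketch in Python (all numbers and function names are hypothetical; this is a cartoon of the verification asymmetry, not a model of any real workplace):

```python
# Toy model: the employer pays on a visible effort signal, because only
# the signal is cheap to verify; actual output is hard to check.
def pay(visible_effort: float) -> float:
    return visible_effort  # wage tracks looking-like-you're-making-effort

def output(real_effort: float) -> float:
    return real_effort     # output tracks real effort, which isn't observed

# Two employees with the same total effort budget of 1.0, split differently:
for name, visible, real in [("looks busy", 0.9, 0.1),
                            ("ships things", 0.1, 0.9)]:
    print(f"{name}: pay={pay(visible)}, output={output(real)}")
# "Looks busy" earns 9x the pay while producing 1/9 the output --
# exactly the gap that an all-out attack can exploit.
```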
The whole status-attack problem slots neatly into this sort of scenario: if I come along and say that I can do The Thing in half the time and do a better job of it too, then obviously that’s going to come across as an attack on whoever’s busy looking-like-they’re-doing The Thing.
It seems awfully like the efficient market hypothesis to me.
Then the reasoning wouldn’t apply when the “market” is not efficient. For example: when something cannot be bought or sold; when the information necessary to determine the price is not publicly available; when the opportunity to buy or sell is limited to a few people (so the people with superior knowledge of the market situation cannot participate); and when the people who buy or sell have other priorities stronger than being right (for example, when a tiny financial profit from being right would be outweighed by a greater status loss).
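Restating that as a checklist (a hypothetical sketch; the condition names are mine, not standard terminology), the EMH-style inference "someone would already have done it" only goes through when every condition holds:

```python
def emh_style_inference_applies(tradable: bool,
                                price_info_public: bool,
                                open_participation: bool,
                                being_right_dominates: bool) -> bool:
    # "Someone would already have done it" needs all four conditions;
    # a single failure breaks the inference.
    return all([tradable, price_info_public,
                open_participation, being_right_dominates])

# Garage fusion: feasibility can't be bought or sold, key information is
# private, few people can "participate", and status can outweigh being right.
print(emh_style_inference_applies(False, False, False, False))  # False
```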