I’m not sure I see the moral aspect. Assuming the same work gets done, it’s pretty much zero-sum whether the applicant, other applicants, or shareholders get more of the surplus. If it matters to you, look for a Coasean solution (side-payments to those you think you’ve harmed).
There is a moral argument to be made regarding good matching of jobs to candidates for optimum production. But it’s not clear which direction this pulls in this case.
I certainly can understand tactical aspects, and it probably depends on specifics whether it’s in your best interest to disclose your current/previous salary.
I think you’re assuming outcomes with the same expected value are equally preferred. No society’s morality works that way.
As indicated in one of my footnotes, I’m using the term “morality” to mean something like “a group will call moral that behavior that can solve coordination problems for that group, which aren’t solved by rational self-interest.” This is a “the morals of employees” formulation, not a “morals of society” formulation.
Hmm. I think I understand that definition, but I don’t think it’s common or useful enough to use in a post title. Perhaps “the coordination effects of disclosing salary requirements” would be clearer.
Mostly, I don’t think “employees” (or worse, “employees applying for a given job opening”) are a particularly robust way to segment a population for moral evaluation. As a member of many overlapping groups, I find it difficult to decide which group’s coordination problems I want to assist with.
Here’s a way of connecting the views of “social morality” and “group morality”, and explaining why groups using expected value wouldn’t result in society using expected value. (Not that I think it should, but that’s a different discussion.)
Say society is composed of different groups, each with their own coordination problems, and behaviors that could solve them. Say you can analyze a situation x in the space X, and for each group g, find the gradient ∇u(g,x) of their utility surface u(g,X). Each group g then prefers an action given by the direction of ∇u(g,x), with a strength of preference given by its magnitude.
Suppose each agent z is a member of one group g(z). (We don’t need this assumption, but it makes the notation simpler.) If we call “socially-moral behavior at x” the average, over all agents z, of ∇u(g(z),x), I’m pretty sure this is going to give results that do not maximize social expected value.
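To make that concrete, here’s a toy sketch with entirely made-up numbers: two hypothetical groups (“employees” and “shareholders”), quadratic utility surfaces over a one-dimensional X, and unequal group sizes. It only illustrates how the agent-averaged gradient can point in a different direction than one possible reading of the “social” gradient (an unweighted per-group average); it isn’t an argument about what “social expected value” should mean.

```python
import numpy as np

# Hypothetical utility surfaces u(g, x) for two made-up groups over a 1-D
# situation space X; each group's utility peaks at a different x.
def u(g, x):
    peaks = {"employees": 1.0, "shareholders": -2.0}
    return -(x - peaks[g]) ** 2

def grad_u(g, x, eps=1e-6):
    # Numerical gradient of the group's utility surface at x.
    return (u(g, x + eps) - u(g, x - eps)) / (2 * eps)

# Each agent z belongs to exactly one group g(z); group sizes are unequal.
agents = ["employees"] * 9 + ["shareholders"] * 1

x = 0.0
# "Socially-moral behavior at x": average of grad u(g(z), x) over all agents z.
moral_direction = np.mean([grad_u(g_z, x) for g_z in agents])

# One reading of the "social" direction: gradient of the unweighted
# per-group average utility (one vote per group rather than per agent).
social_direction = np.mean([grad_u(g, x) for g in ("employees", "shareholders")])

print(moral_direction, social_direction)  # roughly 1.4 vs -1.0: opposite directions
```

With nine agents in one group and one in the other, the agent-weighted average is dominated by the larger group, while the per-group average points the other way.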
Er… for the simple cases I’ve looked at, following the gradient and solving directly for the maximal-utility point in the space give the same result, as long as the surface has no local maxima other than the global one. I should have expected that.
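For what it’s worth, here’s the kind of toy check I mean, assuming a single-peaked (concave) utility surface and a small enough step size; both routes land on the same point.

```python
import numpy as np

def utility(x):
    # A single-peaked (concave) utility surface with its maximum at x = 3.
    return -(x - 3.0) ** 2

def grad(x, eps=1e-6):
    # Numerical gradient of the utility surface at x.
    return (utility(x + eps) - utility(x - eps)) / (2 * eps)

# Route 1: follow the gradient from an arbitrary starting point.
x = -10.0
for _ in range(10_000):
    x += 0.01 * grad(x)

# Route 2: solve directly for the maximal-utility point over a grid.
grid = np.linspace(-20.0, 20.0, 100_001)
x_direct = grid[np.argmax(utility(grid))]

print(x, x_direct)  # both land at (approximately) x = 3
```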