I don’t think salaries were ever as low as 40% of what they are now. When I came on board, most people were at $36k/yr.
To illustrate why lower salaries mean less stuff gets done: I’ve been averaging 60 hours per week, and I’m unusually productive. If I am paid less, then (to pick just one example from this week) I can’t afford to take a taxi to and from the eye doctor, which means I spend 1.5 hrs each way changing buses to get there, and spend less time being productive on x-risk. That is totally not worth it. Future civilizations would look back on this decision as profoundly stupid.
Pretty sure Anna and Steve Rayhawk had salaries around $20k/yr at some point while living in Silicon Valley.
I don’t think that you’re really responding to Steven’s point. Yes, as Steven said, if you were paid less then clearly that would impose more costs on you, so ceteris paribus your getting paid less would be bad. But, as Steven said, the opportunity cost is potentially very high. You haven’t made a rationally compelling case that the missed opportunity is “totally not worth it” or that heeding it would be “profoundly stupid”, you’ve mostly just re-asserted your conclusion, contra Steven’s objection. What are your arguments that this is the case? Note that I personally think it’s highly plausible that $40-50k/yr is optimal, but as far as I can see you haven’t yet listed any rationally compelling reasons to think so.
(This comment is a little bit sterner than it would have been if you hadn’t emphatically asserted that conclusions other than your own would be “profoundly stupid” without first giving overwhelming justification for your conclusion. It is especially important to be careful about such apparent overconfidence on issues where one clearly has a personal stake in the matter.)
I will largely endorse Will’s comment, then bow out of the discussion, because this appears to be too personal and touchy a topic for a detailed discussion to be fruitful.
“Pretty sure Anna and Steve Rayhawk had salaries around $20k/yr at some point while living in Silicon Valley.”

If so, I suspect they were burning through savings during this time or had some kind of cheap living arrangement that I don’t have.
“What are your arguments that [paying you less wouldn’t be worth it]?”

1. I couldn’t really get by on less, so paying me less would cause me to quit the organization and do something else instead, which would mean much of this good stuff would probably not happen.

2. It’s VERY hard for SingInst to purchase value as efficiently as by purchasing Luke-hours. At $48k/yr for 60 hrs/wk, I make $15.38/hr, and one Luke-hour is unusually productive for SingInst. Paying me less, and thereby causing me to work fewer hours per week, is a bad value proposition for SingInst.
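The $15.38/hr figure is easy to check. A minimal sketch (the 52-paid-weeks-per-year assumption is mine, though it is what the quoted numbers imply):

```python
# Effective hourly rate implied by an annual salary and a weekly hour count.
# Assumption: 52 paid weeks per year, no unpaid vacation.
def hourly_rate(annual_salary: float, hours_per_week: float,
                weeks_per_year: int = 52) -> float:
    return annual_salary / (hours_per_week * weeks_per_year)

print(round(hourly_rate(48_000, 60), 2))  # → 15.38
```

For comparison, the earlier $36k/yr figure at a standard 40-hour week (again my assumption) works out to about $17.31/hr by the same calculation.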
Or, as Eliezer put it:

“Paying me less would require me to do things that take up time and energy in order to get by with a smaller income. Then, assuming all goes well, future intergalactic civilizations would look back and think this was incredibly stupid; in much the same way that letting billions of person-containing brains rot in graves, and humanity allocating less than a million dollars per year to the Singularity Institute, would predictably look pretty stupid in retrospect. At Singularity Institute board meetings we at least try not to do things which will predictably make future intergalactic civilizations think we were being willfully stupid. That’s all there is to it, and no more.”
This seems to me unnecessarily defensive. I support the goals of SingInst, but I could never bring myself to accept the kind of salary cut you guys are taking in order to work there. Like every other human on the planet, I can’t be accurately modelled with a utility function that places any value on far distant strangers; you can more accurately model what stranger-altruism I do show as purchase of moral satisfaction, though I do seek for such altruism to be efficient. SingInst should pay the salaries it needs to pay to recruit the kind of staff it needs to fulfil its mission; it’s harder to recruit if staff are expected to be defensive about demanding market salaries for their expertise, with no more than a normal adjustment for altruistic work much as if they were working for an animal sanctuary.
“I could never bring myself to accept the… salary cut you guys are taking in order to work [at SI]… SingInst should pay the salaries it needs to pay to recruit the kind of staff it needs to fulfil its mission; it’s harder to recruit if staff are expected to be defensive about demanding market salaries for their expertise...”

Yes, exactly.
So when I say “unnecessarily defensive”, I mean that all the stuff about the cost of taxis is after-the-fact defensive rationalization; it can’t be said about a single dollar you spend on having a life outside of SI. The truth is that even the best human rationalist in the world isn’t going to agree to giving those up, and since you have to recruit humans, you’d best pay the sort of salary that is going to attract and retain them. That of course includes yourself.
The same goes for saying “move to the Honduras”. Your perfectly utility-maximising AGIs will move to the Honduras, but your human staff won’t; they want to live in places like the Bay Area.
You know that the Bay Area is freakin’ expensive, right?
Re-reading, the whole thing is pretty unclear!
As katydee and thomblake say, I mean that working for SingInst would mean a bigger reduction in my salary than I could currently bring myself to accept. If I really valued the lives of strangers as a utilitarian, the benefits to them of taking a salary cut would be so huge that it would totally outweigh the costs to me. But it looks like I only really place direct value on the short-term interests of myself and those close to me, and everything else is purchase of moral satisfaction. Happily, purchase of moral satisfaction can still save the world if it is done efficiently.
Since the labour pool contains only human beings, with no true altruistic utility maximizers, SingInst should hire and pay accordingly; the market shows that people will accept a lower salary for a job that directly does good, but not a vastly lower salary. It would increase SI-utility if Luke accepted a lower salary, but it wouldn’t increase Luke-utility, and driving Luke away would cost a lot of SI-utility, so calling for it is in the end a cheap shot and a bad recommendation.
I live in London, which is also freaking expensive—but so are all the places I want to live. There’s a reason people are prepared to pay more to live in these places.
Hmm… Perhaps you don’t know that “salary cut” above means taking much less money?
I had missed the word cut. Damn it, I shouldn’t be commenting while sleep-deprived!
Indeed. I guess “taking a cut” can sometimes mean “taking some of the money”, so you could interpret this as meaning “I couldn’t accept all that money”, which as you say is the opposite of what I meant!
So why not relocate SIAI somewhere with a more reasonable cost of living?
I think the standard answer is that the networking and tech industry connections available in the Bay Area are useful enough to SIAI to justify the high costs of operating there.
[comment deleted]
Perhaps that’s why he’s saying he wouldn’t be willing to live there on a low salary?
I understand the point you’re making regarding salaries, and for once I agree.
However, it’s rather presumptuous of you (and/or Eliezer) to assume, implicitly, that our choices are limited to only two possibilities: “Support SIAI, save the world”, and “Don’t support SIAI, the world is doomed”. I can envision many other scenarios, such as “Support SIAI, but their fears were overblown and you implicitly killed N children by not spending the money on them instead”, or “Don’t support SIAI, support some other organization instead because they’ll have a better chance of success”, etc.
Where did we say all that?
In your comment above, you said:

“...I can’t afford to take a taxi to and from the eye doctor, which means I spend 1.5 hrs each way changing buses to get there, and spend less time being productive on x-risk. That is totally not worth it. Future civilizations would look back on this decision as profoundly stupid.”
You also quoted Eliezer saying something similar.
This outlook implies strongly that whatever SIAI is doing is of such monumental significance that future civilizations will not only remember its name, but also reverently preserve every decision it made. You are also quite fond of saying that the work that SIAI is doing is tantamount to “saving the world”; and IIRC Eliezer once said that, if you have a talent for investment banking, you should make as much money as possible and then donate it all to SIAI, as opposed to any other charity.
This kind of grand rhetoric presupposes not only that the SIAI is correct in its risk assessment regarding AGI, but also that they are uniquely qualified to address this potentially world-ending problem, and that, over the ages, no one more qualified could possibly come along. All of this could be true, but it’s far from a certainty, as your writing would seem to imply.
I’m not seeing how the above implies the thing you said:

“This outlook implies strongly that whatever SIAI is doing is of such monumental significance that future civilizations will not only remember its name, but also reverently preserve every decision it made.”
(Note that I don’t necessarily endorse things you report Eliezer as having said.)
You appear to be very confident that future civilizations will remember SIAI in a positive way, and care about its actions. If so, they must have some reason for doing so. Any reason would do, but the most likely reason is that SIAI will accomplish something so spectacularly beneficial that it will affect everyone in the far future. SIAI’s core mission is to save the world from UFAI, so it’s reasonable to assume that this is the highly beneficial effect that the SIAI will achieve.
I don’t have a problem with this chain of events, just with your apparent confidence that (a) it’s going to happen in exactly that way, and (b) your organization is the only one qualified to save the world in this specific fashion.
(EDIT: I forgot to say that, if we follow your reasoning to its conclusion, then you are indeed implying that donating as much money or labor as possible to SIAI is the only smart move for any rational agent.)
Note that I have no problem with your main statement, i.e. “lowering the salaries of SIAI members would bring us too much negative utility to compensate for the monetary savings”. This kind of cost-benefit analysis is done all the time, and future civilizations rarely enter into it.
Well no, of course it’s not a certainty. All efforts to make a difference are decisions under uncertainty. You’re attacking a straw man.
Please substitute “certainty minus epsilon” for “certainty” wherever you see it in my post. It was not my intention to imply 100% certainty; just a confidence value so high that it amounts to the same thing for all practical purposes.
I don’t think “certainty minus epsilon” improves much. It moves it from theoretical impossibility to practical—but looking that far out, I expect “likelihood” might be best.
I don’t understand your comment… what’s the practical difference between “extremely high likelihood” and “extremely high certainty”?
And where do SI claim even that? Obviously some of their discussions are implicitly conditioned on the fundamental assumptions behind their mission being true, but that doesn’t mean that they have extremely high confidence in those assumptions.
“This outlook implies strongly that whatever SIAI is doing is of such monumental significance that future civilizations will not only remember its name, but also reverently preserve every decision it made.”

In the SIAI/transhumanist outlook, if civilization survives, some large fraction (perhaps a majority) of extant human minds will survive as uploads. As a result, all of their memories will likely be stored, dissected, shared, searched, judged, and so on. Much will be preserved in such a future. And even without uploading, there are plenty of people who have maintained websites since the early days of the internet with no loss of information, and this is quite likely to remain true far into the future if civilization survives.
“1. I couldn’t really get by on less”
It is called a budget, son.
Plenty of people make less than you and work harder than you. Look in every major city and you will find plenty of people that fit this category, both in business and labor.
“That is totally not worth it. Future civilizations would look back on this decision as profoundly stupid.”
Elitism plus demanding that you don’t have to budget. Seems that you need to work more and focus less on how “awesome” you are.
You make good contributions...but let’s not get carried away.
If you really cared about future risk you would be working away at the problem even with a smaller salary. Focus on your work.
What we really need is some kind of emotionless robot who doesn’t care about its own standard of living and who can do lots of research and run organizations and suchlike without all the pesky problems introduced by “being human”.
Oh, wait...
Downvoted for this; Rain’s reply to the parent goes for me too.