This topic is something I’ve been thinking about lately. Do SIers tend to have superior general rationality, or do we merely escape a few particular biases? Are we good at rationality, or just good at “far mode” rationality (aka philosophy)? Are we good at epistemic but not instrumental rationality? (Keep in mind, though, that rationality is only a ceteris paribus predictor of success.)
Or, pick a more specific comparison. Do SIers tend to be better at general rationality than someone who can keep a small business running for 5 years? Maybe the tight feedback loops of running a small business are better rationality training than “debiasing interventions” can hope to be.
Of course, different people are more or less rational in different domains, at different times, in different environments.
This isn’t an idle question about labels. My estimate of the scope and level of people’s rationality in part determines how much I update from their stated opinion on something. How much evidence for Hypothesis X (about organizational development) is it when Eliezer gives me his opinion on the matter, as opposed to when Louie gives me his opinion on the matter? When Person B proposes to take on a totally new kind of project, I think their general rationality is a predictor of success — so, what is their level of general rationality?
Are we good at epistemic but not instrumental rationality?
Holden implies (and I agree with him) that there’s very little evidence at the moment to suggest that SI is good at instrumental rationality. As for epistemic rationality, how would we know? Is there some objective way to measure it? I personally happen to believe that if a person seems to take it as a given that he’s great at epistemic rationality, this fact should count as evidence (however circumstantial) against him being great at epistemic rationality… but that’s just me.
If you accept that your estimate of someone’s “rationality” should depend on the domain, the environment, the time, the context, etc… and what you want to do is make reliable estimates of the reliability of their opinion, their chances of success, etc… it seems to follow that you should be looking for comparisons within a relevant domain, environment, etc.
That is, if you want to get opinions about hypothesis X about organizational development that serve as significant evidence, it seems the thing to do is to find someone who knows a lot about organizational development—ideally, someone who has been successful at developing organizations—and consult their opinions. How generally rational they are might be very relevant causally, or it might not, but is in either case screened off by their domain competence… and their domain competence is easier to measure than their general rationality.
So is their general rationality worth devoting resources to determining?
It seems this only makes sense if you have already (e.g.) decided to ask Eliezer and Louie for their advice, whether it’s good evidence or not, and now you need to know how much evidence it is, and you expect the correct answer is different from the answer you’d get by applying the metrics you know about (e.g., domain familiarity and previously demonstrated relevant expertise).
I do spend a fair amount of time talking to domain experts outside of SI. The trouble is that the question of what we should do about thing X doesn’t just depend on domain competence but also on thousands of details about the inner workings of SI and our mission that I cannot communicate to domain experts outside SI, but which Eliezer and Louie already possess.
So it seems you have a problem in two domains (organizational development + SI internals) and different domain experts in both domains (outside domain experts + Eliezer/Louie), and need some way of cross-linking the two groups’ expertise to get a coherent recommendation, and the brute-force solutions (e.g. get them all in a room together, or bring one group up to speed on the other’s domain) are too expensive to be worth it. (Well, assuming the obstacle isn’t that the details need to be kept secret, but simply that expecting an outsider to come up to speed on all of SI’s local potentially relevant trivia simply isn’t practical.)
Yes?
Yeah, that can be a problem.
In that position, for serious questions I would probably ask E/L for their recommendations and a list of the most relevant details that informed that decision, then go to outside experts with a summary of the competing recommendations and an expanded version of that list and ask for their input. If there’s convergence, great. If there’s divergence, iterate.
This is still an expensive approach, though, so I can see where a cheaper approximation for less important questions is worth having.
In the world in which a varied group of intelligent and especially rational people are organizing to literally save humanity, I don’t see the relatively trivial, but important, improvements you’ve made in a short period of time being made because they were made years ago. And I thought that already accounting for the points you’ve made.
I mean, the question this group should be asking themselves is “how can we best alter the future so as to navigate towards FAI?” So, how did they apparently miss something like opportunity cost? Why, for instance, have their salaries increased when they could’ve been using it to improve the foundation of their cause from which everything else follows?
(Granted, I don’t know the history and inner workings of the SI, and so I could be missing some very significant and immovable hurdles, but I don’t see that as very likely; at least, not as likely as Holden’s scenario.)
I don’t see the relatively trivial, but important, improvements you’ve made in a short period of time being made because they were made years ago. And I thought that already accounting for the points you’ve made.
I don’t know what these sentences mean.
So, how did they apparently miss something like opportunity cost? Why, for instance, have their salaries increased when they could’ve been using it to improve the foundation of their cause from which everything else follows?
Actually, salary increases help with opportunity cost. At very low salaries, SI staff ends up spending lots of time and energy on general life cost-saving measures that distract us from working on x-risk reduction. And our salaries are generally still pretty low. I have less than $6k in my bank accounts. Outsourcing most tasks to remote collaborators also helps a lot with opportunity cost.
People are more rational in different domains, environments, and so on.
The people at SI may have poor instrumental rationality while being adept at epistemic rationality.
Being rational doesn’t necessarily mean being successful.
I accept all those points, and yet I still see the Singularity Institute having made the improvements that you’ve made since being hired before you were hired if they have superior general rationality. That is, you wouldn’t have that list of relatively trivial things to brag about because someone else would have recognized the items on that list as important and got them done somehow (ignore any negative connotations—they’re not intended).
For instance, I don’t see a varied group of people with superior general rationality not discovering or just not outsourcing work they don’t have a comparative advantage in (i.e., what you’ve done). That doesn’t look like just a failure in instrumental rationality, or just rationality operating on a different kind of utility function, or just a lack of domain specific knowledge.
The excuses available to a person acting in a way that’s non-traditionally rational are less convincing when you apply them to a group.
Actually, salary increases help with opportunity cost. At very low salaries, SI staff ends up spending lots of time and energy on general life cost-saving measures that distract us from working on x-risk reduction. And our salaries are generally still pretty low. I have less than $6k in my bank accounts.
No, I get that. But that still doesn’t explain away the higher salaries like EY’s 80k/year and its past upwards trend. I mean, these higher paid people are the most committed to the cause, right? I don’t see those people taking a higher salary when they could use that money for more outsourcing, or another employee, or better employees, if they want to literally save humanity while being superior in general rationality. It’s like a homeless person desperately in want of shelter trying to save enough for an apartment and yet buying meals at some restaurant.
Outsourcing most tasks to remote collaborators also helps a lot with opportunity cost.
That’s the point I was making: why wasn’t that done earlier? How did these people apparently miss out on opportunity cost? (And I’m just using outsourcing as an example because it was one of the most glaring changes you made that I think should have probably been made much earlier.)
Right, I think we’re saying the same thing, here: the availability of so much low-hanging fruit in organizational development as late as Sept. 2011 is some evidence against the general rationality of SIers. Eliezer seems to want to say it was all a matter of funding, but that doesn’t make sense to me.
Now, on this:
I don’t see those people taking a higher salary when they could use that money for more outsourcing, or another employee, or better employees, if they want to literally save humanity while being super in general rationality.
For some reason I’m having a hard time parsing your sentences for unambiguous meaning, but if I may attempt to rephrase: “SIers wouldn’t take any salaries higher than (say) $70k/yr if they were truly committed to the cause and good in general rationality, because they would instead use that money to accomplish other things.” Is that what you’re saying?
I’ve heard the Bay Area is expensive, and previously pointed out that Eliezer earns more than I do, despite me being in the top 10 SI donors.
Yes, the Bay Area is expensive. We’ve considered relocating, but on the other hand the (by far) best two places for meeting our needs in HR and in physically meeting with VIPs are SF and NYC, and if anything NYC is more expensive than the Bay Area. We cut living expenses where we can: most of us are just renting individual rooms.
Also, of course, it’s not like the Board could decide we should relocate to a charter city in Honduras and then all our staff would be able to just up and relocate. :)
(Rain may know all this; I’m posting it for others’ benefit.)
I think it’s crucial that SI stay in the Bay Area. Being in a high-status place signals that the cause is important. If you think you’re not taken seriously enough now, imagine if you were in Honduras…
Not to mention that HR is without doubt the single most important asset for SI. (Which is why it would probably be a good idea to pay more than the minimum cost of living.)
Out of curiosity only: what were the most significant factors that led you to reject telepresence options?
FWIW, Wikimedia moved from Florida to San Francisco precisely for the immense value of being at the centre of things instead of the middle of nowhere (and yes, Tampa is the middle of nowhere for these purposes, even though it still has the primary data centre). Even paying local charity scale rather than commercial scale (there’s a sort of cycle where WMF hires brilliant kids, they do a few years working at charity scale then go to Facebook/Google/etc for gobs of cash), being in the centre of things gets them staff and contacts they just couldn’t get if they were still in Tampa. And yes, the question came up there pretty much the same as it’s coming up here: why be there instead of remote? Because so much comes with being where things are actually happening, even if it doesn’t look directly related to your mission (educational charity, AI research institute).
The charity is still registered in Florida but the office is in SF. I can’t find the discussion on a quick search, but all manner of places were under serious consideration—including the UK, which is a horrible choice for legal issues in so very many ways.
In our experience, monkeys don’t work that way. It sounds like it should work, and then it just… doesn’t. Of course we do lots of Skyping, but regular human contact turns out to be pretty important.
(nods) Yeah, that’s been my experience too, though I’ve often suspected that companies like Google probably have a lot of research on the subject lying around that might be informative.
Some friends of mine did some experimenting along these lines when doing distributed software development (in both senses) and were somewhat startled to realize that Dark Age of Camelot worked better for them as a professional conferencing tool than any of the professional conferencing tools their company had. They didn’t mention this to their management.
and were somewhat startled to realize that Dark Age of Camelot worked better for them as a professional conferencing tool than any of the professional conferencing tools their company had. They didn’t mention this to their management.
I am reminded that Flickr started as a photo add-on for an MMORPG...
Enough for you to agree with Holden on that point?
“SIers wouldn’t take any salaries higher than (say) $70k/yr if they were truly committed to the cause and good in general rationality, because they would instead use that money to accomplish other things.” Is that what you’re saying?
Yes, but I wouldn’t set a limit at a specific salary range; I’d expect them to give as much as they optimally could, because I assume they’re more concerned with the cause than the money. (re the 70k/yr mention: I’d be surprised if that was anywhere near optimal)
Enough for you to agree with Holden on that point?
Probably not. He and I continue to dialogue in private about the point, in part to find the source of our disagreement.
Yes, but I wouldn’t set a limit at a specific salary range; I’d expect them to give as much as they optimally could, because I assume they’re more concerned with the cause than the money. (re the 70k/yr mention: I’d be surprised if that was anywhere near optimal)
I believe everyone except Eliezer currently makes between $42k/yr and $48k/yr — pretty low for the cost of living in the Bay Area.
Probably not. He and I continue to dialogue in private about the point, in part to find the source of our disagreement.
So, if you disagree with Holden, I assume you think SIers have superior general rationality: why?
And I’m confident SIers will score well on rationality tests, but that looks like specialized rationality. I.e., you can avoid a bias but you can’t avoid a failure in achieving your goals. To me, the SI approach seems poorly leveraged. I expect more significant returns from simple knowledge acquisition. E.g., you want to become successful? YOU WANT TO WIN?! Great, read these textbooks on microeconomics, finance, and business. I think this is more the approach you take anyway.
I believe everyone except Eliezer currently makes between $42k/yr and $48k/yr — pretty low for the cost of living in the Bay Area.
That isn’t as bad as I thought it was; I don’t know if that’s optimal, but it seems at least reasonable.
I assume you think SIers have superior general rationality: why?
I’ll avoid double-labor on this and wait to reply until my conversation with Holden is done.
I expect more significant returns from simple knowledge acquisition. E.g., you want to become successful? …Great, read these textbooks on microeconomics, finance, and business. I think this is more the approach you take anyway.
Right. Exercise the neglected virtue of scholarship and all that.
It’s not that easy to dismiss; if it’s as poorly leveraged as it looks relative to other approaches then you have little reason to be spreading and teaching SI’s brand of specialized rationality (except for perhaps income).
Weird, I have this perception of SI being heavily invested in overcoming biases and epistemic rationality training to the detriment of relevant domain specific knowledge, but I guess that’s wrong?
I’m not dismissing it, I’m endorsing it and agreeing with you that it has been my approach ever since my first post on LW.
I wasn’t talking about you; I was talking about SI’s approach in spreading and training rationality. You (SI) have Yudkowsky writing books, you have rationality minicamps, you have lesswrong, you and others are writing rationality articles and researching the rationality literature, and so on.
That kind of rationality training, research, and message looks poorly leveraged in achieving your goals, is what I’m saying. Poorly leveraged for anyone trying to achieve goals. And at its most abstract, that’s what rationality is, right? Achieving your goals.
So, I don’t care if your approach was to acquire as much relevant knowledge as possible before dabbling in debiasing, Bayes, and whatnot (i.e., prioritizing the most leveraged approach). I’m wondering why your approach doesn’t seem to be SI’s approach. I’m wondering why SI doesn’t prioritize rationality training, research, and message by whatever is the most leveraged in achieving SI’s goals. I’m wondering why SI doesn’t spread the virtue of scholarship to the detriment of training debiasing and so on.
SI wants to raise the sanity waterline; is what SI is doing even near optimal for that? Knowing what SIers knew and trained for couldn’t even get them to notice the opportunity costs they were incurring for years; that is sad.
(Disclaimer: the following comment should not be taken to imply that I myself have concluded that SI staff salaries should be reduced.)
I believe everyone except Eliezer currently makes between $42k/yr and $48k/yr — pretty low for the cost of living in the Bay Area.
I’ll grant you that it’s pretty low relative to other Bay Area salaries. But as for the actual cost of living, I’m less sure.
I’m not fortunate enough to be a Bay Area resident myself, but here is what the internet tells me:
After taxes, a $48,000/yr gross salary in California equates to a net of around $3000/month.
A 1-bedroom apartment in Berkeley and nearby places can be rented for around $1500/month. (Presumably, this is the category of expense where most of the geography-dependent high cost of living is contained.)
If one assumes an average spending of $20/day on food (typically enough to have at least one of one’s daily meals at a restaurant), that comes out to about $600/month.
That leaves around $900/month for miscellaneous expenses, which seems pretty comfortable for a young person with no dependents.
So, if these numbers are right, it seems that this salary range is actually right about what the cost of living is. Of course, this calculation specifically does not include costs relating to signaling (via things such as choices of housing, clothing, transportation, etc.) that one has more money than necessary to live (and therefore isn’t low-status). Depending on the nature of their job, certain SI employees may need, or at least find it distinctly advantageous for their particular duties, to engage in such signaling.
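For concreteness, here is the same monthly arithmetic as a small Python sketch; the ~$3,000/month net figure is the rough estimate quoted above, not an exact tax calculation:

```python
# Rough monthly budget from the figures above (all amounts in USD).
net_monthly = 3000      # approximate take-home on a $48k/yr gross salary in CA, per the estimate above
rent = 1500             # 1-bedroom apartment in or near Berkeley
food = 20 * 30          # $20/day over ~30 days = $600/month
misc = net_monthly - rent - food

print(f"food: ${food}/mo, left for miscellaneous: ${misc}/mo")
# food: $600/mo, left for miscellaneous: $900/mo
```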
The point is that we’re consequentialists, and lowering salaries even further would save money (on salaries) but result in SI getting less done, not more — for the same reason that outsourcing fewer tasks would save money (on outsourcing) but cause us to get less done, not more.
You say this as though it’s obvious, but if I’m not mistaken, salaries used to be about 40% of what they are now, and while the higher salaries sound like they are making a major productivity difference, hiring 2.5 times as many people would also make a major productivity difference. (Though yes, obviously marginal hires would be lower in quality.)
I don’t think salaries were ever as low as 40% of what they are now. When I came on board, most people were at $36k/yr.
To illustrate why lower salaries mean less stuff gets done: I’ve been averaging 60 hours per week, and I’m unusually productive. If I am paid less, that means that (to pick just one example from this week) I can’t afford to take a taxi to and from the eye doctor, which means I spend 1.5 hrs each way changing buses to get there, and spend less time being productive on x-risk. That is totally not worth it. Future civilizations would look back on this decision as profoundly stupid.
Pretty sure Anna and Steve Rayhawk had salaries around $20k/yr at some point while living in Silicon Valley.
I don’t think that you’re really responding to Steven’s point. Yes, as Steven said, if you were paid less then clearly that would impose more costs on you, so ceteris paribus your getting paid less would be bad. But, as Steven said, the opportunity cost is potentially very high. You haven’t made a rationally compelling case that the missed opportunity is “totally not worth it” or that heeding it would be “profoundly stupid”, you’ve mostly just re-asserted your conclusion, contra Steven’s objection. What are your arguments that this is the case? Note that I personally think it’s highly plausible that $40-50k/yr is optimal, but as far as I can see you haven’t yet listed any rationally compelling reasons to think so.
(This comment is a little bit sterner than it would have been if you hadn’t emphatically asserted that conclusions other than your own would be “profoundly stupid” without first giving overwhelming justification for your conclusion. It is especially important to be careful about such apparent overconfidence on issues where one clearly has a personal stake in the matter.)
I will largely endorse Will’s comment, then bow out of the discussion, because this appears to be too personal and touchy a topic for a detailed discussion to be fruitful.
Pretty sure Anna and Steve Rayhawk had salaries around $20k/yr at some point while living in Silicon Valley.
If so, I suspect they were burning through savings during this time or had some kind of cheap living arrangement that I don’t have.
What are your arguments that [paying you less wouldn’t be worth it]?
I couldn’t really get by on less, so paying me less would cause me to quit the organization and do something else instead, which would cause much of this good stuff to probably not happen.
It’s VERY hard for SingInst to purchase value as efficiently as by purchasing Luke-hours. At $48k/yr for 60 hrs/wk, I make $15.38/hr, and one Luke-hour is unusually productive for SingInst. Paying me less and thereby causing me to work fewer hours per week is a bad value proposition for SingInst.
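As a quick check of that hourly figure (assuming the 60 hrs/wk pace is held over all 52 weeks of the year, which is what the stated number implies):

```python
# $48,000/yr spread over 60 hrs/wk * 52 wk/yr
print(round(48_000 / (60 * 52), 2))  # 15.38
```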
Or, as Eliezer put it:
paying me less would require me to do things that take up time and energy in order to get by with a smaller income. Then, assuming all goes well, future intergalactic civilizations would look back and think this was incredibly stupid; in much the same way that letting billions of person-containing brains rot in graves, and humanity allocating less than a million dollars per year to the Singularity Institute, would predictably look pretty stupid in retrospect. At Singularity Institute board meetings we at least try not to do things which will predictably make future intergalactic civilizations think we were being willfully stupid. That’s all there is to it, and no more.
This seems to me unnecessarily defensive. I support the goals of SingInst, but I could never bring myself to accept the kind of salary cut you guys are taking in order to work there. Like every other human on the planet, I can’t be accurately modelled with a utility function that places any value on far distant strangers; you can more accurately model what stranger-altruism I do show as purchase of moral satisfaction, though I do seek for such altruism to be efficient. SingInst should pay the salaries it needs to pay to recruit the kind of staff it needs to fulfil its mission; it’s harder to recruit if staff are expected to be defensive about demanding market salaries for their expertise, with no more than a normal adjustment for altruistic work much as if they were working for an animal sanctuary.
I could never bring myself to accept the… salary cut you guys are taking in order to work [at SI]… SingInst should pay the salaries it needs to pay to recruit the kind of staff it needs to fulfill its mission; it’s harder to recruit if staff are expected to be defensive about demanding market salaries for their expertise...
So when I say “unnecessarily defensive”, I mean that all the stuff about the cost of taxis is after-the-fact defensive rationalization; it can’t be said about a single dollar you spend on having a life outside of SI. The truth is that even the best human rationalist in the world isn’t going to agree to giving those up, and since you have to recruit humans, you’d best pay the sort of salary that is going to attract and retain them. That of course includes yourself.
The same goes for saying “move to the Honduras”. Your perfectly utility-maximising AGIs will move to the Honduras, but your human staff won’t; they want to live in places like the Bay Area.
As katydee and thomblake say, I mean that working for SingInst would mean a bigger reduction in my salary than I could currently bring myself to accept. If I really valued the lives of strangers as a utilitarian, the benefits to them of taking a salary cut would be so huge that it would totally outweigh the costs to me. But it looks like I only really place direct value on the short-term interests of myself and those close to me, and everything else is purchase of moral satisfaction. Happily, purchase of moral satisfaction can still save the world if it is done efficiently.
Since the labour pool contains only human beings, with no true altruistic utility maximizers, SingInst should hire and pay accordingly; the market shows that people will accept a lower salary for a job that directly does good, but not a vastly lower salary. It would increase SI-utility if Luke accepted a lower salary, but it wouldn’t increase Luke-utility, and driving Luke away would cost a lot of SI-utility, so calling for it is in the end a cheap shot and a bad recommendation.
You know that the Bay Area is freakin’ expensive, right?
I live in London, which is also freaking expensive—but so are all the places I want to live. There’s a reason people are prepared to pay more to live in these places.
Hmm… Perhaps you don’t know that “salary cut” above means taking much less money?
I had missed the word cut. Damn it, I shouldn’t be commenting while sleep-deprived!
Indeed. I guess “taking a cut” can sometimes mean “taking some of the money”, so you could interpret this as meaning “I couldn’t accept all that money”, which as you say is the opposite of what I meant!
So why not relocate SIAI somewhere with a more reasonable cost of living?
I think the standard answer is that the networking and tech industry connections available in the Bay Area are useful enough to SIAI to justify the high costs of operating there.
I understand the point you’re making regarding salaries, and for once I agree.
However, it’s rather presumptuous of you (and/or Eliezer) to assume, implicitly, that our choices are limited to only two possibilities: “Support SIAI, save the world”, and “Don’t support SIAI, the world is doomed”. I can envision many other scenarios, such as “Support SIAI, but their fears were overblown and you implicitly killed N children by not spending the money on them instead”, or “Don’t support SIAI, support some other organization instead because they’ll have a better chance of success”, etc.
Where did we say all that?
In your comment above, you said:
...I can’t afford to take a taxi to and from the eye doctor, which means I spend 1.5 hrs each way changing buses to get there, and spend less time being productive on x-risk. That is totally not worth it. Future civilizations would look back on this decision as profoundly stupid.
You also quoted Eliezer saying something similar.
This outlook implies strongly that whatever SIAI is doing is of such monumental significance that future civilizations will not only remember its name, but also reverently preserve every decision it made. You are also quite fond of saying that the work that SIAI is doing is tantamount to “saving the world”; and IIRC Eliezer once said that, if you have a talent for investment banking, you should make as much money as possible and then donate it all to SIAI, as opposed to any other charity.
This kind of grand rhetoric presupposes not only that the SIAI is correct in its risk assessment regarding AGI, but also that they are uniquely qualified to address this potentially world-ending problem, and that, over the ages, no one more qualified could possibly come along. All of this could be true, but it’s far from a certainty, as your writing would seem to imply.
You appear to be very confident that future civilizations will remember SIAI in a positive way, and care about its actions. If so, they must have some reason for doing so. Any reason would do, but the most likely reason is that SIAI will accomplish something so spectacularly beneficial that it will affect everyone in the far future. SIAI’s core mission is to save the world from UFAI, so it’s reasonable to assume that this is the highly beneficial effect that the SIAI will achieve.
I don’t have a problem with this chain of events, just with your apparent confidence that a) it’s going to happen in exactly that way, and b) your organization is the only one qualified to save the world in this specific fashion.
(EDIT: I forgot to say that, if we follow your reasoning to its conclusion, then you are indeed implying that donating as much money or labor as possible to SIAI is the only smart move for any rational agent.)
Note that I have no problem with your main statement, i.e. “lowering the salaries of SIAI members would bring us too much negative utility to compensate for the monetary savings”. This kind of cost-benefit analysis is done all the time, and future civilizations rarely enter into it.
Please substitute “certainty minus epsilon” for “certainty” wherever you see it in my post. It was not my intention to imply 100% certainty; just a confidence value so high that it amounts to the same thing for all practical purposes.
I don’t think “certainty minus epsilon” improves much. It moves it from theoretical impossibility to practical—but looking that far out, I expect “likelihood” might be best.
And where do SI claim even that? Obviously some of their discussions are implicitly conditioned on the fundamental assumptions behind their mission being true, but that doesn’t mean that they have extremely high confidence in those assumptions.
This outlook implies strongly that whatever SIAI is doing is of such monumental significance that future civilizations will not only remember its name, but also reverently preserve every decision it made.
In the SIA/Transhumanist outlook, if civilization survives, some large fraction (perhaps a majority) of extant human minds will survive as uploads. As a result, all of their memories will likely be stored, dissected, shared, searched, judged, and so on. Much will be preserved in such a future. And even without uploading, there are plenty of people who have maintained websites since the early days of the internet with no loss of information, and this is quite likely to remain true far into the future if civilization survives.
Plenty of people make less than you and work harder than you. Look in every major city and you will find plenty of people that fit this category, both in business and labor.
“That is totally not worth it. Future civilizations would look back on this decision as profoundly stupid.”
Elitism plus demanding that you don’t have to budget. Seems that you need to work more and focus less on how “awesome” you are.
You make good contributions...but let’s not get carried away.
If you really cared about future risk you would be working away at the problem even with a smaller salary. Focus on your work.
If you really cared about future risk you would be working away at the problem even with a smaller salary. Focus on your work.
What we really need is some kind of emotionless robot who doesn’t care about its own standard of living and who can do lots of research and run organizations and suchlike without all the pesky problems introduced by “being human”.
That’s not actually that good, I don’t think—I go to a good college, and I know many people who are graduating to 60k-80k+ jobs with recruitment bonuses, opportunities for swift advancement, etc. Some of the best people I know could literally drop out now (three or four weeks prior to graduation) and immediately begin making six figures.
SIAI wages certainly seem fairly low to me relative to the quality of the people they are seeking to attract, though I think there are other benefits to working for them that cause the organization to attract skillful people regardless.
Ouch. I’d like to think that the side benefits for working for SIAI outweigh the side benefits for working for whatever soulless corporation Dilbert’s workplace embodies, though there is certainly a difference between side benefits and actual monetary compensation.
I graduated ~5 years ago with an engineering degree from a first-tier university, and I would have considered those starting salaries to be low to decent, not high. This is especially true in places with a high cost of living like the Bay Area.
Having a good internship during college often meant starting out at 60k/yr if not higher.
If this is significantly different for engineers exiting first-tier universities now, it would be interesting to know.
To summarize and rephrase: in a “counterfactual” world where SI was actually rational, they would have found all these solutions and done all these things long ago.
Many of your sentences are confusing because you repeatedly use the locution “I see X”/ “I don’t see X” in a nonstandard way, apparently to mean “X would have happened” /”X would not have happened”.
This is not the way that phrase is usually understood. Normally, “I see X” is taken to mean either “I observe X” or “I predict X”. For example I might say (if I were so inclined):
Unlike you, I see a lot of rationality being demonstrated by SI employees.
meaning that I believe (from my observation) they are in fact being rational. Or, I might say:
I don’t see Luke quitting his job at SI tomorrow to become a punk rocker.
meaning that I don’t predict that will happen. But I would not generally say:
* I don’t see these people taking a higher salary.
if what I mean is “these people should/would not have taken a higher salary [if such-and-such were true]”.
Oh, I see ;) Thanks. I’ll definitely act on your comment, but I was using “I see X” as “I predict X”—just in the context of a possible world. E.g., I predict in the possible world in which SIers are superior in general rationality and committed to their cause, Luke wouldn’t have that list of accomplishments. Or, “yet I still see the Singularity Institute having made the improvements...”
I now see that I’ve been using ‘see’ as syntactic sugar for counterfactual talk… but no more!
I was using “I see X” as “I predict X”—just in the context of a possible world.
To get away with this, you really need, at minimum, an explicit counterfactual clause (“if”, “unless”, etc.) to introduce it: “In a world where SIers are superior in general rationality, I don’t see Luke having that list of accomplishments.”
The problem was not so much that your usage itself was logically inconceivable, but rather that it collided with the other interpretations of “I see X” in the particular contexts in which it occurred. E.g. “I don’t see them taking higher salaries” sounded like you were saying that they weren’t taking higher salaries. (There was an “if” clause, but it came way too late!)
That might be informative if we knew anything about your budget, but without any sort of context it sounds purely obfuscatory. (Also, your bank account is pretty close to my annual salary, so you might want to consider what you’re actually signalling here and to whom.)
Yes, the Bay Area is expensive. We’ve considered relocating, but on the other hand the (by far) best two places for meeting our needs in HR and in physically meeting with VIPs are SF and NYC, and if anything NYC is more expensive than the Bay Area. We cut living expenses where we can: most of us are just renting individual rooms.
Also, of course, it’s not like the Board could decide we should relocate to a charter city in Honduras and then all our staff would be able to just up and relocate. :)
(Rain may know all this; I’m posting it for others’ benefit.)
I think it’s crucial that SI stay in the Bay Area. Being in a high-status place signals that the cause is important. If you think you’re not taken seriously enough now, imagine if you were in Honduras…
Not to mention that HR is without doubt the single most important asset for SI. (Which is why it would probably be a good idea to pay more than the minimum cost of living.)
Out of curiosity only: what were the most significant factors that led you to reject telepresence options?
FWIW, Wikimedia moved from Florida to San Francisco precisely for the immense value of being at the centre of things instead of the middle of nowhere (and yes, Tampa is the middle of nowhere for these purposes, even though it still has the primary data centre). Even paying local charity scale rather than commercial scale (there’s a sort of cycle where WMF hires brilliant kids, they do a few years working at charity scale then go to Facebook/Google/etc for gobs of cash), being in the centre of things gets them staff and contacts they just couldn’t get if they were still in Tampa. And yes, the question came up there pretty much the same as it’s coming up here: why be there instead of remote? Because so much comes with being where things are actually happening, even if it doesn’t look directly related to your mission (educational charity, AI research institute).
I didn’t know this, but I’m happy to hear it.
The charity is still registered in Florida but the office is in SF. I can’t find the discussion on a quick search, but all manner of places were under serious consideration—including the UK, which is a horrible choice for legal issues in so very many ways.
In our experience, monkeys don’t work that way. It sounds like it should work, and then it just… doesn’t. Of course we do lots of Skyping, but regular human contact turns out to be pretty important.
(nods) Yeah, that’s been my experience too, though I’ve often suspected that companies like Google probably have a lot of research on the subject lying around that might be informative.
Some friends of mine did some experimenting along these lines when doing distributed software development (in both senses) and were somewhat startled to realize that Dark Age of Camelot worked better for them as a professional conferencing tool than any of the professional conferencing tools their company had. They didn’t mention this to their management.
I am reminded that Flickr started as a photo add-on for an MMORPG...
-
Enough for you to agree with Holden on that point?
Yes, but I wouldn’t set a limit at a specific salary range; I’d expect them to give as much as they optimally could, because I assume they’re more concerned with the cause than the money. (re the 70k/yr mention: I’d be surprised if that was anywhere near optimal)
Probably not. He and I continue to dialogue in private about the point, in part to find the source of our disagreement.
I believe everyone except Eliezer currently makes between $42k/yr and $48k/yr — pretty low for the cost of living in the Bay Area.
So, if you disagree with Holden, I assume you think SIers have superior general rationality: why?
And I’m confident SIers will score well on rationality tests, but that looks like specialized rationality. I.e., you can avoid a bias but you can’t avoid a failure in your achieving your goals. To me, the SI approach seems poorly leveraged. I expect more significant returns from simple knowledge acquisition. E.g., you want to become successful? YOU WANT TO WIN?! Great, read these textbooks on microeconomics, finance, and business. I think this is more the approach you take anyway.
That isn’t as bad as I thinking it was; I don’t know if that’s optimal, but it seems at least reasonable.
I’ll avoid double-labor on this and wait to reply until my conversation with Holden is done.
Right. Exercise the neglected virtue of scholarship and all that.
It’s not that easy to dismiss; if it’s as poorly leveraged as it looks relative to other approaches then you have little reason to be spreading and teaching SI’s brand of specialized rationality (except for perhaps income).
I’m not dismissing it, I’m endorsing it and agreeing with you that it has been my approach ever since my first post on LW.
Weird, I have this perception of SI being heavily invested in overcoming biases and epistemic rationality training to the detriment of relevant domain specific knowledge, but I guess that’s wrong?
I’m lost again; I don’t know what you’re saying.
I wasn’t talking about you; I was talking about SI’s approach in spreading and training rationality. You(SI) have Yudkowsky writing books, you have rationality minicamps, you have lesswrong, you and others are writing rationality articles and researching the rationality literature, and so on.
That kind of rationality training, research, and message looks poorly leveraged in achieving your goals, is what I’m saying. Poorly leveraged for anyone trying to achieve goals. And at its most abstract, that’s what rationality is, right? Achieving your goals.
So, I don’t care if your approach was to acquire as much relevant knowledge as possible before dabbling in debiasing, bayes, and whatnot (i.e., prioritizing the most leveraged approach). I wondering why your approach doesn’t seem to be SI’s approach. I’m wondering why SI doesn’t prioritize rationality training, research, and message by whatever is the most leveraged in achieving SI’s goals. I’m wondering why SI doesn’t spread the virtue of scholarship to the detriment of training debiasing and so on.
SI wants to raise the sanity waterline, is what the SI doing even near optimal for that? Knowing what SIers knew and trained for couldn’t even get them to see an opportunity for trading in on opportunity cost for years; that is sad.
(Disclaimer: the following comment should not be taken to imply that I myself have concluded that SI staff salaries should be reduced.)
I’ll grant you that it’s pretty low relative to other Bay Area salaries. But as for the actual cost of living, I’m less sure.
I’m not fortunate enough to be a Bay Area resident myself, but here is what the internet tells me:
After taxes, a $48,000/yr gross salary in California equates to a net of around $3000/month.
A 1-bedroom apartment in Berkeley and nearby places can be rented for around $1500/month. (Presumably, this is the category of expense where most of the geography-dependent high cost of living is contained.)
If one assumes an average spending of $20/day on food (typically enough to have at least one of one’s daily meals at a restaurant), that comes out to about $600/month.
That leaves around $900/month for miscellaneous expenses, which seems pretty comfortable for a young person with no dependents.
So, if these numbers are right, it seems that this salary range is actually right about what the cost of living is. Of course, this calculation specifically does not include costs relating to signaling (via things such as choices of housing, clothing, transportation, etc.) that one has more money than necessary to live (and therefore isn’t low-status). Depending on the nature of their job, certain SI employees may need, or at least find it distinctly advantageous for their particular duties, to engage in such signaling.
Damn good for someone just out of college—without a degree!
The point is that we’re consequentialists, and lowering salaries even further would save money (on salaries) but result in SI getting less done, not more — for the same reason that outsourcing fewer tasks would save money (on outsourcing) but cause us to get less done, not more.
You say this as though it’s obvious, but if I’m not mistaken, salaries used to be about 40% of what they are now, and while the higher salaries sound like they are making a major productivity difference, hiring 2.5 times as many people would also make a major productivity difference. (Though yes, obviously marginal hires would be lower in quality.)
I don’t think salaries were ever as low as 40% of what they are now. When I came on board, most people were at $36k/yr.
To illustrate why lower salaries means less stuff gets done: I’ve been averaging 60 hours per week, and I’m unusually productive. If I am paid less, that means that (to pick just one example from this week) I can’t afford to take a taxi to and from the eye doctor, which means I spend 1.5 hrs each way changing buses to get there, and spend less time being productive on x-risk. That is totally not worth it. Future civilizations would look back on this decision as profoundly stupid.
Pretty sure Anna and Steve Rayhawk had salaries around $20k/yr at some point while living in Silicon Valley.
I don’t think that you’re really responding to Steven’s point. Yes, as Steven said, if you were paid less then clearly that would impose more costs on you, so ceteris paribus your getting paid less would be bad. But, as Steven said, the opportunity cost is potentially very high. You haven’t made a rationally compelling case that the missed opportunity is “totally not worth it” or that heeding it would be “profoundly stupid”, you’ve mostly just re-asserted your conclusion, contra Steven’s objection. What are your arguments that this is the case? Note that I personally think it’s highly plausible that $40-50k/yr is optimal, but as far as I can see you haven’t yet listed any rationally compelling reasons to think so.
(This comment is a little bit sterner than it would have been if you hadn’t emphatically asserted that conclusions other than your own would be “profoundly stupid” without first giving overwhelming justification for your conclusion. It is especially important to be careful about such apparent overconfidence on issues where one clearly has a personal stake in the matter.)
I will largely endorse Will’s comment, then bow out of the discussion, because this appears to be too personal and touchy a topic for a detailed discussion to be fruitful.
If so, I suspect they were burning through savings during this time or had some kind of cheap living arrangement that I don’t have.
I couldn’t really get by on less, so paying me less would cause me to quit the organization and do something else instead, which would cause much of this good stuff to probably not happen.
It’s VERY hard for SingInst to purchase value as efficiently as by purchasing Luke-hours. At $48k/yr for 60 hrs/wk, I make $15.38/hr, and one Luke-hour is unusually productive for SingInst. Paying me less and thereby causing me to work fewer hours per week is a bad value proposition for SingInst.
Or, as Eliezer put it:
This seems to me unnecessarily defensive. I support the goals of SingInst, but I could never bring myself to accept the kind of salary cut you guys are taking in order to work there. Like every other human on the planet, I can’t be accurately modelled with a utility function that places any value on far distant strangers; you can more accurately model what stranger-altruism I do show as purchase of moral satisfaction, though I do seek for such altruism to be efficient. SingInst should pay the salaries it needs to pay to recruit the kind of staff it needs to fulfil its mission; it’s harder to recruit if staff are expected to be defensive about demanding market salaries for their expertise, with no more than a normal adjustment for altruistic work much as if they were working for an animal sanctuary.
Yes, exactly.
So when I say “unnecessarily defensive”, I mean that all the stuff about the cost of taxis is after-the-fact defensive rationalization; it can’t be said about a single dollar you spend on having a life outside of SI. The truth is that even the best human rationalist in the world isn’t going to agree to giving those up, and since you have to recruit humans, you’d best pay the sort of salary that is going to attract and retain them. That of course includes yourself.
The same goes for saying “move to Honduras”. Your perfectly utility-maximising AGIs will move to Honduras, but your human staff won’t; they want to live in places like the Bay Area.
You know that the Bay Area is freakin’ expensive, right?
Re-reading, the whole thing is pretty unclear!
As katydee and thomblake say, I mean that working for SingInst would mean a bigger reduction in my salary than I could currently bring myself to accept. If I really valued the lives of strangers as a utilitarian, the benefits to them of taking a salary cut would be so huge that it would totally outweigh the costs to me. But it looks like I only really place direct value on the short-term interests of myself and those close to me, and everything else is purchase of moral satisfaction. Happily, purchase of moral satisfaction can still save the world if it is done efficiently.
Since the labour pool contains only human beings, with no true altruistic utility maximizers, SingInst should hire and pay accordingly; the market shows that people will accept a lower salary for a job that directly does good, but not a vastly lower salary. It would increase SI-utility if Luke accepted a lower salary, but it wouldn’t increase Luke-utility, and driving Luke away would cost a lot of SI-utility, so calling for it is in the end a cheap shot and a bad recommendation.
I live in London, which is also freaking expensive—but so are all the places I want to live. There’s a reason people are prepared to pay more to live in these places.
Hmm… Perhaps you don’t know that “salary cut” above means taking much less money?
I had missed the word “cut”. Damn it, I shouldn’t be commenting while sleep-deprived!
Indeed. I guess “taking a cut” can sometimes mean “taking some of the money”, so you could interpret this as meaning “I couldn’t accept all that money”, which as you say is the opposite of what I meant!
So why not relocate SIAI somewhere with a more reasonable cost of living?
I think the standard answer is that the networking and tech industry connections available in the Bay Area are useful enough to SIAI to justify the high costs of operating there.
[comment deleted]
Perhaps that’s why he’s saying he wouldn’t be willing to live there on a low salary?
I understand the point you’re making regarding salaries, and for once I agree.
However, it’s rather presumptuous of you (and/or Eliezer) to assume, implicitly, that our choices are limited to only two possibilities: “Support SIAI, save the world”, and “Don’t support SIAI, the world is doomed”. I can envision many other scenarios, such as “Support SIAI, but their fears were overblown and you implicitly killed N children by not spending the money on them instead”, or “Don’t support SIAI, support some other organization instead because they’ll have a better chance of success”, etc.
Where did we say all that?
In your comment above, you said that future civilizations would look back on this decision as profoundly stupid.
You also quoted Eliezer saying something similar.
This outlook implies strongly that whatever SIAI is doing is of such monumental significance that future civilizations will not only remember its name, but also reverently preserve every decision it made. You are also quite fond of saying that the work that SIAI is doing is tantamount to “saving the world”; and IIRC Eliezer once said that, if you have a talent for investment banking, you should make as much money as possible and then donate it all to SIAI, as opposed to any other charity.
This kind of grand rhetoric presupposes not only that the SIAI is correct in its risk assessment regarding AGI, but also that they are uniquely qualified to address this potentially world-ending problem, and that, over the ages, no one more qualified could possibly come along. All of this could be true, but it’s far from the certainty that your writing seems to imply.
I’m not seeing how the above implies the thing you said about SIAI being uniquely qualified, with no one more qualified ever coming along.
(Note that I don’t necessarily endorse things you report Eliezer as having said.)
You appear to be very confident that future civilizations will remember SIAI in a positive way, and care about its actions. If so, they must have some reason for doing so. Any reason would do, but the most likely reason is that SIAI will accomplish something so spectacularly beneficial that it will affect everyone in the far future. SIAI’s core mission is to save the world from UFAI, so it’s reasonable to assume that this is the highly beneficial effect that the SIAI will achieve.
I don’t have a problem with this chain of events, just with your apparent confidence that (a) it’s going to happen in exactly that way, and (b) your organization is the only one qualified to save the world in this specific fashion.
(EDIT: I forgot to say that, if we follow your reasoning to its conclusion, then you are indeed implying that donating as much money or labor as possible to SIAI is the only smart move for any rational agent.)
Note that I have no problem with your main statement, i.e. “lowering the salaries of SIAI members would bring us too much negative utility to compensate for the monetary savings”. This kind of cost-benefit analysis is done all the time, and future civilizations rarely enter into it.
Well no, of course it’s not a certainty. All efforts to make a difference are decisions under uncertainty. You’re attacking a straw man.
Please substitute “certainty minus epsilon” for “certainty” wherever you see it in my post. It was not my intention to imply 100% certainty; just a confidence value so high that it amounts to the same thing for all practical purposes.
I don’t think “certainty minus epsilon” improves much. It moves the claim from a theoretical impossibility to a practical one; but looking that far out, I expect “likelihood” might be best.
I don’t understand your comment… what’s the practical difference between “extremely high likelihood” and “extremely high certainty”?
And where does SI claim even that? Obviously some of their discussions are implicitly conditioned on the fundamental assumptions behind their mission being true, but that doesn’t mean that they have extremely high confidence in those assumptions.
In the SIAI/transhumanist outlook, if civilization survives, a large fraction (perhaps a majority) of extant human minds will survive as uploads. As a result, all of their memories will likely be stored, dissected, shared, searched, judged, and so on. Much will be preserved in such a future. And even without uploading, there are plenty of people who have maintained websites since the early days of the internet with no loss of information, and this is quite likely to remain true far into the future if civilization survives.
“1. I couldn’t really get by on less”
It is called a budget, son.
Plenty of people make less than you and work harder than you. Look in every major city and you will find plenty of people that fit this category, both in business and labor.
“That is totally not worth it. Future civilizations would look back on this decision as profoundly stupid.”
Elitism plus demanding that you don’t have to budget. Seems that you need to work more and focus less on how “awesome” you are.
You make good contributions...but let’s not get carried away.
If you really cared about future risk you would be working away at the problem even with a smaller salary. Focus on your work.
What we really need is some kind of emotionless robot who doesn’t care about its own standard of living and who can do lots of research and run organizations and suchlike without all the pesky problems introduced by “being human”.
Oh, wait...
Downvoted for this; Rain’s reply to the parent goes for me too.
That’s not actually that good, I don’t think. I go to a good college, and I know many people who are graduating into $60k–$80k+ jobs with recruitment bonuses, opportunities for swift advancement, etc. Some of the best people I know could literally drop out now (three or four weeks prior to graduation) and immediately begin making six figures.
SIAI wages certainly seem fairly low to me relative to the quality of the people they are seeking to attract, though I think there are other benefits to working for them that cause the organization to attract skillful people regardless.
A Dilbert comic said it.
Ouch. I’d like to think that the side benefits for working for SIAI outweigh the side benefits for working for whatever soulless corporation Dilbert’s workplace embodies, though there is certainly a difference between side benefits and actual monetary compensation.
I graduated ~5 years ago with an engineering degree from a first-tier university, and I would have considered those starting salaries low to decent, not high. This is especially true in places with a high cost of living like the Bay Area.
Having a good internship during college often meant starting out at $60k/yr, if not higher.
If this is significantly different for engineers exiting first-tier universities now, it would be interesting to know.
To summarize and rephrase: in a “counterfactual” world where SI was actually rational, they would have found all these solutions and done all these things long ago.
Many of your sentences are confusing because you repeatedly use the locution “I see X” / “I don’t see X” in a nonstandard way, apparently to mean “X would have happened” / “X would not have happened”.
This is not the way that phrase is usually understood. Normally, “I see X” is taken to mean either “I observe X” or “I predict X”. For example, I might say (if I were so inclined):
meaning that I believe (from my observation) they are in fact being rational. Or, I might say:
meaning that I don’t predict that will happen. But I would not generally say:
if what I mean is “these people should/would not have taken a higher salary [if such-and-such were true]”.
Oh, I see ;) Thanks. I’ll definitely act on your comment, but I was using “I see X” as “I predict X”, just in the context of a possible world. E.g., I predict that, in the possible world in which SIers are superior in general rationality and committed to their cause, Luke wouldn’t have that list of accomplishments. Or, “yet I still see the Singularity Institute having made the improvements...”
I now see that I’ve been using ‘see’ as syntactic sugar for counterfactual talk… but no more!
To get away with this, you really need, at minimum, an explicit counterfactual clause (“if”, “unless”, etc.) to introduce it: “In a world where SIers are superior in general rationality, I don’t see Luke having that list of accomplishments.”
The problem was not so much that your usage itself was logically inconceivable, but rather that it collided with the other interpretations of “I see X” in the particular contexts in which it occurred. E.g. “I don’t see them taking higher salaries” sounded like you were saying that they weren’t taking higher salaries. (There was an “if” clause, but it came way too late!)
Have you considered the possibility that even higher salaries might raise productivity further?
I think we should search systematically for ways to convert money into increased productivity.
By what measure do you figure that?
That might be informative if we knew anything about your budget, but without any sort of context it sounds purely obfuscatory. (Also, your bank account is pretty close to my annual salary, so you might want to consider what you’re actually signalling here and to whom.)