I understand the point you’re making regarding salaries, and for once I agree.
However, it’s rather presumptuous of you (and/or Eliezer) to assume, implicitly, that our choices are limited to only two possibilities: “Support SIAI, save the world”, and “Don’t support SIAI, the world is doomed”. I can envision many other scenarios, such as “Support SIAI, but their fears were overblown and you implicitly killed N children by not spending the money on them instead”, or “Don’t support SIAI, support some other organization instead because they’ll have a better chance of success”, etc.
Where did we say all that?
In your comment above, you said:
...I can’t afford to take a taxi to and from the eye doctor, which means I spend 1.5 hrs each way changing buses to get there, and spend less time being productive on x-risk. That is totally not worth it. Future civilizations would look back on this decision as profoundly stupid.
You also quoted Eliezer saying something similar.
This outlook implies strongly that whatever SIAI is doing is of such monumental significance that future civilizations will not only remember its name, but also reverently preserve every decision it made. You are also quite fond of saying that the work that SIAI is doing is tantamount to “saving the world”; and IIRC Eliezer once said that, if you have a talent for investment banking, you should make as much money as possible and then donate it all to SIAI, as opposed to any other charity.
This kind of grand rhetoric presupposes not only that the SIAI is correct in its risk assessment regarding AGI, but also that they are uniquely qualified to address this potentially world-ending problem, and that, over the ages, no one more qualified could possibly come along. All of this could be true, but it is far from the certainty that your writing seems to imply.
I’m not seeing how the above implies the thing you said:
(Note that I don’t necessarily endorse things you report Eliezer as having said.)
You appear to be very confident that future civilizations will remember SIAI in a positive way, and care about its actions. If so, they must have some reason for doing so. Any reason would do, but the most likely reason is that SIAI will accomplish something so spectacularly beneficial that it will affect everyone in the far future. SIAI’s core mission is to save the world from UFAI, so it’s reasonable to assume that this is the highly beneficial effect that the SIAI will achieve.
I don’t have a problem with this chain of events, just with your apparent confidence that (a) it’s going to happen in exactly that way, and (b) your organization is the only one that is qualified to save the world in this specific fashion.
(EDIT: I forgot to say that, if we follow your reasoning to its conclusion, then you are indeed implying that donating as much money or labor as possible to SIAI is the only smart move for any rational agent.)
Note that I have no problem with your main statement, i.e. “lowering the salaries of SIAI members would bring us too much negative utility to compensate for the monetary savings”. This kind of cost-benefit analysis is done all the time, and future civilizations rarely enter into it.
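To make that concrete, here is a toy version of the taxi calculation quoted above, written as a quick Python sketch. Every number in it (the assumed value of an hour of x-risk work, the assumed taxi fare) is a placeholder of my own for illustration, not an actual SIAI figure.

# Toy cost-benefit comparison for the taxi example quoted above.
# All figures are illustrative assumptions, not actual SIAI data.

hours_saved_per_trip = 3.0       # roughly 1.5 hrs each way on the bus, ignoring taxi travel time
assumed_value_per_hour = 40.0    # assumed dollar value of one hour of x-risk work
assumed_taxi_cost = 30.0         # assumed round-trip taxi fare

benefit = hours_saved_per_trip * assumed_value_per_hour
net = benefit - assumed_taxi_cost

# With these assumptions the taxi pays for itself ($120 of work time
# recovered for a $30 fare); with other assumptions it would not.
print(f"benefit={benefit}, cost={assumed_taxi_cost}, net={net}")

With different assumed numbers the conclusion flips, which is exactly why this is an ordinary accounting question rather than one that needs future civilizations to settle it.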
Well no, of course it’s not a certainty. All efforts to make a difference are decisions under uncertainty. You’re attacking a straw man.
Please substitute “certainty minus epsilon” for “certainty” wherever you see it in my post. It was not my intention to imply 100% certainty; just a confidence value so high that it amounts to the same thing for all practical purposes.
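To spell out what I mean by “amounts to the same thing for all practical purposes”, here is a minimal Python sketch; the payoff values and the epsilon are placeholder numbers of my own, not anything SI has claimed.

# Compare a decision evaluated under "certainty" (p = 1.0) with the same
# decision evaluated under "certainty minus epsilon" (p = 1.0 - 1e-9).
# The payoff numbers below are arbitrary placeholders.

def expected_value(p_success, payoff_success, payoff_failure):
    """Expected value of an action that succeeds with probability p_success."""
    return p_success * payoff_success + (1.0 - p_success) * payoff_failure

payoff_success = 1_000_000.0   # placeholder utility if the action works out
payoff_failure = -1_000.0      # placeholder utility if it does not

certain = expected_value(1.0, payoff_success, payoff_failure)
almost_certain = expected_value(1.0 - 1e-9, payoff_success, payoff_failure)

# The two expected values differ by about 0.001 out of a million, so with
# magnitudes like these any option ranked best under p = 1.0 is still ranked
# best under p = 1.0 - 1e-9: the decision itself does not change.
print(certain, almost_certain, certain - almost_certain)

With numbers like these, whatever choice wins under “certainty” also wins under “certainty minus epsilon”, which is all I meant.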
I don’t think “certainty minus epsilon” improves much. It only moves the claim from a theoretical impossibility to a practical one, and looking that far out, I expect “likelihood” might be the better word.
I don’t understand your comment… what’s the practical difference between “extremely high likelihood” and “extremely high certainty”?
And where do SI claim even that? Obviously some of their discussions are implicitly conditioned on the fundamental assumptions behind their mission being true, but that doesn’t mean that they have extremely high confidence in those assumptions.
This outlook implies strongly that whatever SIAI is doing is of such monumental significance that future civilizations will not only remember its name, but also reverently preserve every decision it made.
In the SIA/Transhumanist outlook, if civilization survives, then some large fraction (perhaps the majority) of extant human minds will survive as uploads. As a result, all of their memories will likely be stored, dissected, shared, searched, judged, and so on. Much will be preserved in such a future. And even without uploading, there are plenty of people who have maintained websites since the early days of the internet with no loss of information, and this is quite likely to remain true far into the future if civilization survives.