Thanks—I’ve read the bullet points and it looks like a really good summary (apologies for skimming—I’ll read it in more detail when I have time).
Just a few minor points:
The P(FOOM) calculation appears to be entirely independent of the P(CHARITY) calculation. Should these be made into separate documents? Or should it be made clearer which factors are common to FOOM and CHARITY? (e.g. P5 would appear to be correlated with P9).
In P6, I’m taking “SIAI” to mean a kind of generalized SIAI (i.e. it doesn’t have to be this specific team of people who solve FAI; what we’re interested in is to what extent a donation to this organization increases the probability that someone will solve FAI).
P7 and P8: I’m not sure which risk factors go into P7 and which go into P8. I’d have listed them as one top-level point with a bunch of sub-points.
P7 and P8: I think that sane SIAI supporters believe that supporting SIAI reduces some risk pathways while increasing others. The standard is not “it mustn’t increase any risks” but rather “the expected positives must outweigh the negatives” (a toy sketch of this comparison follows after these points).
Also if the standard is not “a worthwhile charity” but “the best charity”, it would be worth adding a P10: no other charity provides higher expected marginal value. Meta-level charities that focus on building the rational altruism movement are at least a candidate here.
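To make that standard concrete, here is a minimal sketch of the kind of expected-value comparison I have in mind. Every number and both charity profiles below are invented purely for illustration; the point is only the shape of the comparison, not the values.

```python
# Toy expected-value comparison; every number below is invented for illustration.
def expected_marginal_value(p_good, value_good, p_harm, value_harm):
    """Expected net value of a marginal donation: expected gains minus expected harms."""
    return p_good * value_good - p_harm * value_harm

# Hypothetical profile of a charity that reduces some risk pathways
# while slightly increasing others (the SIAI-like case above).
siai_like = expected_marginal_value(p_good=1e-6, value_good=1e9,
                                    p_harm=1e-7, value_harm=1e9)

# Hypothetical meta-level charity building the rational altruism movement.
meta_charity = expected_marginal_value(p_good=5e-6, value_good=1e8,
                                       p_harm=0.0, value_harm=0.0)

print(f"SIAI-like net expected value:    {siai_like:,.1f}")
print(f"Meta-charity net expected value: {meta_charity:,.1f}")
```

The point is that P10 asks a comparative question: a positive expected value for SIAI is not enough; it has to beat the best alternative at the margin.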
The P(FOOM) calculation appears to be entirely independent of the P(CHARITY) calculation. Should these be made into separate documents?
I wanted to show that even if you assign a high probability to the possibility of risks from AI due to recursive self-improvement, it is still questionable whether SIAI is the right choice, or whether now is the time to act.
Or should it be made clearer which factors are common to FOOM and CHARITY? (e.g. P5 would appear to be correlated with P9).
As I wrote at the top, it was a rather quick write-up and I plan to improve it. I can’t get myself to work on something like this for very long. It’s stupid, I know. But I can try to improve things incrementally. Thanks for your feedback.
In P6, I’m taking “SIAI” to mean a kind of generalized SIAI (i.e. it doesn’t have to be this specific team of people who solve FAI; what we’re interested in is to what extent a donation to this organization increases the probability that someone will solve FAI).
That’s a good point: SIAI as an organisation that makes people aware of the risk. But from my interview series it seemed that a lot of AI researchers are already aware of it, to the point of being bothered by it.
I’m not sure which risk factors go into P7 and which go into P8.
It isn’t optimal. It is hard to talk about premises that appear, superficially, to be the same. But from a probabilistic point of view it is important to separate them into distinct parts, to make clear that these are things that all need to be true in conjunction.
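To illustrate why the conjunction matters, here is a toy calculation. All probabilities are made up, and I am assuming for the sake of the example that P1–P5 feed into P(FOOM), that P6–P9 feed into P(CHARITY), and that the premises are independent (which, as you note with P5 and P9, they may not be).

```python
# All probabilities below are made up for illustration only.
foom_premises    = [0.9, 0.8, 0.9, 0.85, 0.8]   # hypothetically P1..P5
charity_premises = [0.7, 0.8, 0.75, 0.7]        # hypothetically P6..P9

joint = 1.0
for p in foom_premises + charity_premises:
    joint *= p  # multiplying is only valid if the premises are independent

print(f"Joint probability of all premises: {joint:.2f}")  # ~0.13 with these numbers
```

Even with every single premise at 0.7 or above, the conjunction lands well below one half; and if the premises are correlated, a simple product like this over- or understates the joint probability.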
Also if the standard is not “a worthwhile charity” but “the best charity”, it would be worth adding a P10: no other charity provides higher expected marginal value.
That problem is incredibly mathy, and given my current level of education I am happy that people like Holden Karnofsky are tackling it. The problem is that we get into the realm of Pascal’s mugging here, where vast utilities outweigh tiny probabilities. Large error bars may render such choices moot. For more, see my post here.
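A minimal sketch of that worry, with entirely invented numbers: when the probability estimate itself has error bars spanning several orders of magnitude, the product of a vast utility and a tiny probability tells us very little.

```python
# Toy sketch of the Pascal's-mugging / error-bar worry; all numbers are invented.
UTILITY_IF_CATASTROPHE_AVERTED = 1e15  # arbitrarily vast stake

# Suppose the probability that a marginal donation averts the catastrophe
# is only known to within several orders of magnitude.
for p in (1e-9, 1e-7, 1e-5):
    expected_value = p * UTILITY_IF_CATASTROPHE_AVERTED
    print(f"p = {p:.0e} -> expected value = {expected_value:,.0f}")

# The answer swings by four orders of magnitude across the error bar,
# which is roughly why large error bars may render such choices moot.
```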