(Note: This is posted from the community anonymous account. I am a different individual than the above poster.)
Numbers, you say? Those are hard to find, by the very nature of the thing. I’ll give my personal calculus, but crunching the probabilities is hard, and I cannot guarantee I am perfectly accurate. (Although I will brag that I get very high scores on The Credence Game, so perhaps you can trust my numbers more than most.)
I, personally, would estimate a 10%-40% chance of surviving a hard takeoff, given that MIRI is semi-successful and continues on its current course.
This is actually pretty good. I estimate a <1% chance of surviving one without anyone working on it. I also estimate a ~80% chance of hard takeoff rather than soft. (Given a soft takeoff, we have maybe a ~20% chance without MIRI, and probably a 40%-50% chance with.)
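To make the calculus above concrete, here is a quick sketch that combines those point estimates into overall survival probabilities. (The midpoints used for the ranged estimates are my own simplification, not part of the original numbers.)

```python
# Combine the stated estimates into P(survival), with and without
# MIRI-style work. Midpoints of the stated ranges (10-40% and 40-50%)
# are an assumption made here for illustration.
p_hard = 0.80  # ~80% chance of hard takeoff rather than soft

survive = {
    "without MIRI": {"hard": 0.01, "soft": 0.20},
    "with MIRI":    {"hard": 0.25, "soft": 0.45},  # midpoints of the ranges
}

for case, p in survive.items():
    # Law of total probability over the takeoff scenarios.
    total = p_hard * p["hard"] + (1 - p_hard) * p["soft"]
    print(f"P(survival {case}) = {total:.3f}")
# -> P(survival without MIRI) = 0.048
# -> P(survival with MIRI) = 0.290
```

Even on these rough midpoint assumptions, the estimates imply roughly a sixfold improvement in survival odds, which is the sense in which the field "seems to be doing a lot of good."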
Given these numbers, MIRI seems to be doing a lot of good. The two questions that remain are: 1) Is MIRI better than other organizations doing something similar? 2) How much good does the marginal MIRI dollar do?
The answer to 1) is not so straightforward. When I mentioned MIRI in the above probability estimates, I really meant the whole mindspace of organizations that are working in this field. That is to say, MIRI, FHI, and the new CSER. There is a reasonable chance that donating to one of the others gives you more bang for your buck. I urge you to do more research on this.
The answer to 2) is easier. MIRI (and the others) are still rather small. The marginal dollar has a quite significant impact on them, and they get most of their money from small-time donors donating maybe ~$100 each. This implies MIRI has relatively large room to grow, and will benefit quite a bit on the margin.
You asked for numbers, and I gave you what numbers I could. Even if you disagree with those exact numbers, I hope you agree that the risk is quite significant. I said that there is <1% chance of surviving a hard takeoff without anyone working on the problem, and that is the issue here. As such, I feel I can confidently assert that unFriendly AI is by far the largest x-risk we are facing. There are other x-risks, true, but this one looms the most dangerous, and we won’t succeed unless we work on it very hard indeed.
Luckily, there are people working on it. There are obvious paths to tread to help avert this disaster, so all we need is people to walk them. That is why I support MIRI, at least.