My personal estimate is that such an approach carries about 10% less risk than an alternative approach where the statistics and software are both hacked together.
I don’t understand what you mean by “10% less risk”. Do you think any given project using “a well-reasoned statistical approach with good software engineering methodologies” has at least a 10% chance of leading to a positive Singularity? Or does each such project have a P*0.9 probability of causing an existential disaster, where P is the probability of disaster for a “hacked together” project? Or something else?
Sorry for the ambiguity. I meant P*0.9.
You said “I therefore consider the above-mentioned approach more effective.” But if all you’re claiming is that the above-mentioned approach (“a well-reasoned statistical approach with good software engineering methodologies”) has a P*0.9 probability of causing an existential disaster, and not that it has a significant chance of causing a positive Singularity, then why do you think funding such projects is effective for reducing existential risk? Is the idea that each such project would displace a “hacked together” project that would otherwise be started?
EDIT: I originally misinterpreted your post slightly, and corrected my reply accordingly.
Not quite. The hope is that such a project will succeed before any hacked-together project does. More broadly, the hope is that partial successes achieved with principled methodologies will lead to their wider adoption across the AI community as a whole, and, more to the point, that a contingent of highly successful AI researchers advocating Friendliness can change the overall mindset of the field.
The default is a hacked-together AI project. SIAI’s FAI research is trying to displace this default, but I don’t think it will succeed (my information on this is purely outside-view, however).
An explicit instantiation of some of my calculations:
SIAI approach: 0.1% chance of replacing P with 0.1P
Approach that integrates with the rest of the AI community: 30% chance of replacing P with 0.9P
In the first case, the expected risk stays essentially at P; in the second, it drops to 0.97P.
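To spell out the expected-value arithmetic behind those figures (a quick sketch, assuming a failed attempt leaves the default risk P unchanged):

$$\text{SIAI approach: } 0.001 \cdot 0.1P + 0.999 \cdot P = 0.9991P \approx P$$
$$\text{Integrative approach: } 0.3 \cdot 0.9P + 0.7 \cdot P = 0.97P$$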