EDIT: I originally misinterpreted your post slightly, and corrected my reply accordingly.
Not quite. The hope is that such a project will succeed before any other hacked-together project does. More broadly, the hope is that partial successes using principled methodologies will lead to those methodologies being more widely adopted in the AI community as a whole, and, more to the point, that a contingent of highly successful AI researchers advocating Friendliness can change the overall mindset of the field.
The default is a hacked-together AI project. SIAI’s FAI research is trying to displace this, but I don’t think they will succeed (my information on this is purely outside-view, however).
An explicit instantiation of some of my calculations:
SIAI approach: 0.1% chance of replacing P with 0.1P
Approach that integrates with the rest of the AI community: 30% chance of replacing P with 0.9P
In the first case, P stays essentially constant (an expected 0.9991P); in the second, it is replaced with an expected 0.97P.
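The expected multipliers above can be checked with a few lines of arithmetic. This is just a minimal sketch of the expected-value calculation, assuming that failure leaves P unchanged and that the two scenarios are as stated:

```python
# Expected multiplier on P for an intervention that succeeds with
# probability p_success, scaling P by m on success and leaving P
# unchanged on failure.
def expected_multiplier(p_success: float, m: float) -> float:
    return (1 - p_success) * 1.0 + p_success * m

siai = expected_multiplier(0.001, 0.1)        # 0.1% chance of 0.1P
integrated = expected_multiplier(0.30, 0.9)   # 30% chance of 0.9P

print(siai)        # ≈ 0.9991
print(integrated)  # = 0.97
```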