I should hope not! If that happens, it means the person who made the breakthrough released it to the public. That would be a huge mistake, because it would greatly increase the chances of an unfriendly AI being built before a friendly one.
Two points:
It seems very likely to me that there’s a string of breakthroughs which will lead to AGI, and that it will gradually become clear to people that they should be thinking about friendliness issues.
Even if there’s a single crucial breakthrough, I find it fairly likely that the person who makes it will not have friendliness concerns in mind.
That’s only because you said it in public and aren’t willing to appear inconsistent. You still haven’t decomposed this into manageable pieces with numbers. And since we’ve already seen that you wrote the bottom line first, we would have strong reason to not trust those numbers if you did.
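For concreteness, here is a minimal sketch of the kind of decomposition being asked for, written as a chain of conditional factors. Every factor name and number in it is a hypothetical placeholder, not an estimate anyone in this exchange has made.

```python
# Illustrative decomposition of an overall probability estimate into
# named factors. Every factor and number is a made-up placeholder,
# not an estimate endorsed by anyone in this exchange.
factors = [
    ("AGI is developed this century", 0.5),
    ("its development hinges on a single crucial breakthrough", 0.2),
    ("whoever makes that breakthrough ignores friendliness", 0.5),
]

p_overall = 1.0
for name, p in factors:
    p_overall *= p  # each factor read as conditional on the ones before it
    print(f"{name}: {p}")

print(f"combined estimate: {p_overall:.3f}")  # 0.050 with these placeholders
```

The product is only as meaningful as the factors fed into it, which is precisely what the calibration objection below disputes.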
I believe that the human brain is extremely poorly calibrated at determining probabilities through the explicit process that you describe, and that its intuition is often more reliable for such purposes. My attitude is in line with Holden’s comments 14 and 16 on the GiveWell Singularity Summit thread.
In line with the last two paragraphs of one of my earlier comments, I find it disturbing how quickly you assume that my thinking on these matters stems from motivated cognition. Of course, I may be exhibiting motivated cognition, but the same is true of you, and your ungrounded confidence in your superiority to me is truly unsettling. As such, I will cease to communicate further with you unless you resolve to stop confidently asserting that I’m exhibiting motivated cognition.
P(SIAI will be successful) may be smaller than 10^-(3^^^^3)!
I don’t think that’s the right way to escape from a Pascal’s mugging. In the case of the SIAI, there isn’t really clear evidence that the organisation is having any positive effect, let alone SAVING THE WORLD. When the benefit could plausibly be small, zero, or indeed negative, one does not need to invoke teeny tiny probabilities to offset it.
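To spell out the arithmetic behind this objection: the sketch below uses purely hypothetical probabilities and payoffs, chosen only to show that when the benefit itself may be small, zero, or negative, the expected value can come out modest without appealing to astronomically small probabilities.

```python
# Expected-value arithmetic behind the objection above.
# All probabilities and payoffs are illustrative placeholders.

def expected_value(scenarios):
    """Sum of probability-weighted payoffs over mutually exclusive scenarios."""
    return sum(p * payoff for p, payoff in scenarios)

# Pascal's-mugging framing: a tiny probability times an astronomical payoff.
print(expected_value([(1e-20, 1e30)]))  # 1e10, driven entirely by the payoff

# The framing in the comment above: the effect itself is uncertain in sign
# and size, so no exotic probabilities are needed for the result to be modest.
print(expected_value([
    (0.2,  1.0),   # some positive effect
    (0.6,  0.0),   # no effect
    (0.2, -1.0),   # negative effect
]))  # 0.0 with these placeholders
```

In the second calculation the work is done by the sign and size of the assumed effect, not by the smallness of any probability.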
Upvoted twice for the “Two points”. Downvoted once for the remainder of the comment.
Well, actually, I’m pretty sure the second point has a serious typo. Maybe I should flip that vote.