Yeah, many people, like the majority of users on this forum, have decided not to build AGI.
Not to build AGI yet. Many of us would love to build it as soon as we can be confident we have a realistic and mature plan for alignment, but that’s a problem that’s so absurdly challenging that even if aliens landed tomorrow and handed us the “secret to Friendly AI”, we would have a hell of a time trying to validate that it actually was the real thing.
If you are faced with a math problem where you could be staring at the answer and still have no way to unambiguously verify it, you are probably not capable of solving the problem until you somehow close the inferential distance separating you from understanding it. Assuming the problem is solvable at all.