Impressive prospective work. Both scenarios are frightening, even though one is worse than the other. The trajectory seems unstoppable, and even if superintelligent AGI doesn’t arrive in 2027-2030 but in 2040 or 2050, the feeling isn’t very different. I have young children, and while I’m not really worried for myself, I worry a great deal for them. It was cool when it was just sci-fi. It was still fun when we first played with ChatGPT. It doesn’t look fun anymore, at all. My own thinking is that we’re indeed locked into a two-option scenario, probably not that fast, probably not with exactly the same narrative, but with two possible global endings that look like attractors (https://en.wikipedia.org/wiki/Attractor).