What—ideally—should young and intelligent people do?

This is my first post. I’m 21. From what I understand, fluid intelligence rises until around age 26 and then slowly declines, so I’m in a great position right now to contribute positively to humanity.

I feel the need, at least for now, to devote my life to something that I think actually matters to humanity and helps and/or saves as many humans as possible.

I think it might be a good idea to start with my basic point of view: I want as many humans as possible to survive and to live genuinely happy, satisfying, healthy, and fulfilling lives. I’ve heard arguments that humans have zero free will; I haven’t thought deeply about this, but I’m leaning towards believing it. And if no one truly chooses their actions, then no one can deserve less happiness because of them. So I think everyone deserves to be genuinely happy, satisfied, healthy, and fulfilled. Yes, even criminals: if there’s zero free will, they didn’t choose to commit their crimes, so they deserve to be very happy just like the “good” people do.

My question is, what should someone in my position do? I’m hearing arguments that perhaps humanity should stop AGI development altogether. This isn’t an arms race anymore; it’s a suicide race, as many have said.

But then we naturally face an extremely important question: is it even reasonable to expect humanity to stop developing AGI? Getting all countries and groups on Earth to agree to stop might already be too difficult, for all I know. Actually enforcing that agreement is another thing entirely: airstrikes to destroy places where AGI is being developed? How long could this “airstrike” tactic actually work? And I’m assuming here that it can work in the first place, which might be incorrect.

If airstrikes stop being an effective anti-AGI-development tool for whatever reason, could we invent another tool that is effective? And how long until some country or group figures out a way around that one? Could a government or other group build AGI in secret? Isn’t it only a matter of time until that happens?

Also, I’m pretty ignorant about this: how much could the people at OpenAI, DeepMind, and Anthropic be hiding? What does this tiny group of elites know that we don’t? Is it possible that they have already agreed to create AGI and to use it to wipe out 99.9+% of humans?

Or, on the opposite extreme: have the people at these companies agreed to stop developing AGI completely? Are they just trying to make sure that nobody in the world develops AGI at this point, including themselves? And are they only doing that, or are they also trying to figure out how to build benevolent and safe AGI?

Another thought I’ve had: is it possible that humans should never build AGI at all, because even if we develop perfectly aligned and benevolent AGI, humanity could still be exterminated by a bug or by this perfect AGI falling into the wrong hands?

If that thought is right, it seems that our goal as a species should be to never develop AGI at all and to actively stop and/or prevent anyone from doing so.

So, back to my original question: what should youngsters who are still on the rising part of the fluid-intelligence curve do now? Try to get into one of these extremely selective and powerful AGI-building companies and convince everyone there to turn it from an AGI-building company into an anti-AGI-development company that tries to make sure AGI is never built? Or try to get into one of these companies and work on alignment?

Or have these companies already agreed, perhaps secretly, to stop attempting to develop AGI, and maybe even to stop working on alignment, because they have concluded that even an aligned AGI has too high a chance of spelling the extinction of humanity? In that case, would it be a waste of my life to try to get into these companies? Should I just work on something else that could be useful for humanity, like neuroscience, mathematics, and/or physics?

I just wonder what the best use of the next decade+ of my life, and of the lives of those similar to me, really is now. There are many, many unknowns, obviously. I have many more thoughts about all this, but I’m tempted to just publish this now.