(If you work for a company that’s trying to develop AGI, I suggest you don’t publicly answer this question lest the media get ahold of it.)
(Let’s assume you’ve “aligned” this AGI and done significant sandbox testing before you let it loose with its first task(s). If you’d like to change or add to these assumptions for your answer, please spell out how.)
Possible answers:
(1) Figure out how to perform a unilateral "pivotal act" to keep any other AGI from coming online, and then get my approval before doing it (unless there's no time, in which case just do it)
(2) Figure out how to get people/governments on board to perform a multilateral "pivotal act," and then do it
(3) Do (2) first, then (1) if (2) isn't possible
(4) Prepare to police the world against potential "bad" AGIs as they come online, doing this lawfully
(5) Prepare to police the world against potential "bad" AGIs as they come online, doing this by illegal means if necessary, including illegal mass surveillance
(6) Figure out how to align ASIs
(7) Figure out how to get to ASI as quickly as possible
(8) Figure out how to stop aging
(9) Figure out how to save as many human life-years as possible (such as by curing malaria, COVID, cancer, etc.)
(10) Figure out fusion energy
(11) Room temperature superconductors, if you please
(12) Solve water scarcity
(13) Figure out how to make humanoid robots so we don't have to work anymore
(14) Figure out how to raise people's ethics on a massive scale
(15) Figure out how to help on a massive scale with mental health
(16) Figure out how to connect our brains to computers for intelligence augmentation/mind uploading
(17) Figure out how to make us an interplanetary species
(18) Make the world robust against engineered pandemics
(19) Make the world robust against nuclear war
(20) Figure out how to put an end to factory farming as quickly as possible
(21) Other? _______________
If you think things should be done concurrently, your answer should be in the form of, for example: “(1) 90%, (6) 9%, (10) 1%.”
If you want things done sequentially and concurrently, an example answer would be: “(1) 100%, then (8) 100%, then (9) 50% and (21) 50% (Other: “help me win my favorite video game”).”
You can also give answers such as "do (8) first unless it looks like it'll take more than a year, then do (9) first until I say switch to something else." I'd suggest not getting too detailed or complicated with your answers, though; I'm not going to hold you to them!
There's a somewhat similar question I found on Reddit that might give you some other ideas.
First? Swing low, see how it performs, especially with a long-term project. Something low-stakes. Maybe something like a populated immersive game world. See what comes from there. Is it stable? Is it sane? Does it keep to its original parameters? What are the costs of running the agent/system? Can it solve social alignment problems?
Heck, test out some theories for some of your other answers in there.
Thank you for the comment. I think all of what you said is reasonable. I see now that I probably should’ve been more precise in defining my assumptions, as I would put much of what you said under “…done significant sandbox testing before you let it loose.”
I kind of think of this as more than sandbox testing. There is a big difference between how a system works in laboratory conditions and how it works when it encounters the real world. There are always things that we can't foresee. As a software engineer, I have seen systems that work perfectly fine in testing, but once you add a million users, the wheels start to fall off.
I expect that AI agents will be similar. As a result, I think it would be important to start small; unintended consequences are the default. I would much rather have an AGI system try to solve small local problems before moving on to bigger ones that are harder to accomplish. Maybe find a way to address the affordable housing problem here. If it does well, then consider scaling up.