Yeah, don’t do RL on it; instead, use it to make money for you (ethically) and at the same time ask it to think about how to create a safe/aligned superintelligent AGI. You may still need a big enough lead (to prevent others who do use RL from outcompeting you) or global coordination, but it doesn’t seem obviously impossible.
Pretty much. I also think this plausibly buys off the actors who are currently really excited about AGI. They can make silly money with such a system without the RL part—why not do that for a while, while mutually enforcing the “nobody kill everyone” provisions?