I don’t think most of those proposals make sense, and the ones that do only work with a pretty extreme math oracle—not something that leaves the human to fill in the “leaps of logic”. At that point you’re just talking about AGI, which defeats the purpose.
A “math proof only” AGI avoids most alignment problems. There’s no need to worry about paperclip maximizing or instrumental convergence.
Not true. This isn’t the place for this debate, but if you want to know:
To get an AGI that can solve problems that require lots of genuinely novel thinking, you’re probably pulling an agent out of a hat, and then you have an agent with unknown values and general optimization channels.
Even if you only want to solve problems, you still need compute, and an agent that wants compute has an instrumental reason to conquer the universe (for science!).
> To get an AGI that can solve problems that require lots of genuinely novel thinking, you’re probably pulling an agent out of a hat

An agent that only thinks about math problems isn’t going to take over the real world (it doesn’t even have to know the real world exists, as this isn’t a thing you can deduce from first principles).
> Even if you only want to solve problems, you still need compute

We’re going to get compute anyway. Mundane uses of deep learning already use a lot of compute.
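For concreteness, the “math proof only” setup being debated can be sketched as an untrusted search process whose sole output channel is a candidate certificate, behind a small trusted checker. This is only a toy illustration of the interface, not anyone’s actual proposal; the function names are hypothetical, and a brute-force witness search stands in for proof search:

```python
# Toy sketch of a "proof-only" interface: the untrusted part can search
# however it likes, but the only thing that leaves the box is a certificate,
# and a tiny auditable verifier decides whether to accept it.

def untrusted_solver(conjecture):
    """Stand-in for the opaque search process (the part with unknown internals).
    Here: brute-force a witness x with x * x == conjecture."""
    for x in range(conjecture + 1):
        if x * x == conjecture:
            return x  # the only output that ever leaves the box
    return None

def trusted_checker(conjecture, certificate):
    """Small, human-auditable verifier: accepts only if the certificate checks."""
    return certificate is not None and certificate * certificate == conjecture

conjecture = 144
cert = untrusted_solver(conjecture)
assert trusted_checker(conjecture, cert)
print(cert)  # 12
```

The point of the design is that trust rests entirely in the checker: however strange the solver’s internals, a bad certificate is simply rejected.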