Not true. This isn’t the place for this debate, but if you want to know:
To get an AGI that can solve problems that require lots of genuinely novel thinking, you’re probably pulling an agent out of a hat, and then you have an agent with unknown values and general optimization channels.
Even if you only want to solve problems, you still need compute, and therefore wish to conquer the universe (for science!).
An agent that only thinks about math problems isn’t going to take over the real world (it doesn’t even have to know the real world exists, as this isn’t a thing you can deduce from first principles).
Even if you only want to solve problems, you still need compute
We’re going to get compute anyway. Mundane uses of deep learning already use a lot of compute.