Objection 1 seems really strong. The kinds of problems an AGI would be better at than non-general intelligences are those with ambiguity. If it were just a constraint-solver, it wouldn't be a threat in the first place.
Similarly, with such a restricted output channel, there’s little-to-no point in making it have agency to begin with. We’re deep in ‘tool AI’ territory. The incentives to leave this territory would remain.
Thanks. Those points are correct. Is there any particular weakness or strength to this UP-idea in contrast to Oracle, tool-AI, or Gatekeeper ideas?
Seems like your usual ‘independently verify everything the AI says’ concept, only way more restrictive.
Sure, but to “independently verify” the output of an entity smarter than you is generally impossible. This scheme makes that verification possible, while also limiting the boxed AI's latitude in choosing its answers.
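To make the asymmetry concrete (the toy SAT framing below is my own illustration, not part of the original proposal): if the boxed AI's output is restricted to a candidate solution for an explicitly stated constraint set, checking that answer is cheap and requires no trust in the solver, even though producing it may have required far more capability than the checker has.

```python
# Hypothetical sketch: verifying a candidate answer against explicit constraints
# (here, a CNF formula) is cheap, even when finding that answer is hard.

def verify_cnf(clauses, assignment):
    """Check a truth assignment against a CNF formula.

    clauses: list of clauses, each a list of signed ints (3 means x3,
             -3 means NOT x3), following DIMACS conventions.
    assignment: dict mapping variable index -> bool.
    """
    for clause in clauses:
        # A clause is satisfied if at least one of its literals is true.
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # clause unsatisfied; reject the untrusted answer
    return True

# Example: (x1 OR NOT x2) AND (x2 OR x3)
clauses = [[1, -2], [2, 3]]
proposed = {1: True, 2: True, 3: False}  # answer supplied by the untrusted solver
print(verify_cnf(clauses, proposed))  # True: verification needs no trust in the solver
```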
‘Generally impossible’. Well… only in the sense that ‘general’ means ‘in every conceivable case’. But that just means you miss out on some things.
This is not the reason that tool AI would likely fail.