Thanks. Those points are correct. Is there any particular weakness or strength to this UP-idea in contrast to Oracle, tool-AI, or Gatekeeper ideas?
Seems like your usual ‘independently verify everything the AI says’ concept, only way more restrictive.
Sure, but to “independently verify” the output of an entity smarter than you is generally impossible. This makes it possible, while also limiting the potential of the boxed AI to choose its answers.
‘Generally impossible’. Well… only in the sense that ‘general’ means ‘in every conceivable case’. But that just means you miss out on some things.
This is not the reason that tool AI would likely fail.