Is super-intelligent AI necessarily AGI (for this amazing future), or can it be ANI?
I.e., why accept all of the workarounds that pursuing AGI forces on us? With ANI, don’t we already have safety, alignment, corrigibility, reliability, and superhuman ability today?
Eugene
How are you defining “super-intelligent”, “AGI”, and “ANI” here?
I’d distinguish two questions:
Pivotal-act-style AI: How do we leverage AI (or some other tech) to end the period where humans can imminently destroy the world with AI?
CEV-style AI: How do we leverage AI to solve all of our problems and put us on a trajectory to an ~optimal future?
My guess is that a successful pivotal-act AI will need to be AGI, though I’m not highly confident of this. By “AGI” I mean “something that’s doing qualitatively the right kind of reasoning to be able to efficiently model physical processes in general, both high-level and low-level”.
I don’t mean that the AGI that saves the world necessarily actually has the knowledge or inclination to productively reason about arbitrary topics—e.g., we might want to limit AGI to just reasoning about low-level physics (in ways that help us build tech to save the world), and keep the AGI from doing dangerous things like “reasoning about its operators’ minds”. (Similarly, I would call a human a “general intelligence” insofar as they have the right cognitive machinery to do science in general, even if they’ve never actually thought about physics or learned any physics facts.)
In the case of CEV-style AI, I’m much more confident that it will need to be AGI, and I strongly expect it to need to be aligned enough (and capable enough) that we can trust it to reason about arbitrary domains. If it can safely do CEV at all, then we shouldn’t need to restrict it—needing to restrict it is a flag that we aren’t ready to hand it such a difficult task.