My take on some of the items on this list:
Lack of Intelligence: Very likely
Slow take-off AI: Very likely
Self-Supervised Learning AI: Likely
Bounded Intelligence AI: Likely
Far far away AI: Likely
Personal Assistant AI: Close to 100% certain
Oracle AI: Likely
Sandboxed Virtual World AI: Likely
The Age of Em: Borderline certain
Multipolar Cohabitation: Borderline certain
Neuralink AI: Borderline certain
Human Simulation AI: Likely
Virtual zoo-keeper AI: Likely
Coherent Extrapolated Volition AI: Likely
Partly aligned AI: Very likely
Transparent Corrigible AI: Borderline certain
In total, I think the most probable scenario is a very, very slow take-off rather than a Singularity, because AGI will be hampered by Lack of Intelligence and slowed down by countless corrections, sandboxing, and the ubiquity of LAI. In effect, by the time we have something approaching true AGI, we will long since have become a culture of cyborgs and LAIs, and the arrival of AGI will be less a Singularity than the fuzzy pinnacle of a long, hard, bumpy, and mostly uneventful process.
In fact, I would claim that we will never reach a point where we can agree: “yep, AGI is finally achieved.” I rather envision us tinkering with AI, making it painstakingly more powerful and efficient in tiny incremental steps, until we settle for “eh, this Artificial Intelligence is General enough, I guess.”
In my view, the true danger does not come from achieving AGI and having it turn on us, but from building a stupid, buggy, yet powerful LAI, giving it too much access, and having it trigger a global catastrophe by accident rather than out of conscious malice.
It’s less “Superhuman Intelligence got access to the nuclear codes and decided to wipe us out” and more “Dumb-as-a-brick LAI got access to the nuclear codes and wiped us out due to a simple coding error.”
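To make that failure mode concrete, here is a minimal, purely hypothetical sketch (every name, function, and threshold below is invented for illustration) of how a one-character bug in a dumb automation layer could produce exactly this kind of accident, with no malice anywhere in the system:

```python
# Hypothetical sketch: a "dumb as a brick" automation layer guarding a
# critical action. Names and thresholds are invented for illustration.

THREAT_THRESHOLD = 0.99  # intended: escalate only on near-certain threats


def should_escalate(threat_score: float) -> bool:
    # BUG: '<' was written where '>' was intended. The system now
    # escalates on every benign reading and stays silent on real threats.
    return threat_score < THREAT_THRESHOLD


# A routine, harmless sensor reading...
if should_escalate(0.01):
    # ...fires the response anyway. Nothing here "decided" anything.
    print("ESCALATING: launching automated response")
```

The point of the sketch is that this failure requires no intelligence at all, only access: a trivial inverted comparison is enough once the system is wired to something that matters.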