P(negative Singularity & badly done AGI) = 10%. P(negative Singularity | badly done AGI) ranges from 30% to 97%, depending on the specific definition of AGI. I’m not sure what ‘extremely negative’ means.
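Taken at face value, those two figures also bound P(badly done AGI) on its own, since P(negative Singularity & badly done AGI) = P(negative Singularity | badly done AGI) × P(badly done AGI). The line below is just that identity rearranged, not an additional estimate:

\[
P(\text{badly done AGI}) = \frac{P(\text{negative Singularity \& badly done AGI})}{P(\text{negative Singularity} \mid \text{badly done AGI})} = \frac{0.10}{0.97} \text{ to } \frac{0.10}{0.30} \approx 0.10 \text{ to } 0.33
\]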
‘Human level’ is extremely fuzzy. An AGI could be far above humans in terms of mind design but less capable due to inferior hardware or vice versa.
Vastly more.
Other risks, including nanotech, are more likely, though an FAI could obviously manage nanotech risks.
Since I dispute the phrase ‘human-level’, I’m going to answer this for a Singularity within 5 years. A solution to logical uncertainty is the milestone most likely, of anything I can think of, to result in a Singularity within 5 years, but even then I would not expect one, especially if the researchers were competent. Extreme interest from a major tech company or a government in the most promising approaches would be more likely to cause a Singularity within 5 years, but I doubt that fits the implied criteria for a milestone.
2025, 2040, never.