2025/2040/2080, modulo a fair degree of uncertainty about that estimate (a great deal depends on implementation and on unknown details of cognitive science).
Roughly 30% for net negative consequences and 10% for extinction or worse, contingent on the existence of a singularity (note that this is apparently a different interpretation than XiXiDu’s), with details dependent on singularity type. My estimates would have been higher a couple of years ago, but the concerns behind friendly AI have become sufficiently well known that I view it as likely that major AI teams will be taking them properly into consideration by the time true AGI is on the table. Negative consequences are semi-likely thanks to goal-stability problems or subtle incompatibilities between human and machine implicit utility functions, but catastrophic consequences are only likely if serious mistakes are made. One important contributing factor is that I’m pretty sure a goal-unstable AI is far more likely to end up wireheading itself than tiling the world with anything, although the latter is still a possible outcome.
Can’t answer this with any confidence. The answer depends almost entirely on how tightly various aspects of intelligence are bounded by computational resources, which is a question cognitive science hasn’t answered with precision yet, as far as I know.
Somewhere between “little more” and “much more”—but I’d like to see the bulk of that support going into non-SIAI research. The SIAI is doing good work and could use more support, but even a 5% chance of existential consequences is way too important a topic for one research group to monopolize.
Don’t know. Not enough knowledge of other existential risks.
Several, the most basic being that I’d expect human-level AI to be developed within five years of the functional simulation of any reasonably large mammalian brain (the brute-force approach, in other words). I’d put roughly 50% confidence on human-level AI within five years if efficient algorithms for humanlike language acquisition or a similarly broad machine-learning problem are developed, but there are a lot more unknowns in that scenario.