It depends on what you mean by “badly done”. If it’s “good, but not good enough”, 99%. (It’s possible for an AI that hasn’t been carefully designed for invariant-preserving self-modification to nevertheless choose an invariant that we’d consider nice. It’s just not very likely.)
Hours: vanishingly small. Days: 5%. Less than 5 years: 90%. (I believe that the bottleneck would be constructing better hardware. You could always try to eat the Internet, but it wouldn’t be very tasty.)
More.
Yes—mostly because true existential risks are few and far between. There are only a few good ways to thoroughly smash civilization (e.g. global thermonuclear war, or doomsday asteroids, which we'd see coming).
No. This is essentially asking for a very hard problem that almost, but not quite, requires the full capability of human intelligence to solve. I suspect that, like chess and Jeopardy and Go, every individual very hard problem can be attacked with a special-case solution that doesn’t resemble human intelligence. (Even things like automated novel writing/movie production/game development. Something like perfect machine translation is trivial in comparison.) And of course, the hardest problem we know of—interacting with real humans for an unbounded length of time—is just the Turing test.
2060 (10%), 2110 (50%), 2210 (90%).