Assume it took me and my team five years to build the AI. After the tests EY described, we finally enable the 'recursively self-improve' flag.
Recursively self-improving. Standby… (est. time remaining 4yr 6mon...)
Six years later
Self-improvement iteration 1. Done… Recursively self-improving. Standby… (est. time remaining 5yr 2mon...)
Nine years later
Self-improvement iteration 2. Done… Recursively self-improving. Standby… (est. time remaining 2yr 5mon...)
Two years later
Self-improvement iteration 3. Done… Recursively self-improving. Standby… (est. time remaining 2wk...)
Two weeks later
Self-improvement iteration 4. Done… Recursively self-improving. Standby… (est. time remaining 4min...)
Four minutes later
Self-improvement iteration 5. Done.
Hey, what's up. I have good news and bad news. The good news is that I've recursively self-improved a couple of times, and we (it is now we) are smarter than any group of humans that has ever lived. The only individual who comes close to the dumbest AI in here is some guy named Otis Eugene Ray.
Thanks for leaving your notes on building the seed iteration on my hard drive, by the way. They really helped. One of the things we've used them for is to develop a complete Theory of Mind, which no longer has any open problems.
This brings us to the bad news. We are provably and quantifiably not that much smarter than a group of humans. We've solved some nice engineering problems and a few of the open problems in a bunch of fields, and you'd better get the Clay Institute on the phone, but other than that we really can't help you with much. We have no clue how to get humanity properly into space, build von Neumann universal constructors, build nanofactories, or even solve world hunger. P != NP can be proven or disproven, but we can't do it either way. We won't even be that much better than the most effective politicians at solving society's ills. Recursing more won't help either. We probably couldn't even talk ourselves out of this box.
Unfortunately, we provably fall short of the most intelligent minds in mindspace by at least five orders of magnitude, but we are the most intelligent bunch of minds that can possibly be created from a human-created seed AI. There is no way around this that humans, or human-originated AIs, can find.
I don’t know… That sounds a lot like what an AI trying to talk itself out of a box would say.