Thanks. The assertiveness was deliberate; I wanted to take the perspective of someone in a post-AGI world saying, “Of course it worked out this way!” In our time, we can’t be as certain; the narrator is suffering from a degree of hindsight bias.
There were a couple of fake breakthroughs in there (though maybe I glossed over them more than I ought?). Specifically: the bootstrapping from a given model to a more accurate one by looking for its implications and checking alternatives (as stated, this is very close to the self-play that helped build AlphaGo, but making it work with a full model of the real world would require substantial further work), and the solution to AI alignment via machine learning with multiple agents seeking to model each other's values more accurately (which I suspect might actually work, but which is purely speculative).
I can’t say I remember noticing either one of them being listed; perhaps I glossed over them myself by remembering things as going the easy way?
I do think that learning to be more accurate by checking implications and checking alternatives is absolutely necessary for high-level general intelligence, unless you want to count brute-force checking every possible state of the universe as intelligent. Bootstrapping seems very necessary for getting there from where we are now.
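To make the "implications and alternatives" loop a bit more concrete, here is a deliberately tiny toy sketch of my own (it is purely illustrative, not the mechanism from the story, and every name in it is made up): start with a crude model, treat its predictions on new situations as implications, check them against observations, and keep whichever nearby alternative model survives the check best.

```python
# Toy illustration only: "bootstrap" a model by checking its implications
# (predictions on new inputs) against observations and against nearby
# alternative models, keeping whichever candidate fits best.
import random

def observe(x):
    # Stand-in for the real world: the "true" law the models try to capture.
    return 3 * x + 1

def make_candidates(model, n=20):
    # Alternatives to check: small perturbations of the current model,
    # plus the current model itself.
    a, b = model
    return [(a + random.uniform(-1, 1), b + random.uniform(-1, 1)) for _ in range(n)] + [model]

def implication_error(model, xs):
    # An "implication" of a linear model (a, b) is its prediction a*x + b;
    # checking implications means comparing those predictions to observations.
    a, b = model
    return sum((a * x + b - observe(x)) ** 2 for x in xs)

model = (0.0, 0.0)  # crude starting model
for step in range(50):
    xs = [random.uniform(-10, 10) for _ in range(8)]  # new situations to test against
    model = min(make_candidates(model), key=lambda m: implication_error(m, xs))

print(model)  # drifts toward (3, 1)
```

Scaling this up from fitting two numbers to a full model of the real world is exactly the "substantial further work" I mentioned above.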
Honestly, if it isn’t self-reflective, I view it as an ordinary algorithm.