I feel like the thing that I’m hinting at is not directly related to QACI. I’m talking about a specific way to construct an AGI where we write down all of the algorithms explicitly, whereas the QACI part of QACI is about specifying an objective that is aligned when optimized very hard. It seems like, in the thing that I’m describing, you would get the alignment properties from a different place: you get them because you understand the algorithm of intelligence that you have written down very well. Whereas in QACI, you get the alignment properties by successfully pointing to the causal process that is the human in the world whom you want to “simulate” in order to determine the “actual objective”.
Just to clarify, when I say non-DUMB way, I mainly consider using giant neural networks and just making them more capable in order to get intelligent systems to be the DUMB way. And Tasman’s thing seems to be one of the least DUMB things I have heard recently. I can’t see how this obviously fails (yet), though of course that doesn’t necessarily imply that it will succeed (only that it might).