So, given that you know where values come from, do you know what it looks like to have a deeply corrigible strong mind, clearly enough to make one? I don’t think so, but please correct me if you do. Assuming you don’t, I suggest that understanding what values are and where they come from in a more joint-carving way might help.
Yes, understanding values better would help. The case I’ve made elsewhere is that we can use cybernetics as the basis for this understanding. Hence my comment: if you don’t know where values come from, I can offer what I believe is a model that answers where values ultimately come from and gives a good basis for building up a more detailed model of them. Others are doing the same with compatible models, e.g. predictive processing.
I’ve not thought deeply about corrigibility recently, but my thinking on outer alignment more generally has been that, because Goodhart is robust, we cannot hope to get fully aligned AI by any method that relies on measuring alignment; that leaves us with building AI whose goals are already aligned with ours. (It seems quite likely we’ll bootstrap to this via AI that helps us build it, so work on imperfect systems seems worthwhile, but I’ll ignore that here.) I expect the same situation for building just corrigibility.
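To make the Goodhart point concrete, here’s a toy sketch (the objective, proxy, and numbers are all illustrative, not anything I’m claiming about real systems): the proxy agrees with the true goal under weak optimization, but hard optimization of the proxy drives the true value off a cliff.

```python
# Toy Goodhart sketch: optimizing a measurable proxy that merely correlates
# with the true goal. All functions and constants here are illustrative.
import random

def true_value(x: float) -> float:
    # What we actually care about: peaks at x = 1, falls off beyond it.
    return x - 0.5 * x ** 2

def proxy(x: float) -> float:
    # What we can measure: tracks true_value for small x,
    # but keeps rewarding ever-larger x.
    return x

def optimize(metric, steps: int = 10_000, step_size: float = 0.1) -> float:
    # Naive hill-climbing on whichever metric we hand it.
    x = 0.0
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if metric(candidate) > metric(x):
            x = candidate
    return x

if __name__ == "__main__":
    x_star = optimize(proxy)
    print(f"proxy-optimal x = {x_star:.2f}")
    print(f"true value there = {true_value(x_star):.2f}")   # strongly negative
    print(f"true value at x = 1: {true_value(1.0):.2f}")    # the actual optimum
```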
So to build a corrigible AI, my model says we need to find a configuration of negative feedback circuits that implements a corrigible process. That doesn’t constrain the search space much, but it constrains it some, and it makes clear that what we have is an engineering challenge rather than a theory challenge. I see this as advancing the question from “where do values come from?” to “how do I build a thing out of feedback circuits that has the values I want it to have?”.
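To gesture at what I mean by building a thing out of feedback circuits, here’s a minimal toy sketch (everything in it is illustrative, not a proposal): a “value” is a setpoint held by a negative feedback loop, and corrigibility shows up as the operator being able to rewrite that setpoint from outside the loop, with the loop then tracking the new one.

```python
# Toy cybernetic sketch: a value as a setpoint maintained by negative feedback,
# with an external "correction" hook. The controller and numbers are illustrative.
class FeedbackLoop:
    def __init__(self, setpoint: float, gain: float = 0.5):
        self.setpoint = setpoint   # the "value" the system acts to maintain
        self.gain = gain
        self.state = 0.0

    def correct(self, setpoint: float) -> None:
        # Corrigibility hook: an operator overwrites the setpoint,
        # and the loop simply starts regulating toward the new one.
        self.setpoint = setpoint

    def step(self) -> float:
        error = self.setpoint - self.state   # sense the discrepancy
        self.state += self.gain * error      # act to reduce it (negative feedback)
        return self.state

if __name__ == "__main__":
    loop = FeedbackLoop(setpoint=10.0)
    for _ in range(20):
        loop.step()
    print(f"tracking 10.0: state = {loop.state:.2f}")

    loop.correct(setpoint=3.0)               # operator changes the "value"
    for _ in range(20):
        loop.step()
    print(f"tracking 3.0: state = {loop.state:.2f}")
```

Obviously a corrigible mind is not a thermostat; the point is only that in this picture the engineering question becomes which circuits hold which setpoints and who gets to change them.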