if it can quantum-simulate a human brain, then it can in principle decode things from it as well. the question is how to demand that it do so in the math that defines the system.
Why do you assume that we need to demand this be done in “the math that defines the system”?
I would assume we could have a discussion with this higher-ontology being to find a specification we're happy with, expressed in our ontologies, and that it can tell us we'll like, also in our ontologies.
A 5-year-old might not understand an adult’s specific definition of “heavy”, but it’s not too hard for it to ask for a heavy thing.
I don’t at all think that’s off the table for now! I don’t trust that it’ll stay on the table: if the adult has malicious intent, knowing what the child means isn’t enough, and it seems hard to know when this stops being viable without more progress. (for example, I doubt it’ll ever be a good idea to do that with an OpenAI model; they seem highly deceptively misaligned to most of their users. seems possible for it to be a good idea with Claude.) But the challenge is how to certify that the math does in fact say the right thing, so that it durably points to the ontology in which we want to preserve good things; at some point we have to actually understand some sort of specification that constrains what the stuff we don’t understand is doing to match what it seems to say in natural language.