And the point of the session was to see if people have ideas for how to do less naive experiments that would allow us to increase our confidence that an SHF-scheme would yield safe generalization to these more difficult decisions.
Did anyone have ideas for this? My thinking is that you have to understand or make some assumptions about the nature of TDMP in order to have confidence about safe generalization, because if you just treat it as a black box, then it might be that for some class of queries it will do something that can’t be approximated by SHF-schemes. No matter how you test, you can only conclude that, if such queries exist, they are not in the test sets you used.
Or was the discussion more about, assuming we have theoretical reasons to think that SHF-schemes can approximate TDMP, how to test it empirically?
Regarding the question of how to do empirical work on this topic: I remember there being one thing which seemed potentially interesting, but I couldn’t find it in my notes (yet).
RE the rest of your comment: I guess you are taking issue with the complexity theory analogy; is that correct? An example hypothetical TDMP I used is “arbitrarily long deliberation” (ALD), i.e. a single human is allowed to take as long as they want to make the decision (I don’t think that’s a perfect “target” for alignment, but it seems like a reasonable starting point). I don’t see why ALD would (even potentially) “do something that can’t be approximated by SHF-schemes”, since those schemes still have the human making a decision.
“Or was the discussion more about, assuming we have theoretical reasons to think that SHF-schemes can approximate TDMP, how to test it empirically?” <-- yes, IIUC.
“I don’t see why ALD would (even potentially) ‘do something that can’t be approximated by SHF-schemes’, since those schemes still have the human making a decision.”

Suppose there’s a cryptographic hash function H inside a human brain whose algorithm is not introspectively accessible, and some secret state S which is also not introspectively accessible. In each period, the human can choose to run S|Output := H(S|Input) and observe/report Output, so we can ask ALD: what’s Output if you iterate H n times, with X as the initial Input, updating S each time? (I can try to clarify if it’s not clear what I mean.) I think this can’t be approximated by SHF-schemes, because there’s no way to train ML to approximate H to serve as the baseline agent.
So what is this an analogy for? I think H could stand for human philosophical deliberation, and S for any introspectively inaccessible information in our brain that might go into and be changed by such deliberation.
Yes, please try to clarify. In particular, I don’t understand your “|” notation (as in “S|Output”).
I realized that I was a bit confused in what I said earlier. I think it’s clear that (proposed) SHF schemes should be able to do at least as well as a human, given the same amount of time, because they have a human “on top” (as “CEO”) who can simply ignore all the AI helpers(/underlings).
But now I can also see an argument for why SHF couldn’t do ALD, if it doesn’t have an arbitrarily long time to deliberate: there would need to be some parallelism/decomposition in SHF, and that might not work well/perfectly for all problems.
“|” meant concatenation, so “S|Output := H(S|Input)” means you set S to the first half of H(S|Input), and Output to the second half of H(S|Input).
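To make this concrete, here’s a toy sketch in Python, with SHA-256 standing in for H and its 32-byte digest split into a new S and an Output; since I only specified the initial Input, feeding each Output back in as the next period’s Input is just one arbitrary way to fill that in:

```python
# Toy sketch of "S|Output := H(S|Input)", with SHA-256 standing in for the
# introspectively inaccessible H. The 32-byte digest is split into a new
# 16-byte secret state S and a 16-byte Output. (Assumption for illustration:
# each period's Output is fed back in as the next period's Input.)

import hashlib


def H(data: bytes) -> bytes:
    """Stand-in for the brain-internal hash function (32-byte digest)."""
    return hashlib.sha256(data).digest()


def ald_query(S: bytes, X: bytes, n: int) -> bytes:
    """The question posed to ALD: the Output after iterating H n times,
    starting from secret state S and initial Input X, updating S each period."""
    inp = X
    output = b""
    for _ in range(n):
        digest = H(S + inp)                   # H(S|Input), "|" = concatenation
        S, output = digest[:16], digest[16:]  # S|Output := H(S|Input)
        inp = output                          # assumed: Output becomes next Input
    return output


# The answer depends on the secret S (and on H itself), so a model trained only
# on the human's observable behavior has no way to reproduce it.
print(ald_query(S=b"secret-initial-S", X=b"some-query-X", n=1000).hex())
```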
OK, so it sounds like your argument for why SHF can’t do ALD is (a specific, technical version of) the same argument that I mentioned in my last response. Can you confirm?
I’m not sure. It seems like my argument applies even if SHF did have an arbitrarily long time to deliberate?
Aha, OK. So I either misunderstand or disagree with that.
I think SHF schemes (at least most examples) have the human as “CEO” with AIs as “advisers”, and thus the human can choose to ignore all of the advice and make the decision unaided.