I mean, we have this thread with Paul directly saying “If all goes well you can think of it like ‘a human thinking a long time’”, plus Ajeya and Rohin both basically agreeing with that.
Agreed, but the thing you want to use this for isn’t simulating a long reflection, which will fail (in the worst case) because HCH can’t do certain types of learning efficiently.
Once we get past Simulated Long Reflection, there’s a whole pile of Things To Do With AI which strike me as Probably Doomed on general principles.
You mentioned using HCH to “let humans be epistemically competitive with the systems we’re trying to train”, which definitely falls in that pile. We have general principles saying that we should definitely not rely on humans being epistemically competitive with AGI; using HCH does not seem to get around those general principles at all. (Unless we buy some very strong hypotheses about humans’ skill at factorizing problems, in which case we’d also expect HCH to be able to simulate something long-reflection-like.)
Trying to be epistemically competitive with AGI is, in general, one of the most difficult use-cases one can aim for. For that to be easier than simulating a long reflection, even for architectures other than HCH-emulators, we’d need some really weird assumptions.