The bit on AlphaGo Zero was great. I’ve read about what AGZ does on Wikipedia before, but didn’t entirely understand how it works. Reading your description, I got it immediately.
This process ends when the task is solved or when creating more copies of Rosa would not change the output anymore. HCH is a fixed point. HCH does not depend on the starting point (“What difficulty of task can Rosa solve on her own?”), but on the task difficulty the reasoning process (= breaking down tasks to solve them) can solve in principle.
[emphasis changed]. This seems speculative. How do you know that a hypothetical infinite HCH tree does not depend on the capabilities of the human?
You also sort of contradict this idea a bit later:
This is based on the intuition that at least some humans (such as Rosa) are above some universality threshold, such that if given sufficiently much time, they could solve any relevant task.
Hey, thanks for the question! And I’m glad you liked the part about AGZ. (I also found this video by Robert Miles extremely helpful and accessible to understand AGZ)
This seems speculative. How do you know that a hypothetical infinite HCH tree does not depend on the capabilities of the human?
Hm, I wouldn’t say that it doesn’t depend on the capabilities of the human. I think it does, but it depends on the type of reasoning they employ and not, e.g., on their working memory (to the extent that the general hypothesis of factored cognition holds, i.e. that we can successfully solve tasks by breaking them down into smaller tasks).
HCH does not depend on the starting point (“What difficulty of task can Rosa solve on her own?”)
The way to best understand this is maybe to think in terms of computation/time to think. What kind of tasks Rosa can solve obviously depends a lot on how much computation/time she has to think about them. But for the final outcome of HCH, it shouldn’t matter if we halve the computation/time the first node has (at least down to a certain level of computation/time), since the next lower node can just do the thinking that the first node would have done with more time. I guess this assumes that the first node would benefit from more time by making more quantitative progress as opposed to qualitative progress. (I think I tried to capture quality with ‘type of reasoning process’.)
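A toy sketch of that intuition (the names and numbers here are hypothetical, just to illustrate the “the next lower node picks up the slack” point, not anything from the post):

```python
# Toy model of HCH as recursive task decomposition.
# A "task" is just an amount of work units; a node with a given time
# budget either solves the task directly or splits it between two
# fresh copies of itself with the same budget.

def hch_solve(task_size: int, budget: int) -> int:
    """Return the total work completed on a task of `task_size` units."""
    if task_size <= budget:
        # The task fits in this node's thinking time: solve it directly.
        return task_size
    # Otherwise, break it in half and delegate each part to a copy.
    left = hch_solve(task_size // 2, budget)
    right = hch_solve(task_size - task_size // 2, budget)
    return left + right  # recombine the sub-answers

# Halving the root's budget doesn't change the final output (down to
# some minimum budget); it only pushes more of the work one level down.
print(hch_solve(1000, budget=64))   # 1000
print(hch_solve(1000, budget=32))   # 1000
```

In this toy model the output depends only on whether the decomposition step works at all (the analogue of the “type of reasoning process”), not on how much budget any single node has.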
Sorry, this answer is a bit rambly; I can spend some more time on an answer if this doesn’t make sense! (There’s also a good chance it doesn’t make sense because the idea itself doesn’t make sense or I misunderstand stuff, and not just because I explain it poorly.)