This chapter seems to present some examples of how algorithmic recalcitrance could be very low, but I don't think it does so in the relevant sense. Two of the three arguments in that part of the chapter (p69-70) are about how low recalcitrance might be mistaken for high recalcitrance, rather than about how low recalcitrance would come about. (One says that a system whose performance is the maximum of two parts might shift its growth from that of one part to that of the other; the other says we might be biased against noticing growth in dumb-seeming entities.) The third argument (or first, chronologically) is that a key insight might be discovered only after many other things are in place. This is conceivable, but it seems rarely to happen at a large scale, and it has no particular connection to intelligence: you could make exactly the same argument about any project (e.g. the earlier intelligence augmentation projects).
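To make the first of those arguments concrete, here is a minimal sketch in Python with made-up growth curves (the rates and the crossover point are invented for illustration, not taken from the book): a system whose overall performance is the maximum of two components can look nearly stagnant while its weaker component grows quickly underneath, and then abruptly switch to that component's fast growth curve.

# Hypothetical numbers: overall performance = max of two components.
slow = lambda t: 100 + 1 * t        # mature component: high level, slow growth
fast = lambda t: 1 * (1.5 ** t)     # immature component: low level, fast growth

for t in range(0, 21, 2):
    performance = max(slow(t), fast(t))
    print(t, round(performance, 1))

# Until roughly t = 12 the printed performance crawls along the slow curve,
# so recalcitrance looks high; once the fast component overtakes, growth
# switches to its rapid curve, even though nothing about either component
# changed at the crossover.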
Yes, I agree. On page 68 he points out that the types of problems pre-EM are very different from those post-EM, but it could be that availability bias makes the former seem larger than the latter: we are more familiar with the pre-EM problems, and have broken them down into many sub-problems.
Paradoxically, even though this ‘taskification’ is progress towards EMs, it makes them appear further away, because it highlights the conjunctive nature of the task. Our estimates probably over-state the difficulty of easy tasks and under-state the difficulty of hard tasks, which could mean that breaking down a problem increases our estimate of its difficulty: it is now ten easy tasks’ worth of effort rather than one hard task’s worth.
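A toy version of that arithmetic, with entirely made-up numbers and bias factors, shows how decomposition alone can inflate the estimate even though the true amount of work is unchanged:

# Hypothetical figures: hard tasks under-stated, easy tasks over-stated.
true_whole_problem = 10.0                        # true effort of the undivided problem
estimate_undivided = 0.6 * true_whole_problem    # under-stated as one hard task: 6.0

true_subtasks = [1.0] * 10                       # the same work, split into 10 easy pieces
estimate_divided = sum(1.5 * t for t in true_subtasks)  # each over-stated: 15.0

print(estimate_undivided, estimate_divided)      # 6.0 vs 15.0 for identical work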