Is there some simple amount of working memory that’s required to do complex recursion? Like, 6 working memory slots makes things way harder than 7?
My theory is that 4 is enough, and the extra capacity many people have is just useful overkill: it’s there because it doesn’t hurt anything, and extra memory makes things easier and faster.
So, first, note that for computing pairwise operations you only need a stack 3 deep (so long as additional future inputs can come from somewhere else). If you’ve ever worked with a stack-based calculator, like one that supports reverse Polish notation (RPN), or a stack-based programming language, you probably found this out empirically, but you might also have known it from the fact that early stack-based calculators had only 3 registers, and that was good enough.
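To make that concrete, here’s a minimal sketch (mine, not from any particular calculator) of an RPN evaluator that tracks the peak stack depth it uses: a pairwise chain like (3 + 4) * 5 fits in 2 slots, while carrying one extra term forward, as in 3 + (4 * 5), needs 3.

```python
# Illustrative RPN evaluator that also reports peak stack depth.
def eval_rpn(tokens):
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    stack, peak = [], 0
    for tok in tokens:
        if tok in ops:
            b, a = stack.pop(), stack.pop()   # pairwise: consume two, push one
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
        peak = max(peak, len(stack))
    return stack[0], peak

print(eval_rpn("3 4 + 5 *".split()))   # (35.0, 2): (3 + 4) * 5, never more than 2 slots
print(eval_rpn("3 4 5 * +".split()))   # (23.0, 3): 3 + (4 * 5), one carried term needs a 3rd
```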
Well, almost. With 3 registers you can only carry forward a single term between operations and only perform scalar operations. With 4 registers in the stack you can do pairwise operations on two variables, or carry forward an additional term. More importantly, in practice, having only 3 registers is annoying: you can theoretically do whatever you want, but it requires careful ordering of operations to avoid a stack overflow. With 4 you rarely have to think ahead, which is nice: you just perform the operations, and the extra register lets you get into and out of near overflows without actually running out of space and having to start over.
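Here’s a sketch of that ordering problem (a made-up example, but the arithmetic is real): the same expression, (1 + 2) * ((3 + 4) + (5 * 6)), overflows a 3-register stack if you evaluate it naively left to right, but fits if you do the deepest subexpression first.

```python
# Peak stack depth needed by an RPN token stream, assuming each operator
# pops two operands and pushes one result.
def peak_depth(tokens):
    depth = peak = 0
    for tok in tokens:
        depth += -1 if tok in "+-*/" else 1
        peak = max(peak, depth)
    return peak

careless = "1 2 + 3 4 + 5 6 * + *".split()   # left to right: needs 4 slots
careful  = "3 4 + 5 6 * + 1 2 + *".split()   # deepest part first: fits in 3
print(peak_depth(careless), peak_depth(careful))   # 4 3
```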
More registers are nice, but registers cost money, and for a long time stack-based calculators settled on 4 registers as the best balance of cost, functionality, and flexibility. Again, 3 was enough, but annoying enough to work with that everyone was happy to pay for 4, while few were willing to pay for more.
Now, does this mean 4 is enough for complex recursion? I mean, sure, so long as you are tail call optimizing. More just makes life easier and means you don’t have to recurse. Why wouldn’t you want to do that, though?
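To illustrate what I mean by tail call optimizing (a hypothetical sketch, not anything specific to minds or calculators): a naive recursive factorial needs a pending frame for every multiply it hasn’t done yet, while the tail-call form keeps all of its state in two “registers”, so a small, fixed amount of working memory suffices no matter how deep the recursion goes.

```python
import math

def fact_naive(n):
    # Work remains after each recursive call (the outer multiply),
    # so every pending call needs its own slot: depth grows with n.
    return 1 if n == 0 else n * fact_naive(n - 1)

def fact_tail(n, acc=1):
    # Tail-call form, written as a loop since CPython doesn't eliminate
    # tail calls: all state lives in two "registers", n and acc.
    while n > 0:
        n, acc = n - 1, acc * n
    return acc

print(fact_tail(2000) == math.factorial(2000))   # True
# fact_naive(2000) raises RecursionError under the default limit of 1000.
```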
Well, doing all this assumes you can express the complex mental operations you want as pairwise operations. Maybe you can do that, but maybe you don’t know how yet, so you end up needing more working memory to cope with operations you can’t perform “natively” yet.
And why think of the mind as performing mental operations over working memory at all, or think that you can develop access to more powerful operations that let you do more with the same memory? That’s a long topic, but I’d recommend this paper as a starting point that melds well with the viewpoint I’ve expressed here.