My view: Although I think it is a neat thought experiment, my intuition is that separating compute from algorithms is a false dichotomy. When a domain needs multiple interacting requirements to evolve optimally, narrowing its path dependence down to an "either/or" choice usually produces deadlocks that can look paradoxical, like the one above (not all deadlocks have to stay paradoxical; pre-emption or non-blocking synchronization is a way out, as in the sketch below).
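To unpack the concurrency analogy only (this says nothing about compute vs. algorithms itself): below is a minimal, illustrative Python sketch of the classic two-lock deadlock and of a timeout-plus-back-off retry that plays the role of "pre-emption". All names (`lock_a`, `lock_b`, `worker`) are hypothetical; the point is just that a thread which gives up a held resource instead of waiting forever breaks the hold-and-wait condition.

```python
import threading
import time
import random

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(first, second, name):
    """Acquire two locks; if the second cannot be taken within a deadline,
    release the first and retry, so no thread holds one lock while blocking
    on the other indefinitely (breaks the hold-and-wait condition)."""
    while True:
        with first:
            # Try-lock with a timeout instead of blocking unconditionally.
            if second.acquire(timeout=0.1):
                try:
                    print(f"{name}: did work holding both locks")
                    return
                finally:
                    second.release()
        # Could not get the second lock: the `with` block has released the
        # first one, so back off briefly and try again.
        time.sleep(random.uniform(0, 0.05))

# The two threads take the locks in opposite order -- the textbook deadlock
# setup -- yet the timeout/back-off lets both finish.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
```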
My answer: Not much difference. A twenty-year timescale doesn't seem very significant to me; there has been no fundamental revolution in the semiconductor/compute-manufacturing industry that has benefitted us in ways other than cost, and no revolutionary algorithm has been found that couldn't be run on old hardware scaled to today's standards. (But in complex systems, and ML is one, interactions matter more than anything else, so I might be way off here.)