Arguably, being able to apply the theory is more important than the theory itself, especially in domains outside of mathematics.
That can’t be true, because the ability to apply a theory depends on having the theory in the first place. I suppose you can do technology development just by trying random things and seeing what works, but that tends to produce slow or poor results. Theories are a bottleneck on scientific advancement.
I suppose there is some sense in which the immediate first-order effects of someone finding a great application for a theory are more impactful than those of someone figuring out the theory to begin with. But that only holds if we limit ourselves to evaluating first-order effects, and in this case that approximation seems to lead directly to the wrong conclusion.
I think ignoring the effort required to actually put a theory into practice is one of the main things LW gets wrong.
Any specific examples? (I can certainly imagine some people doing so. I’m interested in whether you think they’re really endemic to LW, or whether I’m doing that myself.)
Do you still think that the original example counts? If you agree that scientific fields have compact generators, it seems entirely natural to believe that “exfohazards” – that is, hard-to-figure-out compact ideas which, if leaked, would let people greatly improve capabilities just by “grunt work” – are a thing. (And I don’t really think most of the people worrying about them envision themselves as Great Men, rather than as “normal” researchers who may stumble onto an important insight.)
AI in general is littered with examples of this, but the point I want to make is that the entire deep learning revolution caught LW by surprise. While it did involve algorithmic improvements, it mostly amounted to adding more compute and data, and for several years, even up until now, the theory of deep learning hasn’t caught up with its empirical success. The things LW considered very important, like logic, provability, self-improvement, and strong theoretical foundations in general, turned out not to matter all that much to AI.
Steelmaking is probably another example where theory lagged radically behind the empirical success of the techniques, and more broadly a case where empirical success was found without a theoretical basis for it.
As for the difficulty of applying theories being important, I’d argue evolution is the central example: Darwin’s theory was very much right, but it took quite a lot of time for its logical implications to fully propagate. For bounded agents like us, having the central idea doesn’t automatically let us derive all of the theory’s implications, because logical inference is very, very hard.
I’d potentially agree, but I’d like the concept to be used a lot less, and a lot more carefully, than it is now.
I’m missing the context, but I think you should consider naming specific people or organizations rather than saying “LW”.
I’m specifically focused on Nate Soares and Eliezer Yudkowsky, as well as MIRI as an organization, but I do think the general point applies more broadly, especially before 2012–2015.
It’s somewhat notable that AlexNet wasn’t published until 2012.
To be clear, I think people savvy enough about AI should have predicted that ML was a pretty plausible path and that “lots of compute” was also plausible. (But it’s unclear whether they should have put a lot of probability on this with the information available in 2010.)
I’m more pointing out that they seemed to tacitly assume that deep learning/ML/scaling couldn’t work: in their view, all the real work lay in what we would call better algorithms, and compute wasn’t viewed as a bottleneck at all.