Thank you for the excellent and extensive write up :)
I hadn’t encountered your perspective before. I’ll definitely go through all your links to educate myself, and in the meantime put less weight on algorithmic progress being a driving force.
At the end of the day, the best thing to do is to actually try and apply the advances to real-world problems.
I work on open-source stuff that anyone can use, and there are plenty of companies willing to pay six figures a year if we can do some custom development to give them a 1-2% boost in performance. So the market is certainly there and waiting.
Even a minimal increase in accuracy can be worth millions or billions to the right people. In some industries (advertising, trading) you can even go it alone; you don’t need customers.
But there are also plenty of domain-specific competitions that pay out dozens or hundreds of thousands for relatively small improvements. Look past Kaggle at domain-specific challenges (e.g. https://unearthed.solutions/) and you’ll find plenty.
That way you’ll probably get a better understanding of what happens when you take a technique that’s good on paper and try to generalize it. I don’t mean this as a “you will fail”; you might well succeed, but it will probably show you how minimal an improvement “success” actually is and how hard you have to work for it. So I think it’s a win-win.
The problem with companies like OpenAI (and even more so with “AI experts” on LW/Alignment) is that they don’t have a stake by which to measure success or failure. If waxing lyrical and picking the evidence that suits your narrative is your benchmark for how well you are doing, you can make anything from horoscopes to homeopathy sound ground-breaking.
When you measure your ideas about “what works” against the real world, that’s when the story changes. After all, one shouldn’t forget that since OpenAI was created it has gotten its funding by optimizing the “Impress Paul Graham and Elon Musk” strategy, rather than the “Create an algorithm that can do something better than a human, then sell it to humans who want that thing done better” strategy… which is an Incentives 101 kind of problem, and what makes me wary of many of their claims.
Again, I’m not trying to disparage here; I also get my funding via the “Impress Paul Graham” route. I’m just saying that people in AI startups are not the best to listen to on AI progress: none of them are going to say “Actually, it’s kinda stagnating”. Not because they are dishonest, but because the kind of people who work in and get funding for AI startups genuinely believe the opposite… otherwise they’d be doing something else. However, as has been well pointed out by many here, confirmation bias is often much more insidious and credible than outright lies. Even I fall on the side of “exponential improvement” at the end of the day, but all my incentives are biasing me in that direction, so thinking about it rationally, I’m likely wrong.
Cheers