I’m curious how you think your views here cash out differently from (your model of) most commenters here, especially as pertains to alignment work (timelines, strategy, prioritization, whatever else), but also more generally. If I’m interpreting you correctly, your pessimism on the usefulness-in-practice of quantitative progress probably cashes out in some sort of bet against scaling (i.e. maybe you think the “blessings of scale” will dry up faster than others think)?
Oh, I think superintelligences will be much less powerful than others seem to think.
Less human vs ant, and more “human vs very smart human that can think faster, has much larger working memory, longer attention spans, better recall and parallelisation ability”.