I am not an expert; I don’t know exactly how LLMs improve as a function of their inputs. But take the amount of energy available (“all output of humanity, maybe add an order of magnitude or two for inventions and engineering in the near future”) and the amount of human text (“everything humanity ever wrote, or even said, add an order of magnitude because people will keep talking”): if we don’t get something game-changing out of that, then it seems we are stuck. Where would the additional order of magnitude of inputs come from?
So far the scaling has worked because we could redirect more and more resources from other parts of the economy towards LLMs. How much time is left until a significant fraction of the world economy is spent training new versions of LLMs, and what happens then? My naive assumption is that the training requirements of LLMs grow exponentially; can the economy soon start, say, doubling every year to match that? If not, then the training needs of LLMs will outrun the economy, and progress slows down. A toy calculation of this is sketched below.
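To make that second point concrete, here is a back-of-the-envelope sketch in Python. Every number in it (the current cost of a frontier training run, the cost growth rate, the GDP growth rate, the threshold for a “significant fraction”) is an assumption for illustration, and the function name is made up; the point is only that a fast exponential overtakes a slow one quickly, not that these particular numbers are right.

```python
# Toy back-of-the-envelope sketch; all numbers are assumptions, not data.
# If training costs grow ~3x per year while world GDP grows ~3% per year,
# how many years until one training run costs a given fraction of GDP?

def years_until_fraction(cost_0=1e9,       # assumed current training cost, USD
                         gdp_0=1e14,       # assumed world GDP, ~100 trillion USD
                         cost_growth=3.0,  # assumed yearly multiplier for training cost
                         gdp_growth=1.03,  # assumed yearly multiplier for world GDP
                         fraction=0.01):   # assumed "significant fraction" threshold
    cost, gdp, years = cost_0, gdp_0, 0
    while cost < fraction * gdp:
        cost *= cost_growth
        gdp *= gdp_growth
        years += 1
    return years

print(years_until_fraction())  # ~7 years under these made-up numbers
```

Under these made-up growth rates the crossover comes within a decade, and changing the starting cost or the threshold by an order of magnitude only shifts it by a couple of years, which is the sense in which the economy would have to start growing at a comparable exponential rate to keep up.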