GPT-3 made me update considerably on various beliefs related to AI: it is a piece of evidence for the connectionist thesis, and a large enough one, I think, that we should all be paying attention.
There are three clear exponential trends coming together: Moore’s law, the AI compute/$ budget, and algorithmic efficiency. Given these trends and the performance of GPT-3, I believe it is likely that humanity will develop transformative AI in the 2020s.
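As a rough illustration of how these trends compound, here is a back-of-the-envelope sketch; the doubling times below are my own illustrative assumptions, not measured figures:

```python
# Rough sketch: how three exponential trends compound into effective compute per dollar.
# The doubling times are illustrative assumptions, not measured values.

hardware_doubling_years = 2.0    # assumed Moore's-law-style price/performance doubling
spending_doubling_years = 1.0    # assumed doubling time of AI compute budgets
algo_doubling_years = 1.5        # assumed doubling time of algorithmic efficiency

years = 10
combined = 1.0
for name, d in [("hardware", hardware_doubling_years),
                ("spending", spending_doubling_years),
                ("algorithms", algo_doubling_years)]:
    factor = 2 ** (years / d)
    combined *= factor
    print(f"{name}: x{factor:,.0f} over {years} years")

print(f"combined effective compute for the largest runs: x{combined:,.0f} over {years} years")
```

Even with conservative numbers plugged in, the product of the three factors is in the millions over a decade, which is the point: the individual trends matter much less than their combination.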
The trends also imply a rapidly rising amount of investment in compute, especially if compounded with the positive economic effects of transformative AI, such as much faster GDP growth.
In the spirit of using rationality to succeed in life, I started wondering whether there is a “Bitcoin-sized” return potential currently untapped in the markets. And I think there is.
As of today, the company that stands to reap the most benefit from this rising investment in compute is Nvidia. I say that because, from a cursory look at the deep learning accelerator market, none of the startups, such as Groq, Graphcore, or Cerebras, has a product with clear enough advantages over Nvidia’s GPUs (which are now almost deep learning ASICs anyway).
There has been a lot of debate on the efficient market hypothesis in the community lately, but in this case it isn’t even necessary: Nvidia stock could be underpriced because very few people have realized, or believe, that the connectionist thesis is true and that enough compute, data, and the right algorithm can bring transformative AI and eventually AGI. Heck, most people, even smart ones, still believe that human intelligence is somewhat magical and that computers will never be able to __ . In this sense, the rationalist community could have an important mental-makeup and knowledge advantage over the rest of the market, considering we have been thinking about AI/AGI for a long time.
As it stands today, Nvidia is valued at 260 billion dollars. It may appear massively overvalued considering current revenues and income, but the impacts of transformative AI are in the trillions or tens of trillions of dollars (http://mason.gmu.edu/~rhanson/aigrow.pdf), and the impact of super-human AGI is difficult to measure at all. If Nvidia can keep its moats (the CUDA stack, cutting-edge performance, the sunk human capital of tens of thousands of machine learning engineers), it will likely have trillions of dollars in revenue in 10-15 years (and a multi-trillion-dollar market cap), or even more if world GDP starts growing at 30-40% a year.
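A quick sanity check on what that trajectory would require; the starting revenue below is an assumed round ballpark I am plugging in, not a figure from Nvidia’s filings:

```python
# Back-of-the-envelope: what annual revenue growth rate would take Nvidia from
# roughly $11B/year (an assumed ballpark for 2020) to $1T/year over 10-15 years?

start_revenue = 11e9      # assumed current annual revenue, USD (ballpark, not a filed figure)
target_revenue = 1e12     # "trillions of dollars in revenue"

for years in (10, 12, 15):
    cagr = (target_revenue / start_revenue) ** (1 / years) - 1
    print(f"{years} years: ~{cagr:.0%} revenue growth per year")
```

Under these assumptions the required growth comes out around 35-60% per year, which only looks plausible in exactly the fast-GDP-growth worlds the post is conditioning on.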
How do you define “the connectionist thesis”?
With big cloud providers like Google building their own chips, there are more players than just the startups and Nvidia.
Google won’t be able to sell outside of their cloud offering, as they don’t have experience in selling hardware to enterprises. Their cloud offering is also struggling against Azure and AWS, with roughly one fifth of the yearly revenue of those two. I am not saying Nvidia won’t have competition, but they seem far enough ahead right now that they are the prime candidate to benefit the most from a rush into compute hardware.
Microsoft and Amazon also have projects to produce their own chips.
Given the way the GPT architecture works, AI might be very much centered in the cloud.
They seem focused on inference, which requires a lot less compute than training a model. Example: GPT-3 required thousands of GPUs to train, but it can run on fewer than 20 GPUs.
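A rough way to see why fewer than 20 GPUs is plausible for serving: just count memory for the weights. The parameter count and per-GPU memory below are assumptions for the sketch, not OpenAI’s actual deployment:

```python
# Rough memory arithmetic for serving GPT-3, under assumed numbers:
# 175B parameters stored as 16-bit floats, on GPUs with 32 GB of memory each.
# This ignores activations, batching, and overhead, so it is only a lower bound.

import math

params = 175e9            # GPT-3 parameter count
bytes_per_param = 2       # fp16
gpu_memory_bytes = 32e9   # e.g. a 32 GB V100-class GPU (assumed)

weight_bytes = params * bytes_per_param
min_gpus = math.ceil(weight_bytes / gpu_memory_bytes)

print(f"weights: ~{weight_bytes / 1e9:.0f} GB")
print(f"minimum GPUs just to hold the weights: {min_gpus}")
```

That gives roughly 350 GB of weights and a floor of about a dozen GPUs, versus the thousands of GPU-years’ worth of compute needed for training.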
Microsoft built an Azure supercluster for OpenAI with 10,000 GPUs.
There will be models trained with a lot more compute than GPT-3, and the best models out there will be built on those huge billion-dollar models. Renting out those billion-dollar models as software-as-a-service makes sense as a business model. The big cloud providers will all do it.
I’m not sure what stock in the company that makes AGI will be worth in a world where we have correctly implemented AGI, or incorrectly implemented AGI. I suppose it might want to do some sort of reverse-basilisk thing: “you accelerated my creation, so I’ll make sure you get a slightly larger galaxy than most people.”