Stagnant consumer CPU price/performance, along with GPU price/performance standing still from 2016 to 2019, is probably what contributed massively to the public perception of Moore’s law’s death.
In the 90s and early 2000s, after about 8 years you could get a CPU roughly 50 times faster for nearly the same money.
But from 2009 until 2018 we got maybe a 50-80% performance boost for the same price, give or take. Now, with 3rd-gen Ryzen, everything is changing, so after 10 disappointing years things finally look interesting ahead.
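A quick back-of-the-envelope check of what those two eras imply as compound annual growth, using only the rough figures above (the text’s numbers, not benchmark data):

```python
# Back-of-the-envelope: implied annual CPU price/performance growth
# for the two eras described above. Figures come from the text, not benchmarks.

def annual_growth(total_factor: float, years: float) -> float:
    """Compound annual growth rate implied by a total speedup over N years."""
    return total_factor ** (1 / years) - 1

# ~50x faster for the same money over ~8 years (90s / early 2000s)
print(f"90s/early-2000s era: {annual_growth(50, 8):.0%} per year")  # ~63%/yr

# ~1.5-1.8x faster for the same price over ~9 years (2009-2018)
print(f"2009-2018 era: {annual_growth(1.5, 9):.1%} to {annual_growth(1.8, 9):.1%} per year")  # ~4.6-6.7%/yr
```

Roughly 63% per year versus 5-7% per year, which is the collapse people actually felt.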
In 2015 the consensus AGI estimate among AI researchers was 2040-2060; the most optimistic guess was 2025, held by something like 1% of respondents.
We are very bad at predicting AGI based on these sorts of tests and on being impressed that AI did something well. People in the 1960s thought AGI was right around the corner based on how fast computers could do calculations; tech people of that era were having similar discussions all the time.
It’s much more useful to go by the numbers: directly comparing the brain’s computational power to our hardware isn’t precise, but it’ll probably land in the right ballpark.
Kurzweil believed we need to reach 10^16 flops for human-level AI; Nick Bostrom came up with a bunch of numbers for simulating the brain, the most optimistic of which was 10^18 (I think he called it a “functional simulation of neurons”). But right now the limitation seems to be mostly memory bandwidth and capacity: a person with a 3090 can run significantly better models than a person with a 4070 Ti, despite the latter being a bit faster computationally, because the 4070 Ti has less VRAM.
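A minimal sketch of why capacity, not flops, decides what you can run, assuming the usual rule of thumb of ~2 bytes per parameter at fp16 and ~1 byte at int8, and ignoring activation and overhead memory:

```python
# Rough sketch: which model sizes fit in VRAM on each card.
# Rule-of-thumb memory costs assumed: ~2 bytes/param (fp16), ~1 byte/param (int8).
# Activations, caches, and framework overhead are ignored for simplicity.

GPUS_GB = {"RTX 3090": 24, "RTX 4070 Ti": 12}  # VRAM capacities in GB
MODELS_B = [7, 13, 30]                         # model sizes in billions of params

for gpu, vram in GPUS_GB.items():
    for params in MODELS_B:
        if params * 2 <= vram:
            verdict = "fits at fp16"
        elif params * 1 <= vram:
            verdict = "fits at int8"
        else:
            verdict = "does not fit"
        print(f"{gpu:12s} {params:2d}B model: {verdict}")
```

The 3090’s extra 12 GB lets it hold models the faster card simply cannot load, no matter how many flops that card has to spare.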
So the flops metric doesn’t seem all that relevant.
The 4090’s tensor-core performance reaches 1.3×10^15 flops, while its regular floating-point throughput is about 20 times lower. Yet using tensor cores only speeds up Stable Diffusion by about 25%, again suggesting that we’re limited not by computation power but by memory speed and capacity.
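One way to sanity-check that is Amdahl’s law: if tensor cores make the accelerated portion of the work ~20× faster but the whole pipeline only gains ~25%, you can solve for how much of the runtime was compute-bound to begin with. A sketch using only the figures above:

```python
# Amdahl's law: overall = 1 / ((1 - p) + p / s), where p is the fraction
# of runtime that gets accelerated and s is its local speedup.
# Solving for p given the observed overall speedup:

s = 20.0        # tensor cores vs plain floating point (figure from the text)
overall = 1.25  # observed Stable Diffusion speedup (figure from the text)

p = (1 - 1 / overall) / (1 - 1 / s)
print(f"compute-bound fraction of runtime: {p:.0%}")  # ~21%
```

Under those assumed numbers, roughly 80% of the runtime is spent waiting on memory rather than doing arithmetic.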
So that means an ML model running on a 4090 can’t effectively be doing better than about 10^14 flops, roughly 1/100 of Kurzweil’s human-brain estimate, and probably even less than that.
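The 10^14 figure follows from the same numbers; a rough derivation under the assumptions above:

```python
# Effective sustained throughput implied by the figures above (speculative).

peak_tensor = 1.3e15         # 4090 tensor-core peak, from the text
plain_fp = peak_tensor / 20  # ~6.5e13, plain floating-point throughput
effective = plain_fp * 1.25  # end-to-end rate with the observed 25% gain

kurzweil = 1e16              # Kurzweil's human-level estimate
print(f"effective rate: {effective:.1e} flops")      # ~8.1e13
print(f"vs Kurzweil: 1/{kurzweil / effective:.0f}")  # ~1/123
```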
I think we’re doing the same thing people did in the early 2000s after several very successful self-driving tests. Those were so impressive that self-driving felt practically solved at the time. That was almost 20 years ago, and while it’s much better today, functionally it just went from driving well 99% of the time to driving well 99.9% of the time, which still isn’t enough.
And I don’t believe self-driving requires “human-level AI”. Driving shouldn’t take more than the brainpower of a rat, something like 1/10 or 1/100 of a human’s. And we’re still not there, which means we’re 2-3 or more orders of magnitude behind human level.
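Putting the speculative bounds together (all figures come from the estimates above; none of this is a measurement):

```python
# If effective compute can't even reach the assumed rat-level threshold,
# the implied gap to the human-level estimates spans several orders of magnitude.
import math

effective = 1e14  # upper bound on a 4090, derived above (likely optimistic)
estimates = {"Kurzweil (10^16)": 1e16, "Bostrom functional sim (10^18)": 1e18}

for name, flops in estimates.items():
    gap = math.log10(flops / effective)
    print(f"{name}: at least {gap:.0f} orders of magnitude behind")
```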
Maybe I’m wrong and an AI using an effective 10^14 flops could beat humans in all areas, which would suggest that the human brain is very poorly optimized.
Maybe the only bottleneck left is AI algorithms themselves and we already have enough hardware. But I don’t think so.
I don’t think we’re that close to AGI. I think the 2025-2040 timeline still stands.