This might seem like a ton of annoying nitpicking.
You don’t need to apologize for having a less optimistic view of current AI development. I’ve never heard anyone driving the hype train apologize for their opinions.
I know many of you dream of having an IQ of 300 to become the star researcher and avoid being replaced by AI next year. But have you ever considered whether nature has actually optimized humans for staring at equations on a screen? If most people don’t excel at this, does that really indicate a flaw that needs fixing?
Moreover, how do you know that a higher IQ would lead to a better life—for the individual or for society as a whole? Some of the highest-IQ individuals today are developing technologies that even they acknowledge carry Russian-roulette odds of wiping out humanity—yet they keep working on them. Should we really be striving for more high-IQ people, or is there something else we should prioritize?
I would like to ask for a favor—a favor for humanity. As the AI rivalry between the US and China has reached new heights in recent days, I urge all parties to prioritize alignment over advancement. Please. We, humanity, are counting on your good judgment.
Perhaps.
https://www.politico.eu/article/us-elon-musk-troll-donald-trump-500b-ai-plan/
But Musk responded skeptically to an OpenAI press release that announced funding for the initiative, including an initial investment of $100 billion.
“They don’t actually have the money,” Musk jabbed.
In a follow-up post on his platform X, the social media mogul added, “SoftBank has well under $10B secured. I have that on good authority.”
I suppose the $500 billion AI infrastructure program just announced can lay to rest any speculation that AGI/ASI is not a government-directed project.
Communicate the plan with the general public: Morally speaking, I think companies should share their plans in quite a lot of detail with the public.
Yes, I think so too, but it will never happen. AGI/ASI is too valuable to be discussed publicly. I have never been given the opportunity to have a say in any other big corporate decision regarding the development of weapons, and I am sure I will not have it this time either.
“They” will build the things “they” believe are necessary to protect “the American or Chinese way of life”, and “they” will not ask you for permission or your opinion.
Money will be able to buy results in the real world better than ever.
People’s labour gives them less leverage than ever before.
Achieving outlier success through your labour in most or all areas is now impossible.
There was no transformative leveling of capital, either within or between countries.
If this is the “default” outcome there WILL be blood. The rational thing to do in that case is to get a proper prepper bunker and see what’s left when the dust has settled.
Excellent points. My experience is that people in general do not like to think that the things they are doing could be done in other ways, or not at all, because that means they have to rethink their own role and purpose.
When you predict (either personally or publicly) future dates of AI milestones do you:
Assume some version of Moore’s “law”, i.e. exponential growth.
Or
Assume some near-term computing gains (e.g. quantum computing), i.e. doubly exponential growth.
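To make the difference between the two assumptions concrete, here is a minimal sketch. All numbers in it (the 2-year doubling time, the 0.9 shrink factor, the 1000x milestone) are purely illustrative assumptions of mine, not estimates from anyone’s forecast:

```python
# Sketch: how long until a 1000x compute milestone under two growth
# assumptions. All numbers (2-year doubling, 0.9 shrink factor, 1000x
# target) are illustrative assumptions, not empirical estimates.

def exponential(t, doubling_years=2.0):
    """Compute multiplier after t years with a fixed doubling time (Moore-style)."""
    return 2 ** (t / doubling_years)

def doubly_exponential(t, first_doubling_years=2.0, shrink=0.9):
    """Compute multiplier when each successive doubling arrives sooner."""
    total, elapsed, doubling = 1.0, 0.0, first_doubling_years
    while elapsed + doubling <= t:
        elapsed += doubling
        total *= 2
        doubling *= shrink  # the doubling time itself shrinks each cycle
    return total

def years_to_target(growth_fn, target=1000.0, step=0.01):
    """Scan forward in small time steps until the growth curve hits the target."""
    t = 0.0
    while growth_fn(t) < target:
        t += step
    return round(t, 1)

print(years_to_target(exponential))         # ~19.9 years at a fixed 2-year doubling
print(years_to_target(doubly_exponential))  # ~13.0 years: same start, shrinking doublings
```

The point is just that which assumption you bake in moves the predicted milestone date by years, even with identical starting conditions.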
I’m also excited because, while I think I have most of the individual subskills, I haven’t personally been nearly as good at listening to wisdom as I’d like, and feel traction on trying harder.
Great post! I personally have a tendency to disregard wisdom because it feels “too easy”: if I am given some advice and it works, I assume it was just luck or correlation. Then I go and try “the other way” (my way...), get a punch in the face from the universe, and think, “Ohhh, so that’s why I should have stuck to the advice.”
Now that I think about it, it might also be intellectual arrogance: I think I am smarter than the advice, or than the person giving it.
But lately I have started to think a lot about why we assume that successful outcomes require overreaching and burnout. Why do we have to fight so hard for everything, and feel somehow guilty if something comes to us without much effort? So maybe my failure to heed words of wisdom is rooted in a need to achieve (overdo, modify, add, reduce, optimize, etc.) rather than to just be.
My experience says otherwise, but I might just have happened to stumble on some militant foodies.
“Conservative evangelical Christians spend an unbelievable amount of time focused on God: Church services and small groups, teaching their kids, praying alone and with friends. When I was a Christian I prayed 10s of times a day, asking God for wisdom or to help the person I was talking to. If a zealous Christian of any stripe is comfortable around me they talk about God all the time.”
Isn’t this true for ALL true believers regardless of conviction? I could easily replace ‘Conservative evangelical Christians and God’ with ‘Foodies and food’, ‘Teenage girls and influencers’, ‘Rationalists and logic’, ‘Gym bros and grams of protein per kg/lb of body mass’. There seems to be something inherent in the will to preach to others out of good will, that we want to share something that we believe would benefit others. The road to hell isn’t paved with good intentions for nothing...
Being tense might also be a direct threat to your health in certain situations. I saw an interview on TV with an expert on hostage situations some 7-10 years ago and he claimed that the number one priority for a captive should be to somehow find a way to relax their body. He said if they are not able to do that, the chance is very high that they will develop PTSD.
For national-security reasons, it would be strange to assume that there is no coordination among the US firms already. And… are we really sure that China is behind in the AGI race?
And one wonders how much of the bottleneck is TSMC (the Western “AI bloc” has really put a lot of its eggs in one basket...) and how much is customer preference for Nvidia chips. The chip wars of 2025 will be very interesting to follow. Thanks for a good chip summary!
What about AMD? I saw on the latest TOP500 supercomputer list that systems using both AMD CPUs and GPUs now hold places 1, 2, 5, 8, and 10 among the top 10. Yes, the workloads on these machines differ somewhat from a pure GPU training cluster, but still.
https://top500.org/lists/top500/2024/11/
Yes, the soon-to-be-here “human-level” AGI people talk about is for all intents and purposes ASI. Show me one person who is at the highest expert level in thousands of subjects, has the content of all human knowledge memorized, and can draw the most complex inferences on that knowledge across multiple domains in seconds.
It’s interesting that you mention hallucination as a bug/artefact. I think hallucination is what we humans do all day, every day, when we are trying to solve a new problem: we think up a solution we really believe is correct, try it, and more often than not realize we had it all wrong, so we try again and again and again. I think AIs will never be free of this; it will simply be part of their creative process, just as it is in ours. It took Albert Einstein a decade or so to figure out relativity; I wonder how many times he “hallucinated” a solution that turned out to be wrong during those years. The important part is that he could self-correct, dive deeper and deeper into the problem, and finally solve it. I firmly believe that AI will very soon be very good at self-correcting, and if you then give your “remote worker” a day or ten to think through a really hard problem, not even the sky will be the limit...
Thanks for writing this post!
I don’t know what the correct definition of AGI is, but to me it seems that AGI is ASI. Imagine an AI that is at super-expert level in most (>95%) subjects, has access to pretty much all human knowledge, can digest millions of tokens at a time, and can draw inferences and conclusions from all that in seconds. “We” normally get a handful of real geniuses per generation. So now imagine a simulated person that is like Stephen Hawking in physics, Terence Tao in math, Rembrandt in painting, etc., all at the same time. Now imagine that you have “just” 40,000–100,000 of these simulated persons, able to communicate at the speed of light and to draw on all the knowledge in the world within milliseconds. I think that will be a very transformative experience for our society from the get-go.
Yes, a single strong, simple argument or piece of evidence that could refute the whole LLM approach would be more effective, but as of now no one knows whether the LLM approach will lead to AGI or not. However, I think you’ve addressed, in a meaningful way, interesting and important details that are often overlooked in broad hype statements that get repeated and thrown around like universal facts and evidence for “AGI within the next 3–5 years”.