With AI’s rapid advancements in research and writing capabilities, in what year do you think thesis writing will cease to be required for most BS and MS students? (I.e., effectively being abandoned as a measure of academic proficiency)
Anders Lindström
By the time you have an AI that can monitor and figure out what you are actually doing (or trying to do) on your screen, you do not need the person. Ain't worth the hassle to install cameras that will be useless in 12 months' time...
Cool project, I really like the clean and minimalist design AND functionality!
Two thoughts:
5-level ratings. I don't really like 5-level rating systems, because it's so easy to be a "lazy" reviewer and go for a three. I prefer 4- or 6-level rating systems where there is no "lazy" middle ground.
Preferred winner. Most of the time when I watch sports of any sort, I have a preferred winner. Perhaps adding that data point to each game could be interesting to see in the aggregate how that affects the rating you give a game.
But how do we know that ANY data is safe for AI consumption? What if the scientific theories that we feed the AI models contain fundamental flaws, such that when an AI runs off and does its own experiments in, say, physics or germline editing based on those theories, it triggers a global disaster?
I guess the best analogy for this dilemma is "The Chinese farmer" (the old man who lost his horse). I think we simply do not know which data will be good or bad in the long run.
Yes, a single strong, simple argument or piece of evidence that could refute the whole LLM approach would be more effective, but as of now no one has the answer to whether the LLM approach will lead to AGI or not. However, I think you've addressed, in a meaningful way, interesting and important details that are often overlooked in the broad hype statements that are repeated and thrown around as universal facts and evidence for "AGI within the next 3-5 years".
This might seem like a ton of annoying nitpicking.
You don’t need to apologize for having a less optimistic view of current AI development. I’ve never heard anyone driving the hype train apologize for their opinions.
I know many of you dream of having an IQ of 300 to become the star researcher and avoid being replaced by AI next year. But have you ever considered whether nature has actually optimized humans for staring at equations on a screen? If most people don’t excel at this, does that really indicate a flaw that needs fixing?
Moreover, how do you know that a higher IQ would lead to a better life—for the individual or for society as a whole? Some of the highest-IQ individuals today are developing technologies that even they acknowledge carry Russian-roulette odds of wiping out humanity—yet they keep working on them. Should we really be striving for more high-IQ people, or is there something else we should prioritize?
I would like to ask for a favor—a favor for humanity. As the AI rivalry between the US and China has reached new heights in recent days, I urge all parties to prioritize alignment over advancement. Please. We, humanity, are counting on your good judgment.
Perhaps.
https://www.politico.eu/article/us-elon-musk-troll-donald-trump-500b-ai-plan/

But Musk responded skeptically to an OpenAI press release that announced funding for the initiative, including an initial investment of $100 billion.
“They don’t actually have the money,” Musk jabbed.
In a follow-up post on his platform X, the social media mogul added, “SoftBank has well under $10B secured. I have that on good authority.”
I suppose the $500 billion AI infrastructure program just announced can lay to rest any speculation that AGI/ASI is NOT a government-directed project.
Communicate the plan with the general public: Morally speaking, I think companies should share their plans in quite a lot of detail with the public.
Yes, I think so too, but it will never happen. AGI/ASI is too valuable to be discussed publicly. I have never been given the opportunity to have a say in any other big corporate decision regarding the development of weapons, and I surely will not have it this time either.
“They” will build the things “they” believe are necessary to protect “the American or Chinese way of life”, and “they” will not ask you for permission or your opinion.
Money will be able to buy results in the real world better than ever.
People’s labour gives them less leverage than ever before.
Achieving outlier success through your labour in most or all areas is now impossible.
There was no transformative leveling of capital, either within or between countries.
If this is the "default" outcome there WILL be blood. The rational thing to do in this case is to get a proper prepper bunker and see what's left when the dust has settled.
Excellent points. My experience is that people in general do not like to think that the things they are doing could be done in other ways, or not at all, because that means they have to rethink their own role and purpose.
When you predict (either personally or publicly) future dates of AI milestones do you:
Assume some version of Moore’s “law” e.g. exponential growth.
Or
Assume some near-term computing gains, e.g. quantum computing, double exponential growth.
I’m also excited because, while I think I have most of the individual subskills, I haven’t personally been nearly as good at listening to wisdom as I’d like, and feel traction on trying harder.
Great post! I personally have a tendency to disregard wisdom because it feels "too easy". If I am given some advice and it works, I think it was just luck or correlation; then I have to go and try "the other way (my way...)", get a punch in the face from the universe, and then be like "ohhh, so THAT'S why I should have stuck to the advice".
Now that I think about it, it might also be intellectual arrogance: I think I am smarter than the advice, or than the person who gives it.
But I have lately started to think a lot about why we believe that successful outcomes require overreaching and burnout. Why do we have to fight so hard for everything, and feel kind of guilty if something comes to us without much effort? So maybe my failure to heed words of wisdom is rooted in a need to achieve (overdo, modify, add, reduce, optimize, etc.) rather than to just be.
My experience says otherwise, but I might just have happened to stumble on some militant foodies.
“Conservative evangelical Christians spend an unbelievable amount of time focused on God: Church services and small groups, teaching their kids, praying alone and with friends. When I was a Christian I prayed 10s of times a day, asking God for wisdom or to help the person I was talking to. If a zealous Christian of any stripe is comfortable around me they talk about God all the time.”
Isn't this true for ALL true believers, regardless of conviction? I could easily replace 'Conservative evangelical Christians and God' with 'Foodies and food', 'Teenage girls and influencers', 'Rationalists and logic', or 'Gym bros and grams of protein per kg/lb of body mass'. There seems to be something inherent in the will to preach to others out of goodwill: we want to share something that we believe would benefit others. The road to hell isn't paved with good intentions for nothing...
Being tense might also be a direct threat to your health in certain situations. I saw an interview on TV with an expert on hostage situations some 7-10 years ago and he claimed that the number one priority for a captive should be to somehow find a way to relax their body. He said if they are not able to do that, the chance is very high that they will develop PTSD.
Given national security concerns, it would be strange to assume that there is no coordination among the US firms already. And... are we really sure that China is behind in the AGI race?
Oh, I mean “required” as in to get a degree in a certain subject you need to write a thesis as your rite of passage.
Yes, you are right. Adapt or die. AI can be a wonderful tool for learning, but the way it is used right now, where everyone has to claim that they don't use it, is beyond silly. I guess there will be some kind of reckoning soon.