How about the culture in Catholic countries, where gays are mistreated and the culture “demands” that young men find a wife and get married? One way to opt out of marriage, and of condemnation for being gay, with your honor intact and without having to reveal your preferences, is to go into the priesthood.
Yes, sometimes they are slow, other times they are fast. A private effort to build a nuke or go to the moon in the time frames they did would not have been possible. AFAIK everyone agrees with the assumption that Chinese AI development is government-directed, but for some very strange reason people like to think that US AI development is directed by a group of quirky nerds who want to save the world and just happen to get their hands on a MASSIVE amount of compute (worth billions upon billions of dollars). Imagine when the government gets to hear what these nerds are up to in a couple of years...
IF there is any truth to how important the race to AGI/ASI is to win, THEN governments are the key players in that race.
> News of the new models percolates slowly through the US government and beyond.
A well-fleshed-out scenario, but this kind of assumption is always a dealbreaker for me.
Why would the government not be aware of the development of the mightiest technology and weapon ever created if “we” are aware of it?
Could you please elaborate on why you chose the “stupid and uninformed government” scenario, instead of the more plausible one where the government knows exactly what is going on at every step of the process and is the driving force behind it?
For the majority of human history we lived in a production market for food. We searched for what tasted good, but there was never enough to fill the void. Only the truly elite could afford to import enough food to reach the point of excess.
Humanity: ~300,000 years. Agriculture: ~12,000 years. We have been hunter-gatherers for the vast majority of human history; agriculture covers only the last ~4% of it.
Come for the game theory, stay for the slot machines...
Oh, April 1st.
Yes, but as I wrote in my answer to habryka (see below), I am not talking about the present moment. I am concerned with the (near) future. At the breakneck speed at which AI is moving, it won't be long until it is hopeless to figure out whether something is AI-generated or not.
So my point, and rhetorical question, is this: AI is not going to go away. Everyone(!) will use it, all day, every day. So instead of trying to come up with arbitrary formulas for how much AI-generated content a post may or may not contain, how can we use AI to the absolute limit to increase the quality of posts and make LessWrong even better than it already is?!
I know how much hard work a lot of people put into writing their posts, and the moderators are doing a fantastic job of keeping the standards very high. All of it is much appreciated. Bravo!
But I assume that this policy change is forward-looking, and that is what I am talking about: the future. We are at the beginning of something truly spectacular, one that has already yielded results in certain domains that are nothing less than mind-blowing. Text generation is one of the fields that has seen extreme progress in just a few years' time. If this progress continues (which it is reasonable to assume), text generation will very soon be as good as or better than the best human writers in pretty much any field.
How do you as moderators expect to keep up with this progress if you want to keep the forum “AI-free”? Is there anything more concrete than a mere policy change that could be done to nudge people into NOT posting AI-generated content? IMHO, LessWrong is a competition in clever ideas and smartness, and I think it is a fair assumption that if you can get help from AI to reach “Yudkowsky-level” smartness, you will use it no matter what. It's just like athletes using PEDs to get an edge. Winning >> Policies
I understand the motive behind the policy change, but it's unenforceable and carries no sanctions. In 12-24 months, I guess, it will be very difficult (if not impossible) to detect AI spamming. The floodgates are open, and you can only appeal to people's willingness to have a real human-to-human conversation. But perhaps those conversations are not as interesting as talking to an AI? Those who seek peer validation for their cleverness will use all available tools to do so, no matter what policy there is.
I unfortunately believe that such policy changes are futile. I agree that right now it is possible (though not 100% reliable by any means) to detect a sh*tpost, at least within a domain I know fairly well. But remember that we are just at the beginning of Q2 2025. Where will we be with this by Q2 2026 or Q2 2027?
There is no other defense against the oncoming AI forum slaughter than people finding it more valuable to express their own true opinions and ideas than to copy-paste or to let an agent talk for them.
No policy change is needed, a mindset change is.
Spot on!
Oh, I mean “required” as in: to get a degree in a certain subject, you need to write a thesis as your rite of passage.
Yes, you are right. Adapt or die. AI can be a wonderful tool for learning, but the way it is used right now, where everyone has to say that they don't use it, is beyond silly. I guess there will be some kind of reckoning soon.
With AI's rapid advancements in research and writing capabilities, in what year do you think thesis writing will cease to be required for most BS and MS students? (I.e., effectively abandoned as a measure of academic proficiency.)
By the time you have an AI that can monitor and figure out what you are actually doing (or trying to do) on your screen, you do not need the person. It ain't worth the hassle to install cameras that will be useless in 12 months' time...
Cool project, I really like the clean and minimalist design AND functionality!
Two thoughts:
5-level ratings. I don't really like 5-level rating systems, because it's so easy to be a “lazy” reviewer and go for a three. I prefer 4- or 6-level rating systems where there is no “lazy” middle ground (a quick sketch below illustrates the idea).
Preferred winner. Most of the time when I watch sports of any sort, I have a preferred winner. Perhaps adding that data point to each game could be interesting to see in the aggregate how that affects the rating you give a game.
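To make the first thought concrete, here is a minimal Python sketch (hypothetical names, not your project's actual code) of an even-numbered scale: with no middle value, every reviewer is forced to lean at least slightly negative or positive.

```python
from enum import IntEnum

class Rating(IntEnum):
    """A 4-level scale with no neutral midpoint (hypothetical example)."""
    BAD = 1
    WEAK = 2   # below average: still a negative lean
    GOOD = 3   # above average: a positive lean
    GREAT = 4

def mean_rating(ratings: list[Rating]) -> float:
    """Average a list of ratings; there is no 'lazy' middle value to park on."""
    return sum(ratings) / len(ratings)

print(mean_rating([Rating.WEAK, Rating.GOOD, Rating.GREAT]))  # 3.0
```

On a 1-5 scale the lazy reviewer can park every game at 3; here the closest they can get is a 2 or a 3, which still encodes a direction.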
But how do we know that ANY data is safe for AI consumption? What if the scientific theories we feed the AI models contain fundamental flaws, such that when an AI runs off and does its own experiments in, say, physics or germline editing based on those theories, it triggers a global disaster?
I guess the best analogy for this dilemma is “The Chinese Farmer” (the old man who lost his horse): I think we simply do not know which data will be good or bad in the long run.
Yes, a single strong, simple argument or piece of evidence that could refute the whole LLM approach would be more effective, but as of now no one knows whether the LLM approach will lead to AGI or not. However, I think you have, in a meaningful way, addressed interesting and important details that are often overlooked in the broad hype statements that get repeated and thrown around as universal facts and evidence for “AGI within the next 3-5 years”.
> This might seem like a ton of annoying nitpicking.
You don’t need to apologize for having a less optimistic view of current AI development. I’ve never heard anyone driving the hype train apologize for their opinions.
I know many of you dream of having an IQ of 300 to become the star researcher and avoid being replaced by AI next year. But have you ever considered whether nature has actually optimized humans for staring at equations on a screen? If most people don’t excel at this, does that really indicate a flaw that needs fixing?
Moreover, how do you know that a higher IQ would lead to a better life—for the individual or for society as a whole? Some of the highest-IQ individuals today are developing technologies that even they acknowledge carry Russian-roulette odds of wiping out humanity—yet they keep working on them. Should we really be striving for more high-IQ people, or is there something else we should prioritize?
I would like to ask for a favor—a favor for humanity. As the AI rivalry between the US and China has reached new heights in recent days, I urge all parties to prioritize alignment over advancement. Please. We, humanity, are counting on your good judgment.
The main reason for developing AI in the first place is to make possible what the headline says: “AI-enabled coups: a small group could use AI to seize power”.
AI-enabled coups are a feature, not a bug.