Why is it obvious
Arjun Panickssery
At the start of the post I describe an argument I often hear:
But many people are under the misconception that the resulting “rat race”—the highly competitive and strenuous admissions ordeal—is the inevitable result of the limited class sizes among top schools and the strong talent in the applicant pools, and that it isn’t merely because of the reasons listed in (2). Some even go so far as to suggest that a better system would be to run a lottery for any applicant who meets a minimum “qualification” standard—under the assumption that there would be many such qualified students.
This is the argument that I’m responding to and refuting.
You’re phrasing this as though it’s rebutting some remark I made; if so, I’m not sure what remark that is. I know that admissions offices are admitting students according to an intentional system.
Didn’t watch the video, but is there a short version of this argument? France is at the 90th percentile of population sizes and also has the 4th-most nukes.
Shorter sentences are better. Why? Because they communicate clearly. I used to speak in long sentences. And they were abstract. Thus I was hard to understand. Now I use short sentences. Clear sentences.
It’s been net-positive. It even makes my thinking clearer. Why? Because you need to deeply understand something to explain it simply.
More object-level content: Why Have Sentence Lengths Decreased? — LessWrong
That part encourages captains to avoid shirking in general (rather than to use aggressive tactics in particular) because it increases the costs of job loss (due to high compensation) and because there are captains in reserve who can replace them quickly.
The prior should be towards liberty
Right, I was the first strong-disagree and I disagreed because of the implied premise that homeschooling was deviant in a way that warranted a high degree of scrutiny, rather than being the natural right/default that should be restricted only in extreme cases. I figure that’s why others disagreed as well.
Elaborated in this thread: https://www.lesswrong.com/posts/MJFeDGCRLwgBxkmfs/childhood-and-education-9-school-is-hell?commentId=W8FoHDkB3xAkfcwyw
This can hold true even if we grant that mandatory public school is itself abusive to children
My point doesn’t have anything to do with the relative merits of public vs homeschool. My point is that your listed interventions aren’t reasonable because they involve too much government intrusion into a parent’s freedom to educate his children how he wants, for reasons based on dubious and marginal “safeguarding” grounds. My example of a different education regime was to consider a different status quo that makes the intrusiveness more clear.
I feel like all my position needs to argue for is that some children have parents/caretakers where it would be worse if they had 100% control and no accountability than if the children also spend some time outside the household in public school [emphasis mine]
This might be the crux: I’m saying that this position isn’t based on any reasonable principle of government intrusion on people’s lives. The government shouldn’t intrude on basic parenting rights with a ton of surveillance just to see whether it can make what it thinks is a marginal improvement.
More concretely, do you think parents should have to pass a criminal background check (assuming this is what you meant by “background check”) in order to homeschool, even if they retain custody of their children otherwise?
Re-reading your previous reply, I noticed this:
So, what you could do to get some of the monitoring back: have homeschooling with (e.g.) yearly check-ins with the affected children from a social worker. I don’t know the details, but my guess is that some states have this and others don’t.
No state uses this policy, and it doesn’t make sense that parents should have to submit to yearly inspections of their parenting practices. It only makes sense from the point of view where homeschooling is highly deviant and basically impermissible without special dispensation, and where the government has the authority to decide in very specific terms what children’s educations consist of.
I strong-disagreed since I don’t think any of your listed criticisms are reasonable. The implied premise is that homeschooling is deviant in a way that justifies a lot of government scrutiny, when really parents have a natural right to educate their children the way they want (with government intervention being reasonable in extreme cases that pass a high bar of scrutiny).
In particular, I think that outside of an existing norm where most students go to public school, the things you listed would be obviously unjust. Do you think that parents who fail a criminal background check shouldn’t be allowed to educate their own children (given that they still have custody of their children), or that a CPS investigation should make some kind of intermediate judgment of “not abusive but not worthy to educate the children without state intervention”?
If we were under a different education regime like universal school vouchers, or just totally unfunded private education, do you think it would be reasonable to have parents’ freedom to educate their children restricted for reasons like the ones you gave, or to introduce mandatory public schooling just for this kind of monitoring?
I think this post is very funny (disclaimer: I wrote this post).
A number of commenters (both here and on r/slatestarcodex) think it’s also profound, basically because of its reference to the anti-critical-thinking position better argued in the Michael Huemer paper that I cite about halfway through the post.
The question of when to defer to experts and when to think for yourself is important. This post is fun as satire or hyperbole, though it ultimately doesn’t take any real stance on the question.
I think this post is very good (note: I am the author).
Nietzsche is brought up often in different contexts related to ethics, politics, and the best way to live. This post is the best summary on the Internet of his substantive moral theory, as opposed to vague gesturing based on selected quotes. So it’s useful for people who
are interested in what Nietzsche’s arguments actually are, as opposed to their secondhand impressions
have specific questions like “Why does Nietzsche think that the best people are more important?”
want to know whether something can be well-described as “Nietzschean”
It’s able to answer questions like this and describe Nietzsche’s moral theory concisely because it focuses on his lines of argument and avoids any description of his metaphors or historical narratives: no references are made to the Übermensch, the Last Man, the “death of God,” the blond beast, or other concepts that aren’t needed for an analytic account of his theory.
By “calligraphy” do you mean cursive writing?
So why don’t the four states sign a compact to assign all their electoral votes in 2028 and future presidential elections to the winner of the aggregate popular vote in those four states? Would this even be legal?
It would be legal to make an agreement like this (states are authorized to appoint electors and direct their votes however they like; see Chiafalo v. Washington) but it’s not enforceable in the sense that if one of the states reneges, the outcome of the presidential election won’t be reversed.
lol fixed thanks
Yeah it’s for the bounty. Hanson suggested that a list of links might be preferred to a printed book, at least for now, since he might want to edit the posts.
What was it called
Brief comments on what’s bad about the output:
The instruction is to write an article arguing that AI-generated posts suffer from verbosity, hedging, and unclear trains of thought. But ChatGPT makes that complaint in a single sentence in the first paragraph and then spends 6 paragraphs adding a bunch of its own arguments:
that the “nature of conversation itself” draws value from “human experience, emotion, and authenticity” that AI content replaces with “a hollow imitation of dialogue”
that AI content creates “an artificial sense of expertise,” i.e. that a dumb take can be made to seem smarter than it is
that the option to use AI content discourages posters from “engag[ing] deeply with the topic themselves”
It wasn’t instructed to make these arguments and they aren’t really defended as important.
Then lastly ChatGPT offers a “solution”: that people simply disclose when they’re using AI assistance. But it’s not explained how this actually prevents the problems above, besides (I guess) allowing people to downvote AI content or scrutinize it more. Importantly, this proposed solution wouldn’t solve the concern the model was instructed to present.
So in general the instructions aren’t followed and the thinking/argumentation isn’t clear.
I added to your prompt the instructions
Be brief and write concise prose in the style of Paul Graham. Don’t hedge or repeat yourself or go on tangents.
And the output is still bad, but now mostly for the flaw (also present in your output) that ChatGPT can’t resist making the complaint about “human authenticity” and “transparency/trust” when that’s not what you’re talking about:
I’ve noticed a troubling trend on online forums: a surge in posts that clearly seem to be generated by AI. These posts are verbose, meandering, and devoid of real substance. They present every argument with an opposing one, wrapped in so much “on the one hand” and “on the other” that they end up saying nothing at all.
AI in its current form shouldn’t be used for this purpose. The problem isn’t just that these posts waste time, though they do. It’s that they dilute meaningful discourse. When an AI-generated post floods a forum, it clogs the flow of genuine ideas. The writing might seem polished, but it’s hollow, a shell of real thought. And that’s a disservice to any community that values actual discussion.
Forums are places for people to exchange ideas, not for machines to simulate them. Using AI to generate content in this way erodes the authenticity of conversation. It’s a misuse of technology that contributes to the noise rather than to meaningful exchange.
AI can be powerful, but that power should be wielded carefully. Right now, the tools aren’t good enough to replace real human input, and pretending they are does more harm than good.
EDIT: Below is what it looks like if I replace “in the style of Paul Graham” with “in the style of LessWrong”:
AI-generated content in online forums is a growing problem. These tools are designed to mimic human language, but they often fail to deliver meaningful or original contributions. Instead, they produce verbose, meandering prose that tries to cover all bases but ultimately says nothing. This kind of content is worse than useless—it clogs up discussions with noise, making it harder for genuine insights to emerge.
AI, in its current state, lacks the ability to form and defend coherent, novel ideas. It doesn’t understand context, nuance, or the importance of taking a clear stance. The result is writing that hedges endlessly, offering a shallow imitation of thoughtful discourse but devoid of real substance. When forums are flooded with such content, it dilutes the value of the platform. It wastes the time of readers who are searching for real engagement and stifles the development of ideas that can only come from genuine human interaction.
AI tools should not be used to generate content for forums, period. They don’t contribute to the quality of discussion and instead, undermine it. The promise of AI lies in its ability to augment human capabilities, not replace them in areas where human judgment, creativity, and insight are paramount. Until AI can meaningfully contribute to a discussion, we should resist the temptation to use it in contexts where clarity, originality, and human understanding are crucial.
Maybe I should rephrase the sentence in the OP. What I mean is: “After assuming that half of international students scored at or above 1550 and half scored below, the remaining spots are divided among domestic students in such-and-such way.”