I was initially surprised that you think I was generalizing too far—because that’s what I criticized about your quoting of Duncan’s list and in my head I was just pointing to myself as an obviously valid counterexample (because I’m a person who exists, and fwiw many but not all of my friends are similar), not claiming that all other people would be similarly turned off.
But seeing Thane’s reply, I think it’s fair to say that I’m generalizing too far for using the framing of “comfort zone expansion” for things that some people might legitimately find fun.
As I’ll also write in my reply to Thane, I knew some people must find something about things like the ASMR example fun, but my model was more like “Some people think comfort/trust zone expansion itself is fun” rather than “Some people with already-wide comfort/trust zones find it fun to do things that other people would only do under the banner of comfort/trust zone expansion.” Point taken!
Still, I feel like the list could be more representative of humanity in general if it didn’t use so many examples that only appeal to people who like things like circling, awkward social games, etc.
It’s hard to pinpoint why exactly I think many people are highly turned off by this stuff, but I’m pretty sure (based on introspection) that it’s not just fear of humiliation or not trusting other people in the room. There’s something off-putting to me about the performativeness of it. Something like “If the only reason I’m doing it is because I’m following instructions, not because at least one of us actually likes it and the other person happily consents to it, it feels really weird.”
(This actually feels somewhat related to why I don’t like small talk—but that probably can’t be the full explanation because my model of most rationalists is that they probably don’t like small talk.)
Lukas_Gloor
As this post was coming together, Duncan fortuitously dropped a List of Truths and Dares which is pretty directly designed around willingness to be vulnerable, in exactly the sense we’re interested in here. Here is his list; consider it a definition-by-examples of willingness to be vulnerable:
I’m pretty sure you’re missing something (edit: or rather, you got the thing right but have added some other thing that doesn’t belong), because the list in question is about more than just willingness to be vulnerable in the sense that gives value to relationships. (A few examples from the list are fine as definition-by-examples for that purpose, but more than 50% of the examples are about something entirely different.) Most of the examples in the list are about comfort zone expansion. Vulnerability in relationships is natural/authentic (which doesn’t mean it has to feel easy), while comfort zone expansion exercises are artificial/stilted. You might reply that the truth-and-dare context of the list means that obviously everything is going to seem a bit artificial, but the thing you were trying to point at is just “vulnerability is about being comfortable being weird with each other.” But that defense fails because being comfortable is literally the opposite of pushing your comfort zone.
For illustration, if my wife and I put our faces together and we make silly affectionate noises because somehow we started doing this and we like it and it became a thing we do, that’s us being comfortable and a natural expression of playfulness. By contrast, if I were to give people who don’t normally feel like doing this the instruction to put their faces together and make silly affectionate noises, probably the last thing they will be is comfortable!
[Edited to add:] From the list, the best examples are the ones that get people to talk about topics they wouldn’t normally talk about, because the goal is to say true things that are for some reason difficult to say, which is authentic. By contrast, instructing others to perform actions they wouldn’t normally feel like performing (or wouldn’t feel like performing in this artificial sort of setting) is not about authenticity. I’m not saying there’s no use to expanding one’s comfort zone. Personally, I’d rather spend a day in solitary confinement than whisper in a friend’s ear for a minute ASMR-style, but that doesn’t mean that my way of being is normatively correct—I know intellectually that the inner terror of social inhibitions or the intense disdain for performative/fake-feeling social stuff isn’t to my advantage in every situation. Still, in the same way, those who’ve made it a big part of their identity to continuously expand their comfort zones (or maybe see value in helping others come out of their shell) should also keep in mind that not everyone values that sort of thing or needs it in their lives.
At a moderate P(doom), say under 25%, from a selfish perspective it makes sense to accelerate AI if it increases the chance that you get to live forever, even if it increases your risk of dying.
If you’re not elderly or otherwise at risk of irreversible harms in the near future, then pausing for a decade (say) to reduce the chance of AI ruin by even just a few percentage points still seems good. So the crux is still “can we do better by pausing.” (This assumes pauses on the order of 2–20 years; the argument changes for longer pauses.)
Maybe people think the background level of x-risk is higher than it used to be over the last decades because the world situation seems to be deteriorating. But IMO this also increases the selfishness of pushing AI forward: if you’re that desperate for a deus ex machina, surely you also have to think that there’s a good chance things will get worse when you push technology forward.
(Lastly, I also want to note that for people who care less about living forever and more about near-term achievable goals like “enjoy life with loved ones,” the selfish thing would be to delay AI indefinitely, because rolling the dice for a longer future is then less obviously worth it.)
Well done finding the direct contradiction. (I also thought the claims seemed fishy but didn’t think of checking whether model running costs are bigger than revenue from subscriptions.)
Two other themes in the article that seem in a bit of tension for me:
Models have little potential/don’t provide much value.
People use their subscriptions so much that the company loses money on its subscriptions.
It feels like if people max out use on their subscriptions, then the models are providing some kind of value (making it promising to keep working on them, even if just to make inference cheaper). By contrast, if people don’t use them much, you should at least be able to make a profit on existing subscriptions (even if you might be worried about user retention and growth rates).
All of that said, I also get the impression “OpenAI is struggling.” I just think it has more to do with their specific situation rather than with the industry (plus I’m not as confident in this take as the author seems to be).
Rob Bensinger: If you’re an AI developer who’s fine with AI wiping out humanity, the thing that should terrify you is AI wiping out AI.
The wrong starting seed for the future can permanently lock in AIs that fill the universe with non-sentient matter, pain, or stagnant repetition.
For those interested in this angle (how AI outcomes without humans could still go a number of ways, and what variables could make them go better/worse), I recently brainstormed here and here some things that might matter.
Parts of how that story was written trigger my sense of “this might have been embellished.” (It reminds me of viral reddit stories.)
I’m curious if there are other accounts where a Nova persona got a user to contact a friend or family member with the intent of getting them to advocate for the AI persona in some way.
“The best possible life” for me pretty much includes “everyone who I care about is totally happy”?
Okay, I can see it being meant that way. (Even though, if you take this logic further, you could, as an altruist, make it include everything going well for everyone everywhere.) Still, that’s only 50% of the coinflip.
And parents certainly do dangerous risky things to provide better future for their children all the time.
Yeah, that’s true. I could even imagine that parents are more likely to flip coins that say “you die for sure but your kids get a 50% chance of the perfect life.” (Especially if the kids are at an age where they would be able to take care of themselves even under the bad outcome.)
Are you kidding me? What is your discount rate? Not flipping that coin is absurd.
Not absurd. Not everything is “maximize your utility.” Some people care about the trajectory they’re on together with other people. Are parents supposed to just leave their children? Do married people get to flip a coin that decides for both of them, or do they have to make independent flips (or does only one person get the opportunity)?
Also, there may be further confounders so that the question may not tell you exactly what you think it tells you. For instance, some people will flip the coin because they’re unhappy and the coin is an easy way to solve their problems one way or another—suicide feels easier if someone else safely does it for you and if there’s a chance of something good to look forward to.
Thanks for this newsletter, I appreciate the density of information!
I thought about this and I’m not sure Musk’s changes in “unhingedness” require more explanation than “power and fame have the potential to corrupt and distort your reasoning, making you more overconfident.” The result looks a bit like hypomania, but I’ve seen this before with people who got fame and power injections. While Musk was already super accomplished (and for justified reasons) before taking over Twitter and jumping into politics, being the Twitter owner (so he can activate algorithmic godmode and get even more attention) probably boosted both his actual fame and his perceived fame by a lot, and being close buddies with the President certainly gave him more power too. Maybe this was too much—you probably have to be unusually grounded and principled to not go a bit off the rails if you’re in that sort of position. (Or maybe that means you shouldn’t want to maneuver yourself into quite that much power in the first place.)
It feels vaguely reasonable to me to have a belief as low as 15% on “Superalignment is Real Hard in a way that requires like a 10-30 year pause.” And, at 15%, it still feels pretty crazy to be oriented around racing the way Anthropic is.
Yeah, I think the only way I maybe find the belief combination “15% that alignment is Real Hard” and “racing makes sense at this moment” compelling is if someone thinks that pausing now would be too late and inefficient anyway. (Even then, it’s worth considering the risks of “What if the US aided by AIs during takeoff goes much more authoritarian to the point where there’d be little difference between that and the CCP?”) Like, say you think takeoff is just a couple of years of algorithmic tinkering away and compute restrictions (which are easier to enforce than prohibitions against algorithmic tinkering) wouldn’t even make that much of a difference now.
However, if pausing now is too late, we should have paused earlier, right? So, insofar as some people today justify racing via “it’s too late for a pause now,” where were they earlier?
Separately, I want to flag that my own best guess on alignment difficulty is somewhere in between your “Real Hard” and my model of Anthropic’s position. I’d say I’m overall closer to you here, but I find the “10-30y” thing a bit too extreme. I think that’s almost like saying, “For practical purposes, we non-uploaded humans should think of the deep learning paradigm as inherently unalignable.” I wouldn’t confidently put that below 15% (we simply don’t understand the technology well enough), but I likewise don’t see why we should be confident in such hardness, given that ML at least gives us better control of the new species’ psychology than, say, animal taming and breeding (e.g., Carl Shulman’s arguments somewhere—iirc—in his podcasts with Dwarkesh Patel). Anyway, the thing that I instead think of as the “alignment is hard” objection to the alignment plans I’ve seen described by AI companies is mostly just a sentiment of, “no way you can wing this in 10 hectic months while the world around you goes crazy.” Maybe we should call this position “alignment can’t be winged.” (For the specific arguments, see posts by John Wentworth, such as this one and this one [particularly the section, “The Median Doom-Path: Slop, Not Scheming”].)
The way I could become convinced otherwise is if the position is more like, “We’ve got the plan. We think we’ve solved the conceptually hard bits of the alignment problem. Now it’s just a matter of doing enough experiments where we already know the contours of the experimental setups. Frontier ML coding AIs will help us with that stuff and it’s just a matter of doing enough red teaming, etc.”
However, note that even when proponents of this approach describe it themselves, it sounds more like “we’ll let AIs do most of it ((including the conceptually hard bits?))” which to me just sounds like they plan on winging it.
The DSM-5 may draw a bright line between them (mainly for insurance reimbursement and treatment protocol purposes), but neurochemically, the transition is gradual.
That sounded mildly surprising to me (though in hindsight I’m not sure why it did) so I checked with Claude 3.7, and it said something similar in reply to me trying to ask a not-too-leading question. (Though it didn’t talk about neurochemistry—just that behaviorally the transition or distinction can often be gradual.)
In my comments thus far, I’ve been almost exclusively focused on preventing severe abuse and too much isolation.
Something else I’m unsure about, but not necessarily a hill I want to die on given that government resources aren’t unlimited, is the question of whether kids should have a right to “something at least as good as voluntary public school education.” I’m not sure if this can be done cost-effectively, but if the state had a lot of money that it’s not otherwise using in better ways, then I think it would be pretty good to have standardized tests for homeschooled kids every now and then, maybe every two to three years. One of them could be an IQ test, the other an abilities test. If the kid has an IQ that suggests they could learn things well, but they seem far behind other children their age, and you ask them if they want to learn and they say yes with enthusiasm, then that’s suggestive of the parents doing an inadequate job, in which case you could put them on homeschooling probation and/or force them to allow their child to go to public school.
More concretely, do you think parents should have to pass a criminal background check (assuming this is what you meant by “background check”) in order to homeschool, even if they retain custody of their children otherwise?
I don’t really understand why you’re asking me about this more intrusive and less-obviously-cost-effective intervention, when one of the examples I spelled out above was a lower-effort, less intrusive, less controversial version of this sort of proposal.
I wrote above:
Like, even if yearly check-ins for everyone turn out to be too expensive, you could at least check if people who sign their kid up for homeschooling already have a history of neglect and abuse, so that you can add regular monitoring if that turns out to be the case. (Note that such background checks are a low-effort action where the article claims no state is doing it so far.)
(In case this wasn’t clear, by “regular monitoring” I mean stuff like “have a social worker talk to the kids.”)
To make this more vivid: if someone is, e.g., a stepdad with a history of child sexual abuse, or there’s been a previous substantiated complaint about child neglect or physical abuse in some household, then yeah, it’s probably a bad idea if parents with such track records can pull children out of public schools and thereby avoid all outside accountability for the next decade or so, possibly putting their children in a situation where no one would notice if they deteriorated or showed worsening signs of severe abuse. Sure, you’re right that the question of custody plays into that. You probably agree that there are some cases where custody should be taken away. With children in school, there’s quite a bit of “noticing surface”—opportunities for people to notice and check in if something seems really off. With children in completely unregulated homeschooling environments, the noticing surface could be all the way down to zero (like, maybe the evil grandma locked the children in a dark basement for the last two years and no one outside the household would know). All I’m saying is: households that opt for potentially high isolation should get compensatory check-ins.
I even flagged that it may be too much effort to hire enough social workers to visit all the relevant households, so I proposed the option that maybe no one needs to check in yearly if Kelsey Piper and her friends are jointly homeschooling their kids, and instead, monitoring resources could get concentrated on cases where there’s a higher prior of severe abuse and neglect.
Again, I don’t see how that isn’t reasonable.
Habryka claims I display a missing mood of not understanding how costly marginal regulation can be. In turn, I for sure feel like the people I’ve been arguing with display something weird. I wouldn’t call it a missing mood, but more like a missing ambition to make things as good as they can be, to think in nuances, and to not demonize (and write off without closer inspection) all possible regulation just because it’s common for regulation to go too far.
Thanks for elaborating, that’s helpful.
If we were under a different education regime
Something like what you describe would maybe even be my ideal too (I’m hedging because I don’t have super informed views on this). But I don’t understand how my position of “let’s make sure we don’t miss out on low-cost, low-downside ways of safeguarding children (who btw are people too and didn’t consent to be born, especially not in cases where their parents lack empathy or treat children as not people) from severe abuse” is committed to having to answer this hypothetical. I feel like all my position needs to argue is that some children have parents/caretakers where it would be worse if they had 100% control and no accountability than if the children also spent some time outside the household in public school. This can hold true even if we grant that mandatory public school is itself abusive to children who don’t want to be there.
Seriously, −5/-11?
I went through my post line by line and I don’t get what people are allergic to. I’m not taking sides. I flagged that some of the criticisms of homeschooling appear reasonable and important to me. I’m pretty sure I’m right about this, but somehow people want me to say less of this sort of thing, because what? Because public schools are “hell”? How is that different from people who consider the other political party so bad that you cannot say one nuanced thing about them—isn’t that looked down upon on this site?
Also, speaking of “hell,” I want to point out that not all types of abuse are equal and that the most extreme cases of childhood suffering probably happen disproportionately in the most isolated of homes.
How can it not be an ideal to aim for that all children have contact with some qualified person outside their household who can check that they’re not being badly abused? Admittedly, it’s not understood to be a public school teacher’s primary role to notice when something with a child is seriously wrong, but it’s a role that they often end up filling (and I wouldn’t be surprised if they even get trained in this in many areas).

You don’t necessarily need public schools for this sort of checking in that serves as a safeguard against some of the most severe kinds of prolonged abuse, but if you just replace public schooling with homeschooling, that role falls away. So, what you could do to get some of the monitoring back: have homeschooling with (e.g.) yearly check-ins with the affected children from a social worker. I don’t know the details, but my guess is that some states have this and others don’t. (Like the “hit piece” claims, regulation differs from state to state and some are poorly regulated.)

I’m not saying I know for sure whether yearly check-ins would be cost-effective compared to other things the state puts money in, but it totally might be, and I doubt that the people who are trying to discourage this discussion (with downvotes and fully-general counterarguments that clearly prove way too much) know enough about this either, to be certain that there are no easy/worthwhile ways to make the situation safer. Like, even if yearly check-ins for everyone turn out to be too expensive, you could at least check if people who sign their kid up for homeschooling already have a history of neglect and abuse, so that you can add regular monitoring if that turns out to be the case. (Note that such background checks are a low-effort action where the article claims no state is doing it so far.)
I got the impression that using only an external memory like in the movie Memento (and otherwise immediately forgetting everything that wasn’t explicitly written down) was the biggest hurdle to faster progress. I think it does kind of okay considering that huge limitation. Visually, it would also benefit from learning the difference between what is or isn’t a gate/door, though.
It depends on the efficiency of the interventions you’d come up with (some may not be much of a “burden” at all) and on the elasticity with which parents who intend to homeschool are turned away by “burdens.” You make a good point, but what you say is not generally true—it totally depends on the specifics of the situation. (Besides, didn’t the cited study say that both rates of abuse were roughly equal? I don’t think anyone suggested that public schooled kids have [edit: drastically] higher abuse rates than homeschooled ones. Was it 37% vs 36%?)
I feel like it’s worth pointing out the ways homeschooling can go badly wrong. Whether or not there’s a correlation between homeschooling and abuse, it’s obvious that homeschooling can cover up particularly bad instances of abuse (even if it’s not the only way to do that). So, a position of “homeschooling has the potential to go very bad; we should probably have good monitoring to prevent that; are we sure we’re doing that? Can we check?” seems sensible.
The article you call a “hit piece” makes some pretty sensible points. The title isn’t something like “The horrors of homeschooling,” but rather, “Children Deserve Uniform Standards in Homeschooling.”
Here are some quotes that support this reasonable angle:
Some children may not be receiving any instruction at all. Most states don’t require homeschooled kids to be assessed on specific topics the way their classroom-based peers are. This practice enables educational neglect that can have long-lasting consequences for a child’s development.
[...]
Not one state checks with Child Protective Services to determine whether the parents of children being homeschooled have a history of abuse or neglect.
[...]
But federal mandates for reporting and assessment to protect children don’t need to be onerous. For example, homeschool parents could be required to pass an initial background check, as every state requires for all K–12 teachers.
Those all seem like important points to me regardless of whether the article is right about the statistics you and Eric Hoel criticize.
(As an aside, I don’t even get why Eric Hoel is convinced that the article wanted to claim that homeschooling is correlated with abuse. To me, these passages, including the 36% figure, are not about claiming that homeschooling leads to more abuse than public schooling. Instead, I interpret them as saying, “There’s too much abuse happening in the homeschooling context, so it would be good to have better controls.” The article even mentions that homeschooling may be the best choice for some children.)
The variance in homeschooling is clearly huge. The affluent rationalists who coordinate with each other to homeschool their kids are a totally different case from aella’s upbringing, which is yet again different from the religious fundamentalist, Holocaust-denying household where the father has untreated bipolar disorder and paranoid delusions, and the mother makes medically inadequate home remedies the sole treatment for the injuries of their half dozen (or so) children, who work on the family scrapyard where life-altering injuries are common. See this memoir—sure, it’s an extreme example, but how many times do we not hear about the experiences of children in situations like that, since the majority of them don’t break free and become successful writers?
Thanks, that’s helpful context! Yeah, it’s worth flagging that I have not read Duncan’s post beyond the list.
Seems like my reaction proved this part right, at least. I knew some people must find something about it fun, but my model was more like “Some people think comfort/trust zone expansion itself is fun” rather than “Some people with already-wide comfort/trust zones find it fun to do things that other people would only do under the banner of comfort/trust zone expansion.”
(Sometimes the truth can be somewhere in the middle, though? I would imagine that the people who would quite like to do most of the things in the list find it appealing that it’s about stuff you “don’t normally do,” that it’s “pushing the envelope” a little?)
That said, I don’t feel understood by the (fear of) humiliation theme in your summary of Duncan’s post. Sure, that’s a thing and I have it as well, but the even bigger reason why I wouldn’t be comfortable going through a list of “actions to do in the context of a game that’s supposed to be fun” is that that entire concept just doesn’t do anything for me? It seems pointless at best, plus there’s discomfort from the artificiality of it.
As I also wrote in my reply to John: