Adding my anecdote to everyone else’s: after learning about the palatability hypothesis, I resolved to eat only non-tasty food for a while, and lost 30 pounds over about four months (200 → 170). I’ve since relaxed my diet to include a little tasty food, and now (8 months after the start) have maintained that loss (even going down a little further).
Scott Alexander
Update: I interviewed many of the people involved and feel like I understand the situation better.
My main conclusion is that I was wrong about Michael making people psychotic. Everyone I talked to had some other risk factor, like a preexisting family or personal history of mental illness, or took recreational drugs at doses that would explain their psychotic episodes.
Michael has a tendency to befriend people with high trait psychoticism and heavy drug use, and often has strong opinions on their treatment, which explains why he is often very close to people and very noticeable at the moment they become psychotic. But aside from one case where he recommended someone take a drug that made a bad situation slightly worse, and the general Berkeley rationalist scene that he (and I and everyone else here) is a part of having lots of crazy ideas that are psychologically stressful, I no longer think he is a major cause.
While interviewing the people involved, I did get some additional reasons to worry that he uses cult-y high-pressure recruitment tactics on people he wants things from, in ways that make me continue to be nervous about the effect he *could* have on people. But the original claim I made that I knew of specific cases of psychosis which he substantially helped precipitate turned out to be wrong, and I apologize to him and to Jessica. Jessica’s later post https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards explained in more detail what happened to her, including the role of MIRI and of Michael and his friends, and everything she said there matches what I found too. Insofar as anything I wrote above produces impressions that differ from her explanation, assume that she is right and I am wrong.
Since the interviews involve a lot of people’s private details, I won’t be posting anything more substantial than this publicly without a lot of thought and discussion. If for some reason this is important to you, let me know and I can send you a more detailed summary of my thoughts.
I’m deliberately leaving this comment in this obscure place for now while I talk to Michael and Jessica about whether they would prefer a more public apology that also brings all of this back to people’s attention again.
I agree it’s not necessarily a good idea to go around founding the Let’s Commit A Pivotal Act AI Company.
But I think there’s room for subtlety somewhere like “Conditional on you being in a situation where you could take a pivotal act, which is a small and unusual fraction of world-branches, maybe you should take a pivotal act.”
That is, if you are in a position where you have the option to build an AI capable of destroying all competing AI projects, the moment you notice this you should update heavily in favor of short timelines (zero in your case, but everyone else should be close behind) and fast takeoff speeds (since your AI has these impressive capabilities). You should also update on existing AI regulation being insufficient (since it was insufficient to prevent you).
Somewhere halfway between “found the Let’s Commit A Pivotal Act Company” and “if you happen to stumble into a pivotal act, take it”, there’s an intervention to spread a norm of “if a good person who cares about the world happens to stumble into a pivotal-act-capable AI, take the opportunity”. I don’t think this norm would necessarily accelerate a race. After all, bad people who want to seize power can take pivotal acts whether we want them to or not. The only people who are bound by norms are good people who care about the future of humanity. I, as someone with no loyalty to any individual AI team, would prefer that (good, norm-following) teams take pivotal acts if they happen to end up with the first superintelligence, rather than not doing that.
Another way to think about this is that all good people should be equally happy with any other good person creating a pivotal AGI, so they won’t need to race among themselves. They might be less happy with a bad person creating a pivotal AGI, but in that case you should race and you have no other option. I realize “good” and “bad” are very simplistic but I don’t think adding real moral complexity changes the calculation much.
I am more concerned about your point where someone rushes into a pivotal act without being sure their own AI is aligned. I agree this would be very dangerous, but it seems like a job for normal cost-benefit calculation: what’s the risk of your AI being unaligned if you act now, vs. someone else creating an unaligned AI if you wait X amount of time? Do we have any reason to think teams would be systematically biased when making this calculation?
My current plan is to go through most of the MIRI dialogues and anything else lying around that I think would be of interest to my readers, at some slow rate where I don’t scare off people who don’t want to read too much AI stuff. If anyone here feels like something else would be a better use of my time, let me know.
I don’t think hunter-gatherers get 16000 to 32000 IU of Vitamin D daily. This study suggests Hadza hunter-gatherers get more like 2000 IU/day. I think the difference between their calculation and yours is that they find that hunter-gatherers avoid the sun during the hottest part of the day. It might also have to do with them being black; I’m not sure.
Hadza hunter-gatherers have serum D levels of about 44 ng/ml. Based on this paper, I think you would need total vitamin D (diet + sunlight + supplements) of about 4400 IU/day to get that amount. If you start off as a mildly deficient American (15 ng/ml), you’d need an extra 2900 IU/day; if you start out as an average white American (30 ng/ml), you’d need an extra 1400 IU/day. The Hadza are probably an overestimate of what you need since they’re right on the equator—hunter-gatherers in eg Europe probably did fine too. I think this justifies the doses of 400–2000 IU/day in studies as reasonably evolutionarily-informed.
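If it helps, here is the back-of-the-envelope arithmetic behind those numbers as a minimal sketch; the ~100 IU/day per 1 ng/ml conversion factor is my assumption, read off the rough dose-response implied by the 4400 IU/day → 44 ng/ml figure, not an exact rule.

```python
# Minimal sketch of the dose arithmetic above. Assumes a linear rule of thumb of
# ~100 IU/day of total vitamin D per 1 ng/ml of serum 25(OH)D at steady state,
# which is the conversion the 4400 IU/day -> 44 ng/ml figure implies.
IU_PER_NG_ML = 100
HADZA_LEVEL = 44  # ng/ml

def extra_iu_needed(current_ng_ml, target_ng_ml=HADZA_LEVEL):
    """IU/day of additional vitamin D to move serum levels from current to target."""
    return max(0, (target_ng_ml - current_ng_ml) * IU_PER_NG_ML)

print(extra_iu_needed(0))   # ~4400 IU/day total to reach Hadza-level 44 ng/ml
print(extra_iu_needed(15))  # mildly deficient American: ~2900 IU/day extra
print(extra_iu_needed(30))  # average white American: ~1400 IU/day extra
```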
Please don’t actually take 16000 IU of vitamin D daily; taken long-term, that would put you at risk of vitamin D toxicity.
I also agree with the concerns other people have raised about the individual studies.
Thanks for looking into this.
Maybe. It might be that if you described what you wanted more clearly, it would be the same thing that I want, and possibly I was incorrectly associating this with the things at CFAR you say you’re against, in which case sorry.
But I still don’t feel like I quite understand your suggestion. You talk of “stupefying egregores” as problematic insofar as they distract from the object-level problem. But I don’t understand how pivoting to egregore-fighting isn’t also a distraction from the object-level problem. Maybe this is because I don’t understand what fighting egregores consists of, and if I knew, then I would agree it was some sort of reasonable problem-solving step.
I agree that the Sequences contain a lot of useful deconfusion, but I interpret them as useful primarily because they provide a template for good thinking, and not because clearing up your thinking about those things is itself necessary for doing good work. I think of the cryonics discussion the same way I think of the Many Worlds discussion—following the motions of someone as they get the right answer to a hard question trains you to do this thing yourself.
I’m sorry if “cultivate your will” has the wrong connotations, but you did say “The problem that’s upstream of this is the lack of will”, and I interpreted a lot of your discussion of de-numbing and so on as dealing with this.
Part of what inspired me to write this piece at all was seeing a kind of blindness to these memetic forces in how people talk about AI risk and alignment research. Making bizarre assertions about what things need to happen on the god scale of “AI researchers” or “governments” or whatever, roughly on par with people loudly asserting opinions about what POTUS should do. It strikes me as immensely obvious that memetic forces precede AGI. If the memetic landscape slants down mercilessly toward existential oblivion here, then the thing to do isn’t to prepare to swim upward against a future avalanche. It’s to orient to the landscape.
The claim “memetic forces precede AGI” seems meaningless to me, except insofar as memetic forces precede everything (eg the personal computer was invented because people wanted personal computers and there was a culture of inventing things). Do you mean it in a stronger sense? If so, what sense?
I also don’t understand why it’s wrong to talk about what “AI researchers” or “governments” should do. Sure, it’s more virtuous to act than to chat randomly about stuff, but many Less Wrongers are in positions to change what AI researchers do, and if they have opinions about that, they should voice them. This post of yours right now seems to be about what “the rationalist community” should do, and I don’t think it’s a category error for you to write it.
Maybe this would be easier if you described what actions we should take conditional on everything you wrote being right.
Thank you for writing this. I’ve been curious about this and I think your explanation makes sense.
I wasn’t convinced of this ten years ago and I’m still not convinced.
When I look at people who have contributed most to alignment-related issues—whether directly, like Eliezer Yudkowsky and Paul Christiano—or theoretically, like Toby Ord and Katja Grace—or indirectly, like Sam Bankman-Fried and Holden Karnofsky—what all of these people have in common is focusing mostly on object-level questions. They all seem to me to have a strong understanding of their own biases, in the sense that gets trained by natural intelligence, really good scientific work, and talking to other smart and curious people like themselves. But as far as I know, none of them have made it a focus of theirs to fight egregores, defeat hypercreatures, awaken to their own mortality, refactor their identity, or cultivate their will. In fact, all of them (except maybe Eliezer) seem like the kind of people who would be unusually averse to thinking in those terms. And if we pit their plumbing or truck-maneuvering skills against those of an average person, I see no reason to think they would do better (besides maybe high IQ and general ability).
It’s seemed to me that the more that people talk about “rationality training” more exotic than what you would get at a really top-tier economics department, the more those people tend to get kind of navel-gazey, start fighting among themselves, and not accomplish things of the same caliber as the six people I named earlier. I’m not just saying there’s no correlation with success, I’m saying there’s a negative correlation.
(Could this be explained by people who are naturally talented not needing to worry about how to gain talent? Possibly, but this isn’t how it works in other areas—for example, all top athletes, no matter how naturally talented, have trained a lot.)
You’ve seen the same data I have, so I’m curious what makes you think this line of research/thought/effort will be productive.
If everyone involved donates a consistent amount to charity every year (eg 10% of income), the loser could donate their losses to charity, and the winner could count that against their own charitable giving for the year, ending up with more money even though the loser didn’t directly pay the winner.
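To make the accounting concrete, here is a toy sketch; the income and bet size are made-up numbers, and the 10% pledge is just the figure from the example above.

```python
# Toy illustration of settling a bet through charity, assuming both people
# already pledge 10% of income. Income and bet size below are made up.
income, pledge_rate, bet = 50_000, 0.10, 100
pledge = pledge_rate * income       # each person planned to give $5,000 anyway

loser_donates = pledge + bet        # loser gives their pledge plus the $100 they lost
winner_donates = pledge - bet       # winner counts that extra $100 against their own
                                    # pledge and gives $100 less

assert loser_donates + winner_donates == 2 * pledge  # charity gets the same total
print(pledge - winner_donates)      # 100.0: winner keeps $100 more than planned
print(loser_donates - pledge)       # 100.0: loser is out $100, as if they'd paid the winner
```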
Thanks for doing this!
Interpreting you as saying that during January–June 2017 you were basically doing the same thing as the Leveragers when talking about demons and had no other signs of psychosis, I agree this was not a psychiatric emergency, and I’m sorry if I got confused and suggested it was. I’ve edited my post also.
Sorry, yes, I meant the psychosis was the emergency. Non-psychotic discussion of auras/demons isn’t.
I’m kind of unclear what we’re debating now.
I interpret us as both agreeing that there are people talking about auras and demons who are not having psychiatric emergencies (eg random hippies, Catholic exorcists), and they should not be bothered, except insofar as you feel like having rational arguments about it.
I interpret us as both agreeing that you were having a psychotic episode, that you were going further / sounded less coherent than the hippies and Catholics, and that some hypothetical good diagnostician / good friend should have noticed that and suggested you seek help.
Am I right that we agree on those two points? Can you clarify what you think our crux is?
You wrote that talking about auras and demons the way Jessica did while at MIRI should be considered a psychiatric emergency. When done by a practicing psychiatrist this is an impingement on Jessica’s free speech.
I don’t think I said any talk of auras should be a psychiatric emergency; otherwise we’d have to commit half of Berkeley. I said that “in the context of her being borderline psychotic”, i.e. including this symptom, they should have “[told] her to seek normal medical treatment”. Suggesting that someone seek normal medical treatment is pretty different from saying this is a psychiatric emergency, and hardly an “impingement” on free speech. I’m kind of playing this on easy mode here because in hindsight we know Jessica ended up needing treatment, which I feel makes it pretty hard to make it sound sinister when I suggest this.
You wrote this in response to a post that contained the following and only the following mentions of demons or auras:
“During this time, I was intensely scrupulous; I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation...” [followed by several more things along these lines]
Yes? That actually sounds pretty bad to me. If I ever go around saying that I have destroyed significant parts of the world with my demonic powers, you have my permission to ask me if maybe I should seek psychiatric treatment. If you say “Oh yes, Scott, that’s a completely normal and correct thing to think, I am validating you and hope you go deeper into that”, then once I get better I’ll accuse you of being a bad friend. Jessica’s doing the opposite and accusing MIRI of being a bad workplace for not validating and reinforcing her in this!
I think what we all later learned about Leverage confirms all this. Leverage ~~did the thing Jessica wanted MIRI to do~~ told everyone ex cathedra that demons were real and they were right to be afraid of them, and so they got an epidemic of mass hysteria that sounds straight out of a medieval nunnery. People were getting all sorts of weird psychosomatic symptoms, and one of the commenters said their group house exploded when one member accused another of being possessed by demons and refused to talk to or communicate with them in case the demons spread, and the “possessed” member had to move out. People felt traumatized, relationships were destroyed, it sounded awful.

MIRI is under no obligation to ~~validate and signal-boost~~ tolerate individual employees’ belief in demons, including some sort of metaphorical demons. In fact, I think they’re under a mild obligation not to, as part of their ~leader-ish role in a rationalist community. They’re under an obligation to model good epistemics for the rest of us and avoid more Leverage-type mass hysterias.

One of my heroes is this guy:
https://www.youtube.com/watch?v=Bmo1a-bimAM
Surinder Sharma, an Indian mystic, claimed to be able to kill people with a voodoo curse. He was pretty convincing and lots of people were legitimately scared. Sanal Edamaruku, president of the Indian Rationalist Organization, challenged Sharma to kill him. Since this is the 21st century and capitalism is amazing, they decided to do the whole death curse on live TV. Sharma sprinkled water and chanted magic words around Edamaruku. According to Wikipedia, “the challenge ended after several hours, with Edamaruku surviving unharmed”.
If Leverage had a few more Sanal Edamarukus, a lot of people would have avoided a pretty weird time.
I think the best response MIRI could have had to all this would have been for Nate Soares to challenge Geoff Anders to infect him with a demon on live TV, then walk out unharmed and laugh. I think the second-best was the one they actually did.
EDIT: I think I misunderstood parts of this, see below comments.
Thanks for this.
I’ve been trying to research and write something kind of like this giving more information for a while, but got distracted by other things. I’m still going to try to finish it soon.
While I disagree with Jessica’s interpretations of a lot of things, I generally agree with her facts (about the Vassar stuff which I have been researching; I know nothing about the climate at MIRI). I think this post gives most of the relevant information mine would give. I agree with (my model of) Jessica that proximity to Michael’s ideas (and psychedelics) was not the single unique cause of her problems but may have contributed.
The main thing I’d fight if I felt fighty right now is the claim that by not listening to talk about demons and auras, MIRI (or by extension me, who endorsed MIRI’s decision) is impinging on her free speech. I don’t think she should face legal sanction for talking about these things, but I also don’t think other people were under any obligation to take it seriously, including if she was using these terms metaphorically but they disagree with her metaphors or think she wasn’t quite being metaphorical enough.
Embryos produced by the same couple won’t vary in IQ too much, and we only understand some of the variation in IQ, so we’re trying to predict small differences without being able to see what’s going on too clearly. Gwern predicts that if you had ten embryos to choose from, understood the SNP portion of IQ genetics perfectly, and picked the highest-IQ without selecting on any other factor, you could gain ~9 IQ points over natural conception.
Given our current understanding of IQ genetics, keeping the other two factors the same, you can gain ~3 points. But the vast majority of couples won’t get 10 embryos, and you may want to select for things other than IQ (eg not having deadly diseases). So in reality it’ll be less than that.
The only thing here that will get better in the future is our understanding of IQ genetics, but it doesn’t seem to be moving forward especially quickly; at some point we’ll exhaust the low- and medium-hanging fruit, and even if we do a great job there the gains will max out somewhere below 9 points.
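For anyone curious where numbers like ~9 and ~3 come from, here is a rough order-statistics sketch in the spirit of Gwern’s model; the heritability and predictor-accuracy values below are placeholder assumptions of mine, not his exact inputs.

```python
# Rough order-statistics sketch of expected gains from embryo selection, in the
# spirit of Gwern's model. Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def expected_gain(n_embryos, var_explained, sims=200_000):
    """Expected IQ gain from picking the embryo with the highest polygenic score.
    Siblings share about half the additive variance, so scores differ between
    embryos with variance 0.5 * var_explained * 15**2 (IQ points squared)."""
    sd_between_embryos = np.sqrt(0.5 * var_explained) * 15
    scores = rng.standard_normal((sims, n_embryos)) * sd_between_embryos
    return scores.max(axis=1).mean()

print(expected_gain(10, 0.33))  # near-perfect SNP predictor, 10 embryos: ~9 points
print(expected_gain(10, 0.04))  # roughly today's predictor accuracy: ~3 points
print(expected_gain(4, 0.04))   # fewer embryos shrink the gain further (~2 points)
```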
Also, this is assuming someone decides to make polygenic screening for IQ available at some point, or someone puts in the work to make it easy for the average person to do despite being not officially available.
I am not an expert in this and would defer to Gwern or anyone who knows more.
“Diagnosed” isn’t a clear concept.
The minimum viable “legally-binding” ADHD diagnosis is one where a psychiatrist asks you about your symptoms, compares them to the extremely vague criteria in the DSM, and agrees that you sound ADHD-ish.
ADHD is a fuzzy construct without clear edges, and there is no fact of the matter about whether any given individual has it. So this is just replacing your own opinion about whether you seem to fit a vaguely-defined template with a psychiatrist’s only slightly more informed opinion. The most useful things you could get out of this are meds (which it seems you don’t want), accommodations at certain workplaces and schools (as Elizabeth describes in her comment), and maybe getting your insurance to pay for certain kinds of therapy—but don’t assume your insurance will actually do this unless you check.
Beyond that minimum viable diagnosis, there are also various complicated formal ADHD tests. Not every psychiatrist will refer you for these, not every insurance company will pay for them, and you should be prepared to advocate for yourself hard if you want one. If you get one, it can tell you what percentile you are in for various cognitive skills (for example, that 95% of people are better at maintaining focus than you are). Maybe some professional knows how to do something useful with this, but I (a psychiatrist) don’t, and you probably won’t find that professional unless you look hard for them.
If you already have a strong sense of your cognitive strengths and weaknesses and don’t need accommodations, I don’t think the diagnosis would add very much. Even without a diagnosis, if you think you have problems with attention/focus/etc, you can read books aimed at ADHD people to try to see what kind of lifestyle changes you can make.
In very rare cases, you will get a very experienced psychiatrist who is happy to work with you on making lifestyle/routine changes and very good at telling you what to do, but don’t expect this to happen by accident. You’re more likely to get this from an ADHD coach, who will take you as a client whether or not you have an official diagnosis.
I would look into social impact bonds, impact certificates, and retroactive public goods funding. I think these are three different attempts to get at the same insight you’ve had here. There are incipient efforts to get some of them off the ground and I agree that would be great.
There’s polygenic screening now. It doesn’t include eg IQ, but polygenic screening for IQ is unlikely to be very good any time in the near future. Probably polygenic screening for other things will improve at some rate, but regardless of how long you wait, it could always improve more if you wait longer, so there will never be a “right time”.
Even in the very unlikely scenario where your decision about child-rearing should depend on something about polygenic screening, I say do it now.
For the first part of the experiment, mostly nuts, bananas, olives, and eggs. Later I added vegan sausages + condiments.