Startup Success Rates Are So Low Because the Rewards Are So Large
Hi, I write AppliedDivinityStudies.com, which you link to. A couple quick clarifications:
- The blog is not written by Alexey Guzey.
- In the piece you link I’m just taking Toby Ord’s estimates at face value to use them as a parameter; I haven’t given this a ton of thought.
But basically I do think AI Risk is important. I don’t write about it because I don’t have anything particularly smart to say. As you note, it’s a complex topic, and I don’t really feel like there’s any value in me contributing unless I were to really invest in learning much more.
Once every couple years or so, I feel bad about this and try to spend a few days learning much more. Given those experiences, I think it’s reasonable for me to believe that I’m bad enough at thinking about AI Risk that I can justify not working on it full-time.
My contributions to the effort, if I have any, will mostly be in more abstract philosophical discourse. The post you link, for example, is about whether trying to accelerate scientific progress would be good for x-risk. I have more work coming up on whether we should expect an optimized dystopia to be worse than an optimized utopia is good.
Guyenet, whose palatability theory you seem to prefer, shares the same interpretation as SMTM. His chart from The Hungry Brain:
And his commentary:
In 1960, one out of seven US adults had obesity. By 2010, that number had increased to one out of three (see figure 2). The prevalence of extreme obesity increased even more remarkably over that time period, from one out of 111 to one out of 17. Ominously, the prevalence of obesity in children also increased nearly fivefold. Most of these changes occurred after 1978 and happened with dizzying speed. [emphasis mine]
Wait what? This isn’t even a chart of the same thing.
It’s a chart of data from 1880 to ~1980, whereas SMTM is looking specifically at the change in 1980.
It’s a chart of BMI, whereas the SMTM chart is looking at growth in extreme outcomes. Say you have a normal distribution with mean 24 and SD 2. Only 0.13% of the population will have a BMI over 30. But as the mean BMI slowly increases, you see rapid growth in extreme outcomes: at a mean BMI of 26, you’re up to 2% over 30; at a mean BMI of 28, you’re up to 16%. So the fact that BMI growth is smooth doesn’t imply that obesity growth is smooth too.
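Here’s a minimal sketch of that calculation in Python (assuming, purely for illustration, that BMI is exactly normally distributed with SD 2):

```python
from scipy.stats import norm

# Fraction of the population with BMI > 30 as the mean shifts upward,
# holding the standard deviation fixed at 2.
for mean_bmi in (24, 26, 28):
    frac_over_30 = norm.sf(30, loc=mean_bmi, scale=2)  # P(BMI > 30)
    print(f"mean BMI {mean_bmi}: {frac_over_30:.2%} over 30")
```

A smooth, linear shift in the mean (24 → 26 → 28) produces the 0.13% → 2.3% → 15.9% explosion in the tail.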
It was 100mg 2x daily for 10 days.
On one hand: that’s higher than a typical dose for depression. On the other hand, I’m tempted to take the maximum safe amount. I think up to 300mg / day is sometimes prescribed for depression, but I really don’t know how the risk of various side effects increases with dosage.
So one option if you get Covid is to start at 50mg, see if you have any side effects, see how they compare to the severity of your Covid symptoms, and react accordingly. But in practice if I got Covid and had symptoms I would probably just do the 100mg 2x.
Nice, that’s cool to see.
FWIW I would personally totally take paxlovid over fluvoxamine if I could, but it seems to be in very short supply.
I have no idea. I guess it falls under “Unauthorized Practice of Medicine”.
Sorry yes, you should take it as evidence, just not definitive evidence.
Might be hard on your liver?
I’m sure you’ve seen it before, but just in case anyone hasn’t: https://www.gwern.net/Modafinil
You Can Get Fluvoxamine
Re: #2
They provide literally no evidence for the proposition that the Hadza eat as much sugar as Americans do; their citations certainly do not demonstrate this.
The cited paper has a chart showing that the Hadza get 14% of their calories from honey. A quick Google search claims honey has 17g of sugar per 64-calorie serving. So assuming 2000 calories/day, that’s 280 calories of honey, which contains about 74g of sugar.
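A quick sketch of that arithmetic (the 2000 calories/day and 17g-per-64-calorie figures are the rough assumptions above, not precise measurements):

```python
# Back-of-the-envelope: daily sugar from honey alone.
daily_calories = 2000                    # assumed total daily intake
honey_calories = 0.14 * daily_calories   # 14% of calories from honey = 280 kcal
sugar_grams = honey_calories * 17 / 64   # ~17g of sugar per 64-calorie serving
print(f"~{sugar_grams:.0f}g of sugar per day from honey")  # ~74g
```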
According to some Google results, Americans consume around 70 (the highest I saw was 77) grams of total sugar per day.
So it does seem to be similar. SMTM writes that this is “Combined with all the sugar they get from eating fruit”. The same paper says 19% of their calories are from berries and another 18% from baobab fruit.
So it seems entirely plausible that SMTM is correct here.
Command-f quote marks.
It’s highly suggestive that every single “quote” Sasha uses here to illustrate the supposed social norms of the EA/Rat community is invented. He literally could not find a single actual source to support his claims about what EA/Rats believe.
“don’t ever indulge in Epicurean style, and never, ever stop thinking about your impact on the world.”
Does GiveWell endorse that message on any public materials? Does OpenPhil? FHI? The only relevant EA writing I’m aware of (Scott Alexander, Ben Kuhn, Kelsey Piper) is about how that is specifically not the attitude they endorse.
Come on, this is pure caricature.
Very interested
Chiming in as someone who has consistently heard great things about his writing, but was personally put off by his politics.
I think it’s useful to understand:
- How he’s achieved this level of real-world influence, whether it’s conditional on engaging in “dark arts”, and if so, whether those “dark arts” have to be used for nefarious aims. For example, I would feel much more favorable towards his ability to manipulate the public if he were using it for causes I agree with on the object level. So either Dominic’s abilities only work for “evil”, which would be interesting to understand, or they’re actually general purpose with potential good uses as well.
- Whether rationality can be misused, and if so, whether this implies the rationality community has a responsibility to prevent similar occurrences in the future.
...the upshot being: I still haven’t read a lot of his stuff, but I feel somewhat guilty about this and plan to get around to it eventually.
Ranked Choice Voting is Arbitrarily Bad
Sorry! Fixed
Responses and Testimonies on EA Growth
This is great, thanks! Wish I had seen this earlier.
Thanks! I have an x-post here pending moderator approval https://forum.effectivealtruism.org/posts/dRkGXHxKGWwWY6AqP/why-hasn-t-effective-altruism-grown-since-2015-1
Right yeah, that’s exactly it.