But I (and I think others on LW team although for slightly different reasons) have been thinking about building a feature directly into LW to facilitate it.
Maybe consider making it super easy (one click easy) to export LW posts to google docs?
ACX is probably a better reference class: https://astralcodexten.substack.com/p/2023-subscription-drive-free-unlocked. In January, ACX had 78.2k readers, of whom 6.0k were subscribers, for a 7.7% subscription rate.
I think it might be good to normalize “just try stuff until something fixes your condition” as one of the treatment strategies. I guess it’s a bit ironic that Dr. Spray-n-pray’s indifference toward which pill worked and why seems so epistemically careless, while actually maybe being a correct way to orient toward success when you are optimizing for luck and have little reliable information.
Russian military doctrine allows the usage of nuclear weapons to defend Russian territory.
This is ~false. See: https://forum.effectivealtruism.org/posts/TkLk2xoeE9Hrx5Ziw/nuclear-attack-risk-implications-for-personal-decision?commentId=ukEznwTnD78wFdZip#ukEznwTnD78wFdZip
Here is a google sheet.
I want to mention that Tsvi Benson-Tilsen is a mentor at this summer’s PIBBSS. So some readers might consider applying (the deadline is Jan 23rd).
I myself was mentored by Abram Demski once through the FHI SRF, which AFAIK matched fellows with a large pool of researchers based on mutual interests.
I am looking for text-to-speech tools for various contexts. As of now, I am using
@Voice Aloud Reader (TTS Reader) and a custom script to extract articles from webpages for Android (supports .epub and .pdf as well);
Capti Voice on my desktop for everything.
I would appreciate it if the ToC linked to the web versions of the essay.
A follow-up (h/t LW review). I got quite a bit out of the workshop, most importantly:
I found a close friend and collaborator, whom I don’t think I would have met otherwise.
I found a close friend and co-founder, whom I likely would have met otherwise, but we probably wouldn’t have formed a strong enough bond by COVID times.
There was much more, but it is much less legible and “evaluatable.” I think ESE was excellent, and I would have done it even if I had known I wouldn’t get two close friendships out of it.
Or, to change tack: the operating budget of the LessWrong website has historically been ~$600k, and this budget is artificially low because the site has paid extremely below-market salaries. Adjusting for the market value of the labor, the cost is more like $1M/year, or $2,700/day. If I assume LessWrong generates more value than the cost required to run it, I estimate that the site provides at least $2,700/day in value, probably a good deal more.
I think this estimate is mistaken because it ignores marginalism: basically, the cost of disabling LW for a year is much larger than 365 * the cost of disabling LW for only a day. The same goes for disabling the whole website vs. disabling only the frontpage.
(Sorry for adding salt to hurt feelings; posting because impact evaluation of longtermism projects is important.)
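To illustrate the marginalism point with a toy model (my own made-up numbers and functional form, not an estimate of LW’s actual value): if the harm of an outage grows superlinearly with its length, say because ongoing discussions, projects, and habits erode, then a year-long outage costs far more than 365 separate one-day outages.

```python
# Toy illustration of the marginalism point (hypothetical numbers).
# Assumes a convex outage cost: the longer the site is down, the more
# each additional day hurts, so cost(365 days) >> 365 * cost(1 day).

def outage_cost(days: float, daily: float = 2700, erosion: float = 1.3) -> float:
    """Hypothetical convex cost of disabling the site for `days` days."""
    return daily * days ** erosion

one_day = outage_cost(1)
one_year = outage_cost(365)
ratio = one_year / (365 * one_day)
print(f"A year-long outage costs {ratio:.1f}x more than 365 one-day outages")
```

The exponent 1.3 is arbitrary; any exponent above 1 makes the same qualitative point, which is why dividing a yearly budget by 365 understates the value of a single day.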
Maybe reading Gelman’s self-contained comments on SSC’s More Confounders would make you more confused in a good way.
Hey! Could you say more about the causal link between the Sequences and writing these papers, please:
I was able to do from muscle memory certain calculations about conditional probability and expectation that might have taken weeks otherwise (if we figured them out at all). I attribute this ability in large part to reading the Sequences.
I think my confusion comes from (a) having enough math background (I read some chapters of The Probabilistic Method years ago); and (b) while reading the Sequences (and, even more so, AF discussions) added to my understanding of formal epistemology, I am surprised by your emphasis on how the Sequences affected your muscle memory and ability to do calculations.
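For readers unfamiliar with the kind of calculation being discussed, here is a generic conditional-probability example (illustrative only; not the computation from the papers):

```python
# Generic Bayes-rule calculation: update a prior P(H) on evidence E,
# given the likelihoods P(E|H) and P(E|~H). Numbers are illustrative.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H|E) = P(H)P(E|H) / [P(H)P(E|H) + P(~H)P(E|~H)]."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# A 1% prior with a 90%-sensitive, 5%-false-positive test:
print(posterior(0.01, 0.9, 0.05))  # ~0.154
```

Being able to run this kind of update from muscle memory, rather than rederiving it each time, is the skill the parent comment attributes to the Sequences.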
As this answer got upvoted, I collected some Dubna courses taught in English for which recordings are available (look for “Доступны 4 видеозаписи курса,” i.e. “4 video recordings of the course are available”).
https://t.me/mathtabletalks (in Russian)
https://www.mccme.ru/dubna/eng/ (a summer school in Russia aimed at teaching advanced topics to high schoolers and early undergrads; hundreds of recordings and a dozen sets of edited notes from the school are available in Russian)
http://math.jacobs-university.de/summerschool/ (a European version of the above; I don’t know much about it, adding just in case)
The Metaculus 2020 U.S. Election Risks Survey doesn’t directly give >1% for >5000 deaths, but I think it is justified to infer something like that from it:
While large-scale violence and military intervention to quell civil unrest seem unlikely, experts still judged these possibilities to be far from remote. Experts predicted a median of 60 deaths occurring due to election-related violence, with an 80% confidence interval of 0 to 912 fatalities that reflects a high degree of uncertainty. Still, the real possibility of violence is a notable departure from the peaceful transitions that have been the hallmark of past U.S. elections. Results indicate an 8% probability of over 1,000 election-related deaths — suggesting that while widespread sustained clashes are unlikely, this possibility warrants real concern. Experts assigned a 10% median prediction that President Trump will invoke the Insurrection Act to mobilize troops during the transition period.
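One way to make that inference concrete (my own back-of-the-envelope, not from the survey itself): fit a lognormal to the reported median (60 deaths) and the upper end of the 80% interval (912 deaths, i.e. roughly the 90th percentile; the lower end of 0 is incompatible with a lognormal, so it is ignored here) and read off the tail probability.

```python
# Hypothetical sketch: fit a lognormal to the survey's median and
# 90th percentile, then compute P(deaths > 5000). This is my own
# extrapolation, not something the survey reports.
from math import log
from statistics import NormalDist

median, p90, threshold = 60, 912, 5000

mu = log(median)                     # lognormal median -> mu of underlying normal
z90 = NormalDist().inv_cdf(0.9)      # ~1.2816
sigma = (log(p90) - mu) / z90        # match the 90th percentile

p_exceed = 1 - NormalDist(mu, sigma).cdf(log(threshold))
print(f"P(deaths > {threshold}) ~ {p_exceed:.1%}")  # roughly 2%
```

Under this (debatable) distributional assumption, the implied tail probability comes out near 2%, which is why I read the survey as supporting >1% for >5000 deaths.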
A better example: one might criticize the CDC for its lack of advice aimed at vulnerable demographics. But the absence might result not from a lack of judgment but from political constraints. E.g., jimrandomh writes:
Addendum: A whistleblower claims that CDC wanted to advise elderly and fragile people to not fly on commercial airlines, but removed this advice at the White House’s direction.
Upd: this might be indicative of other negative characteristics of the CDC (which might contribute to its unreliability), but I don’t know enough about the US government to assess it.
There is already a lot of automatic censoring happening. I am unsure how much LLMs add on top of the existing, fairly successful techniques from spam filtering. And running LLMs over everything is probably prohibitively expensive at the scale of social media (definitely for tech companies, maybe not for governments), but perhaps you can get an edge for some use cases with them.