A “Core Views on AI Safety” post is now available at https://www.anthropic.com/index/core-views-on-ai-safety
(Linkpost for that is here: https://www.lesswrong.com/posts/xhKr5KtvdJRssMeJ3/anthropic-s-core-views-on-ai-safety.)
I’ve run Hamming circles within CFAR contexts a few times, and once outside. Tips from that outside run:
Timing can be tricky here! If you do four rounds of 20 minutes each, with breaks, and you’re doing this in an evening, then by the time you get to the last person, people might be tired.
Especially so if you started with the Hamming Questions worksheet exercise (linked as a prerequisite at the top of the post).
I think next time I would drop to 15 minutes each, and keep the worksheet.
Thanks for the writeup! The first paper covers the first half of the video series, more or less. I’ve been working on a second paper which will focus primarily on the induction bump phenomenon (and other things described in the second half of the video series), so much more to come there!
I appreciate the concept of “Numerical-Emotional Literacy”. In fact, this is what I personally think/feel the “rationalist project” should be. To the extent I am a “rationalist”, what I mean by that is precisely this: knowing what I value, and pursuing numerical-emotional literacy around it, is important to me.
To make in-line adjustments, grab a copy of the spreadsheet (https://www.microcovid.org/spreadsheet) and do anything you like to it!
Also, if you live alone and don’t have any set agreements with anyone else, then the “budgeting” lens is sort of just a useful tool to guide thinking. Absent pod agreements, as an individual decisionmaker, you should just spend uCoV when it’s worth the tradeoff, and not when it’s not.
You could think about it as an “annualized” risk, more than an “annual” risk: more like “192 points per week, in a typical week, on average,” where it kind of amortizes out, and less like “you have 10k and once you spend it you’re done.”
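For concreteness, here’s a minimal sketch of that amortized-budget framing. The 10,000-point annual budget is the calculator’s standard ~1%-per-year risk budget; the rest is just arithmetic:

```python
# Amortized microCOVID budget: an annual allowance spread over weeks,
# not a stockpile that runs out.
ANNUAL_BUDGET = 10_000  # microCOVIDs per year (~1% annual infection risk)

weekly_budget = ANNUAL_BUDGET / 52  # ~192 points in a typical week

def weeks_to_recover(overspend_points: float) -> float:
    """If you spend past the weekly pace, how many average weeks of
    zero extra activity bring you back onto the annual pace?"""
    return overspend_points / weekly_budget

print(f"Weekly pace: ~{weekly_budget:.0f} points")
print(f"A 500-point week costs ~{weeks_to_recover(500 - weekly_budget):.1f} quiet weeks to amortize")
```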
There is now a wired article about this tool and the process of creating it: https://www.wired.com/story/group-house-covid-risk-points/
I think the reporter did a great job of capturing what an “SF group house” is like and how to live a kind of “high IQ / high EQ” rationalist-inspired life, so this might be a thing one could send to friends/family about “how we do things”.
It’s not just Dario, it’s a larger subset of OpenAI splitting off: “He and a handful of OpenAI colleagues are planning a new project, which they tell us will probably focus less on product development and more on research. We support their move and we’re grateful for the time we’ve spent working together.”
I heard someone wanted to know about usage statistics for the microcovid.org calculator. Here they are!
Sorry to leave you hanging for so long Richard! This is the reason why in the calculator we ask about “number of people typically near you at a given time” for the duration of the event. (You can also think of this as a proxy for “density of people packed into the room”.) No reports like that that I’m aware of, alas!
Want to just give credit to all the non-rationalist coauthors of microcovid.org! (7 non-rationalists and 2 “half-rationalists”?)
I’ve learned a LOT about the incredible power of trusted collaborations between “hardcore epistemics” folks and much more pragmatic folks with other skillsets (writing, UX design, medical expertise with ordinary people as patients, etc). By our powers combined we were able to build something usable by non-rationalist-but-still-kinda-quantitative folks, and are on our way to something usable by “normal people” 😲.
We’ve been able to get a lot more scale of distribution/usage/uptake with a webapp, than if we had just released a spreadsheet & blogpost. And coauthors put everything I wrote through MANY rounds of extensive writing/copy changes to be more readable by ordinary folks. We get feedback often that we’ve changed someone’s entire way of thinking about risks and probabilities. This has surprised and delighted me. And I think the explicit synthesis between rationalist and non-rationalist perspectives on the team has been directly helpful.
Also, don’t forget to somehow factor “kicking off a chain of onward infections” into your COVID avoidance price. You can’t stop at valuing “cost of COVID to *me*”.
We don’t really know how to do this properly yet, but see discussion here: https://forum.effectivealtruism.org/posts/MACKemu3CJw7hcJcN/microcovid-org-a-tool-to-estimate-covid-risk-from-common?commentId=v4mEAeehi4d6qXSHo#No5yn8nves7ncpmMt
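One crude way to sketch it (very much a toy model, not a settled method, and the reproduction number here is an assumed placeholder): if each infection causes R further infections on average, with R < 1, the expected size of the chain you kick off is the geometric series R + R² + … = R / (1 − R) additional infections.

```python
# Crude sketch: scale your personal cost of COVID by the expected
# number of onward infections in the chain you might start.
# Assumes a constant effective reproduction number R < 1 (made-up value).
R = 0.8  # assumed effective reproduction number under current conditions

# Expected additional infections downstream of yours:
# R + R^2 + R^3 + ... = R / (1 - R), valid for R < 1
onward = R / (1 - R)

personal_cost = 1.0  # your own cost of getting COVID, in whatever units
total_cost = personal_cost * (1 + onward)

print(f"Expected onward infections: {onward:.1f}")
print(f"Multiplier on your personal cost: {1 + onward:.1f}x")
```

This is naive in at least two ways: the people downstream have their own (different) costs, and they are making their own risk choices; the linked discussion gets into why a clean answer is hard.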
Sadly nothing useful. As mentioned here (https://www.microcovid.org/paper/2-riskiness#fn6) we think it’s not higher than 10%, but we haven’t found anything to bound it further.
“I’ve heard people make this claim before but without explaining why. [...] the key risk factors for a dining establishment are indoor vs. outdoor, and crowded vs. spaced. The type of liquor license the place has doesn’t matter.”
I think you’re misunderstanding how the calculator works. All the saved scenarios do is fill in the parameters below. The only substantial difference between “restaurant” and “bar” is that we assume bars are places people speak loudly. That’s all. If the bar you have in mind isn’t like that, just change the parameters.
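To make that concrete, here’s a toy sketch of how a saved scenario works: each preset is just a bundle of parameter defaults, and “bar” vs. “restaurant” differ only in the voice setting. The multiplier values below are illustrative, loosely following the calculator’s normal-vs-loud-talking adjustment, not its exact internals:

```python
# Toy model of saved scenarios: each preset is just a dict of parameters.
# Voice multipliers are illustrative (loosely following the calculator's
# normal-vs-loud-talking adjustment); every field is user-editable.
VOICE_MULTIPLIER = {"silent": 0.2, "normal": 1.0, "loud": 5.0}

SCENARIOS = {
    "restaurant": {"setting": "indoor", "voice": "normal", "distance_ft": 6},
    "bar":        {"setting": "indoor", "voice": "loud",   "distance_ft": 6},
}

def voice_risk(params: dict) -> float:
    """Relative risk contribution from the voice parameter alone (toy)."""
    return VOICE_MULTIPLIER[params["voice"]]

# A quiet bar is just the "bar" preset with one parameter changed:
quiet_bar = {**SCENARIOS["bar"], "voice": "normal"}
print(voice_risk(SCENARIOS["bar"]) / voice_risk(quiet_bar))  # 5.0
```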
entry-level leadership
It has become really salient to me recently that good practice involves lots of prolific output in low-stakes throwaway contexts, whereas a core piece of EA and rationalist mindsets is steering towards high-stakes things to work on, and treating your outputs as potentially very impactful and not to be thrown away. In my own mind, “practice mindset” and “impact mindset” feel very directly in tension.
I have a feeling that something around this mindset difference is part of why world-saving orientation in a community might be correlated with inadequate opportunities for low-stakes leadership practice.
COI: I work at Anthropic
I confirmed internally (which felt personally important for me to do) that our partnership with Palantir is still subject to the same terms outlined in the June post “Expanding Access to Claude for Government”:
The contractual exceptions are explained here (very short, easy to read): https://support.anthropic.com/en/articles/9528712-exceptions-to-our-usage-policy
The core of that page is as follows, emphasis added by me:
This is all public (in Anthropic’s up-to-date support.anthropic.com portal). Additionally, it was all laid out when Anthropic first announced its intentions and approach around government use in June.