Is this still feasible now?
ClipMonger
Will there be one of these for 2022?
I think the idea of dath ilan being better at solving racism than earth social media is really valuable, in basically every way that dath ilan stories are valuable (which is a wide variety of very different reasons). It should be covered again, at projectlawful at least. This is a huge deal: writing more of it can achieve a wide variety of goals, and it definitely isn’t something we should sleep on or let die here.
I don’t think putting it in the guide was a very good idea. It’s the unfamiliarity that makes people click away, not any lack of straightforwardness. All that’s required is a line saying something like “just read downward and it will make sense”, and people will figure it out on their own nearly 100% of the time.
Generally, this stuff needs to be formatted so that people don’t click away. It’s lame to be so similar to news articles, but that doesn’t change the fact that preventing people from clicking away is instrumentally convergent.
It’s been almost 6 months and I still mostly hear people using “infohazard” the original way. Not sure what’s going on here.
The pivotal acts proposed are extremely specific solutions to specific problems, applicable only in very specific scenarios where AI is clearly on the brink of vastly surpassing human intelligence. That should be clarified whenever they’re brought up: it’s a thought-experiment solution to a thought-experiment problem, and if it suddenly stops being a thought experiment, that’s great, because you have the solution on a silver platter.
Is 664 comments the most on any LessWrong post? I’m not sure how to sort by that.
Do you need any help distilling? I’m fine with working for free on this one, looks like a good idea.
I noticed that it’s been 3 months since this was posted. When can we expect more CFAR content?
I think it should be easier to share really good advice on LW, period, without needing a really strong justification other than it helps people out with things that will clearly hold them back otherwise.
Will we have to wait until Dec 2023 for the next update, or will the interval halve with each update: 6 months, then 3 months, then 6 weeks, then 3 weeks?
Probably best not to skip straight to List of Lethalities. But then again, that kind of approach was wrong for Politics is the Mind-Killer, where it turned out to be best to just have the person dive right in.
I think the idea is that it appeared similar to the author’s post on Putin’s speech, which took less work and was well received on LW.
I’ve heard about Soviet rationality; does anyone have a link to the LessWrong post? I can’t find it.
I definitely like this.
Look for ways to incorporate rationality practice into the things that you are already doing.
If you find that you’re too busy to do useful rationality practice, try thinking of “rationality” as any and all more effective approaches to the things that you’re already doing (instead of as an additional thing to add to the pile).
This is probably the most important known lesson of rationality, and all sorts of results-tested self-improvement gurus like James Clear converge on the same truth: finding new ways to implement a concept, daily, is the best way to acquire it for real. The same goes for programming: your education isn’t finished (or even really started) until you’ve written your own programs that help you out with various things.
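As a toy illustration of the kind of small self-written utility meant here (everything below is hypothetical, not from the original comment), a few lines of Python that summarize a plain-text to-do list:

```python
# Toy example: count done vs. pending items in a plain-text to-do list,
# where finished lines start with "x ". All names are illustrative.

def summarize_todo(lines):
    """Return (done, pending) counts for a list of to-do lines."""
    done = sum(1 for line in lines if line.strip().startswith("x "))
    pending = sum(
        1 for line in lines
        if line.strip() and not line.strip().startswith("x ")
    )
    return done, pending

if __name__ == "__main__":
    sample = ["x write draft", "review comments", "x reply to thread", ""]
    print(summarize_todo(sample))  # (2, 1)
```

The point isn’t this particular script; it’s that writing even a trivial tool for a problem you actually have is the “daily implementation” step that makes the concept stick.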
Just do it.
It sure would be nice if the best talking points were ordered by how effective they were, or ranked at all really. Categorization could also be a good idea.
Mossad was allegedly pretty successful at procuring large amounts of PPE from hostile countries: https://www.tandfonline.com/doi/full/10.1080/08850607.2020.1783620. They also ran covert contact tracing, and one way or another Israel’s case counts seemed pretty low until Omicron.
The first few weeks of COVID lockdowns went extremely well: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7675749/
ChatGPT also loves to post a massive copypasta about what LLMs are and why it doesn’t know about things that happened after 2021 (including saying “this was from 2013, therefore I don’t know anything about it, because I only know about things that happened in 2021 or earlier”).