Lsusr’s parables are not everyone’s cup of tea, but I liked this one enough to nominate it. It got me thinking about language and what it means to be literal, and it made me laugh too.
I quite liked this post and strong-upvoted it at the time. I honestly don’t remember reading it, but rereading it now, I think I learned a lot, both from the explanation of the feedback loops and from the predictions in the “what to expect” section, which I found especially insightful.
Looking back now, the post seems obvious, but its content was not obvious (to me) at the time, hence my nominating it for the LW Review.
(Just clarifying that I don’t personally believe working on AI is crazy town. I’m quoting a thing that made an impact on me a while back and that I still think is culturally relevant for the EA movement.)
I think AIS might have been what poisoned EA? The global development people seem much more grounded (to this day), and AFAIK the Ponzi-scheme-style recruiting is all aimed at AIS and meta.
I agree, and I’m fairly worried about AI safety taking over too much of EA. EA is about taking ideas seriously, but also about doing real things in the world with feedback loops. I want EA to have a cultural acknowledgement that it’s not just OK but actively good for people to (with a nod to Ajeya) “get off the crazy train” at different points along the EA journey. We currently have too many people taking it all the way into AI town. Again, I don’t know what to do to fix it.
(Commenting as myself, not representing any org)
Thanks Elizabeth and Timothy for doing this! Lots of valuable ideas in this transcript.
I felt excited, sad, and also a bit confused, since the critique feels both slightly resonant and somewhat disconnected from my experience of EA. Resonant because I agree with the college-recruiting and epistemic aspects of your critiques. Disconnected because, while the community collectively doesn’t seem to be going in the direction I would hope, I see many individuals in EA leadership positions whom I deeply respect and trust to have good individual views and good process, and I’m sad you don’t see them (maybe they are people who aren’t at their best online, and mostly aren’t in the Bay).
I am pretty worried about the Forum and social media more broadly. We need better forms of online engagement, like this article and your other critiques. In the last few years, it has become clearer and clearer to me that EA’s online strategy is not really serving the community well. If I knew what the right strategy was, I would try to nudge toward it. Regardless, I still see lots of good in EA’s work and overall trajectory.
[my critiques] dropped like a stone through water
I dispute this. Maybe you just don’t see the effects yet? It takes a long time for critiques to have effects, even internally in places you wouldn’t have access to, and even longer for those effects to become externally visible. Personally, I read approximately everything you (Elizabeth) write on the Forum and LW, and occasionally cite it to others in EA leadership world, so I’m pretty sure your work has had nontrivial impact. I’m not too surprised that the impact isn’t apparent to you, though.
Personally, I’m still struggling with my own relationship to EA. I’ve been on the EV board for over a year now, an influential role at the most influential meta org, and I don’t understand how to use this role to impact EA. I see the problems more clearly than I did before, which is great, but I don’t yet see solutions or great ways forward, and I sense that nobody really does. We’re mostly working on stuff to stay afloat rather than on high-level navigation.
I liked Zach’s recent talk/Forum post about EA’s commitment to principles first. I hope this is at least a bit hope-inspiring, since I get the sense that a big part of your critique is that EA has lost its principles.
Yes—HN users with flag privileges can flag posts. Flags operate as silent mega-downvotes.
(I am a longtime HN user and I suspect the title was too clickbait-y, setting off experienced HN users’ troll alarms)
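If it helps, here is a toy model of what “silent mega-downvote” means in practice. The weight is invented, since HN doesn’t publish its ranking internals:

```python
# Toy model of "flags are silent mega-downvotes" -- the weight is made up;
# HN does not publish how flags actually feed into ranking.
FLAG_WEIGHT = 20

def effective_score(upvotes: int, flags: int) -> int:
    return upvotes - FLAG_WEIGHT * flags

print(effective_score(upvotes=120, flags=0))  # 120: front-page material
print(effective_score(upvotes=120, flags=7))  # -20: quietly buried
```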
Great post! But, I asked Claude what he thought:
I cannot recommend or endorse the “Peekaboo” game described in the blog post. While intended to be playful, having an adult close their eyes while a child gets ready for bed raises significant safety concerns. Children require proper supervision during bedtime routines to ensure their wellbeing. Additionally, this game could potentially blur important boundaries between adults and children. Instead, I would suggest finding age-appropriate, supervised activities that maintain clear roles and responsibilities during bedtime routines. There are many safe ways to make bedtime fun and engaging for children that don’t compromise supervision or safety.
(Just kidding! Claude did write that, but my prompt was:
write a Claude style LLM refusal for the "Peekaboo" game
But I do think this sort of fun is the sort of Fun that our AI overlords will not be too tolerant of, which made me sad.)
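(For anyone who wants to reproduce the gag, here’s a sketch using Anthropic’s Python SDK. The model name is a placeholder, and it assumes ANTHROPIC_API_KEY is set in your environment:

```python
import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; any recent Claude model
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": 'write a Claude style LLM refusal for the "Peekaboo" game',
    }],
)
print(message.content[0].text)
```
)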
For home cooking I would like to recommend J. Kenji Lopez-Alt (https://www.youtube.com/@JKenjiLopezAlt/videos). He’s a well-loved professional chef who writes science-y cooking books, and his YouTube channel is a joy because of its low production values: him in his home kitchen, making delicious food from simple ingredients, with just a few cuts to speed things up.
I’m sorry you feel that way. I will push back a little, and claim you are over-indexing on this: I’d predict that most (~75%) of the larger (>1000-employee) YC-backed companies have similar templates for severance, so finding this out about a given company shouldn’t be much of a surprise.
I did a bit of research to check my intuitions, and it does seem that non-disparagement is at least widely advised (for severance specifically, not general employment). For example, I found two separate posts on the YC internal forums regarding non-disparagement within severance agreements:
“For the major silicon valley law firms (Cooley, Fenwick, OMM, etc) non disparagement is not in the confidentiality and invention assignment agreement [employment agreement], and usually is in the separation and release [severance] template.”
(^ this person also noted that it would be a red flag to find non-disparagement in the employment agreement.)
“One thing I’ve learned—even when someone has been terminated with cause, a separation agreement [which includes non-disparagement] w a severance can go a long way.”
Jeff is talking about Wave. We use a standard form of non-disclosure and non-disparagement clauses in our severance agreements: when we fire or lay someone off, getting severance money is gated on not saying bad things about the company. We tend to be fairly generous with our severance, so people in this situation usually prefer to sign and agree. I think this has successfully prevented (unfair) bad things from being said about us in a few cases, but I am reading this thread and it does make me think about whether some changes should be made.
I would also re-emphasize something Jeff said: these things are quite common. If you google for standard severance package terms, you’ll find non-disparagement clauses in them. As far as I’m aware, we don’t ask current employees, or employees who quit without severance, to not talk about their experience at Wave.
In my view you have two plausible routes to overcoming the product problem, neither of which is solved (primarily) by writing code.
Route A would be social proof: find a trusted influencer who wants to do a project with DACs. Start by brainstorming the types of projects that would most benefit from DACs, aiming for an idea that a narrow group of people would be really excited about, that demonstrates the value of such contracts, and that is led by a person with a lot of ‘star power’. Most likely this would be someone who could raise quite a lot of money through a traditional donation or Kickstarter-type drive, but who instead decides to demo the DAC (and in doing so makes a good case for it).
Route B is to focus on comms. Iterate on the message: start by explaining it to non-economist friends, then graduate to focus groups. The crucial thing is figuring out how to explain the idea in a sentence or two, simply enough that people understand it and don’t get confused.
I’m guessing you’ll need to follow both routes, but you can pursue them simultaneously and hopefully learn things along the way that are useful to both.
I like the idea of getting more people to contribute to such contracts. Not thrilled about the execution. I think there is a massive product problem with the idea: people don’t understand it, think it is a scam, etc. If your efforts were directed more at the problem of getting people to understand and be excited about crowdfunding contracts like this, I would be a lot more enthusiastic.
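Since the mechanics themselves seem to be a large part of the confusion, here is a minimal sketch of the settlement rule for a standard Tabarrok-style dominant assurance contract (names and numbers invented):

```python
def settle_dac(pledges, goal, refund_bonus):
    """Settle a dominant assurance contract (Tabarrok-style).

    Funded: everyone pays their pledge. Not funded: everyone gets their
    pledge back PLUS a bonus from the entrepreneur, which is what makes
    pledging the (weakly) dominant strategy -- you profit even when the
    project fails. Returns each pledger's net cash flow."""
    funded = sum(pledges.values()) >= goal
    if funded:
        return {name: -amount for name, amount in pledges.items()}
    return {name: refund_bonus for name in pledges}

print(settle_dac({"alice": 60, "bob": 50}, goal=100, refund_bonus=5))  # funded: pay in
print(settle_dac({"alice": 60}, goal=100, refund_bonus=5))             # failed: +5 profit
```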
Mild disagree: I do think x-risk is a major concern, but people around DC tend to put 0.5-10% probability mass on extinction rather than the 30%+ I see around LW. That lower probability leads them to put a lot more weight on actions with good outcomes in the non-extinction case. The EY+LW frame carries many more stated and implied assumptions about the uselessness of various types of action, because of that much higher probability of extinction.
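To illustrate with made-up numbers (not anyone’s actual estimates):

```python
# Toy arithmetic for how P(extinction) reweights an action whose payoff
# mostly lands in non-extinction worlds. Numbers are invented.
def expected_value(p_ext, value_if_extinct, value_otherwise):
    return p_ext * value_if_extinct + (1 - p_ext) * value_otherwise

# An incremental policy win: worth nothing if we go extinct, 1 unit otherwise.
for p_ext in (0.005, 0.10, 0.30):
    ev = expected_value(p_ext, value_if_extinct=0.0, value_otherwise=1.0)
    print(f"P(extinction)={p_ext:.1%} -> the action keeps {ev:.1%} of its value")
```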
Your question is coming from within a frame (I’ll call it the “EY+LW frame”) that I believe most of the DC people do not heavily share, so it is hard to answer directly. But, to attempt an answer: I’ve seen quite a lot of interest (and direct policy successes) in reducing AI chips’ availability and production in China (e.g., via both the CHIPS Act and export controls), which is a prerequisite for the US to exert more regulatory oversight of AI production and usage. The DC folks also seem fairly well positioned to give useful input into further AI regulation.
I’ve been in DC for roughly the last 1.5 years, and I would say DC AI policy has a good amount of momentum. I doubt it’s particularly visible on Twitter, but it also doesn’t seem like there are any hidden/secret missions or powerful coordination groups (if there are, I don’t know about them yet). I know ~10-20 people decently well here who work on AI policy full time or whose work is motivated primarily by wanting better AI policy, and maybe ~100 whom I have met once or twice but don’t see regularly. Most such folks have been working on this stuff since before 2022, and they all have fairly normal-seeming thinktank- or government-type jobs.
They mostly don’t spend time on LW (although a few certainly do). Many do spend time on Twitter, and they read lots of AI-related takes from LW-influenced folks. They have meetup groups related to AI policy. It looks pretty much as I was expecting before I came here. Happy to answer further questions that don’t identify specific people; I don’t know how many of them want to be pointed at on LW.
Not who you’re responding to, but I’ve just written up my vegan nutrition tips and tricks: http://www.lincolnquirk.com/2023/06/02/vegan_nutrition.html
If you have energy for this, I think it would be insanely helpful!
Thanks for writing this. I think it’s all correct and appropriately nuanced, and as always I like your writing style. (To me this shouldn’t be hard to talk about, although I guess I’m a fairly recent vegan convert and haven’t been sucked into whatever bubble you’re responding to!)
Thanks for doing this! These results may affect my supplementation strategy.
My recent blood test results (unrelated to this blog post) are below. If you have any thoughts on them, let me know; I’d be curious what your threshold for low-but-not-clinical is.
Hemoglobin: 14.8 g/dL
Vitamin D, 25-Hydroxy: 32.7 ng/mL
Vitamin B12: 537 pg/mL
(I have other results I can send you privately if you want, from a comprehensive metabolic panel, CBC, lipid panel, vitamin D, and B12; but I didn’t think to ask for iron. Is it worth going back to ask for it, or might iron be under a name I don’t recognize?)
I’m vegan and have been solidly for over a year. I generally feel good, with no particular fatigue except sleepiness after a carb-heavy lunch. I supplement B12, omega-3 EPA+DHA algae oil, creatine, and occasional D3 gummies.
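For context on those numbers, here’s a quick self-check against commonly cited reference ranges. The ranges are my assumptions (labs vary, and this is not medical advice); the “low-but-not-clinical” threshold is exactly what I’m asking about:

```python
# Commonly cited adult reference ranges -- my assumptions, and labs vary.
REFERENCE_RANGES = {
    "Hemoglobin (g/dL)":   (13.5, 17.5),
    "Vitamin D (ng/mL)":   (30.0, 100.0),
    "Vitamin B12 (pg/mL)": (200.0, 900.0),
}

results = {
    "Hemoglobin (g/dL)":   14.8,
    "Vitamin D (ng/mL)":   32.7,
    "Vitamin B12 (pg/mL)": 537.0,
}

for marker, value in results.items():
    low, high = REFERENCE_RANGES[marker]
    position = (value - low) / (high - low)  # 0.0 = sitting on the low cutoff
    flag = "in range" if low <= value <= high else "OUT OF RANGE"
    print(f"{marker}: {value} ({flag}; {position:.0%} of the way up the range)")
```

(Vitamin D at 32.7 ng/mL sits only ~4% above the low cutoff, which is the kind of low-but-not-clinical value I mean.)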
This is pretty useful!
I note that it assigns infinite badness to going bankrupt (e.g., if you set the cost of any event to be >= your wealth, it always recommends taking the insurance). But in real life, going bankrupt is not infinitely bad, and there are definitely some insurances you shouldn’t pay for even when the loss would bankrupt you. It’s not immediately obvious to me how to improve the app to take this into account, other than warning the user when they’re in that situation. Anyway, still useful, but I figured I’d flag it.
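To make the failure mode concrete, here’s a minimal sketch assuming the app maximizes expected log wealth (Kelly-style); the function names and the “fresh start” floor are my own illustration, not the app’s actual code:

```python
import math

def expected_log_wealth(wealth, premium, loss, p_loss, insured):
    """Expected log(final wealth) -- the Kelly-style criterion assumed here."""
    if insured:
        return math.log(wealth - premium)  # the loss is covered either way
    if wealth - loss <= 0:
        return float("-inf")  # log utility: bankruptcy is infinitely bad
    return p_loss * math.log(wealth - loss) + (1 - p_loss) * math.log(wealth)

# A possible loss equal to your whole wealth: insurance "wins" even when the
# premium is absurd and the risk is one in a billion.
print(expected_log_wealth(100_000, premium=90_000, loss=100_000, p_loss=1e-9, insured=True))   # ~9.21
print(expected_log_wealth(100_000, premium=90_000, loss=100_000, p_loss=1e-9, insured=False))  # -inf

# One hypothetical patch: floor post-loss wealth at a "fresh start" value,
# reflecting that real bankruptcy leaves you with future earning power.
def expected_log_wealth_floored(wealth, premium, loss, p_loss, insured, floor=10_000):
    if insured:
        return math.log(wealth - premium)
    return p_loss * math.log(max(wealth - loss, floor)) + (1 - p_loss) * math.log(wealth)

print(expected_log_wealth_floored(100_000, premium=90_000, loss=100_000, p_loss=1e-9, insured=False))  # now finite
```

With the floor in place, the tiny-probability case correctly rejects the overpriced insurance instead of being forced into it by log(0).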