AI discourse triggers severe anxiety in me, and as a non-technical person in a rural area I don’t feel I have anything to offer the field. I went so far as to fully hide the AI tag from my front page, and frankly I’ve been on the threshold of blocking the site altogether given how much content still gets through via passing references and untagged posts. I like most of the non-AI content on the site, I’ve been checking in regularly since the big LW2.0 launch, and I’d consider it a loss of good reading material to stop browsing, but since DWD I feel like I’m taking my chances every time I browse here.
I don’t know how many readers out there are like me, but I think it at least warrants consideration that the AI doomtide acts as a barrier to entry for readers who would benefit from rationality content but can’t stomach the volume and tone of alignment discourse.
Yeah, this is a point I failed to make in my own comment: it’s not just that I’m not interested in AIS content or not technically up to speed, it’s that seeing it is often actively and extremely upsetting.
I’m sorry to hear that! Do you have any thoughts on ways to rephrase the AI content to make it less upsetting? Would it help to have news that emphasizes successes, so that you have frequent context that things are going relatively alright and picking up steam? In general, my view is that Yudkowskian paranoia about AI safety is detrimental in large part because it’s objectively wrong, and while it’s great for him to be freaked out about it, his worried view shouldn’t be frightening the rest of us. I’m quite excited for superintelligence, and I just want us to hurry up and get the safe version working so we can solve a bunch of problems. IMO you should feel able to be comfortable that AI right now is pretty much nothing but super cool.
[edit to clarify: this is not to say the problem isn’t hard; it’s that I really do think the capabilities folks know that safety and capabilities were always the same engineering task]
Thank you for writing this comment. Just so you know, you can probably contribute to the field, if that’s your desire. I would start by joining a community where you’ll be happy and where people are working seriously on the problem.
I feel like you mean this in kindness, but to me it reads as “You could risk your family’s livelihood relocating and/or trying to get recruited to work remotely so that you can be anxious all the time! It might help on the margins ¯\_(ツ)_/¯ ”
Why would you risk your family’s livelihood? That doesn’t seem like a good idea. And why would you go somewhere that you’d be anxious all the time?
Yes, that’s my point. I’m not aware of a path to meaningful contribution to the field that doesn’t involve either doing research or doing support work for a research group. Neither is accessible to me without risking the aforementioned effects.
Yeah, that’s right. It does seem like work in alignment at the moment is largely about research, so a lot of the options come down to doing or supporting research.
I would just note that there is a relatively huge amount of funding in the space at the moment: OpenPhil and FTX are both open to injecting large sums and largely don’t have enough places to put it. It’s not that it’s easy to get funded, I wouldn’t say it’s easy at all, but it really does seem like the basic conditions in the space are such that one would expect to find a lot of opportunities to be funded to do good work.
This reader is a software engineer with over a decade of experience. I’m paid handsomely and live in a remote rural area. I am married with three kids. The idea that my specialized experience building SaaS products in Scala would somehow port over to AI research seems ludicrous. I’m certain I’m cognitively capable of contributing to AI research, but I’d be leaving a career where I’m compensated based on my experience for one where I’d be starting over.
Surely OpenPhil and FTX would not match my current salary in order to start my career over, all while allowing me to remain in my current geography (instead of uprooting my kids from friends and school)? It seems unlikely I’d have such a significant leg up over a recent college graduate with a decent GPA so as to warrant matching my software engineering salary.
Right—you probably could contribute to AI alignment, but your skills mostly wouldn’t port over, and you’d very likely earn less than your current job.
I’ll say one thing: I too dislike the AI doomtide/doomerism, despite thinking it’s a real problem. You can take breaks from LW, or hide AI posts from your frontpage, if it’s upsetting you.