I’m sorry to hear that! Do you have any thoughts on ways to rephrase the AI content to make it less upsetting? Would it help to have news that emphasizes successes, so that you have frequent context that things are going relatively alright and picking up steam? In general, my view is that Yudkowskian paranoia about AI safety is detrimental in large part because it’s objectively wrong; while it’s fine for him to be freaked out about it, his worried view shouldn’t be frightening the rest of us. I’m quite excited for superintelligence, and I just want us to hurry up and get the safe version working so we can solve a bunch of problems. IMO you should feel free to see AI right now as pretty much nothing but super cool.
[edit to clarify: this is not to say the problem isn’t hard; it’s that I really do think the capabilities folks know that safety and capabilities have always been the same engineering task]