Hi, I write AppliedDivinityStudies.com which you link to. A couple quick clarifications:
- The blog is not written by Alexey Guzey.
- In the piece you link I’m just taking Toby Ord’s estimates at face value to use them as a parameter, I haven’t given this a ton of thought.
But basically I do think AI Risk is important. I don't write about it because I don't have anything particularly smart to say. As you note, it's a complex topic, and I don't feel there's much value in me contributing unless I were to seriously invest in learning much more.
Once every couple of years or so, I feel bad about this and try to spend a few days learning much more. Given those experiences, I think it's reasonable for me to believe that I'm bad enough at thinking about AI Risk that I can justify not working on it full-time.
My contributions to the effort, if I have any, will mostly be in more abstract philosophical discourse. The post you link, for example, is about whether trying to accelerate scientific progress would be good for x-risk. I have more work coming up on whether we should expect an optimized dystopia to be worse than an optimized utopia is good.
My mistake! Fixed