First, your article is very insightful and well-structured, and I really liked it.
But there is one thing that bugs me.
I am new to the AI alignment field, and I recently realized (perhaps mistakenly) that it is very hard to find a long-term, financially stable, full-time job in AI field-building.
To me, this basically means that only a tiny number of people consider AI alignment important enough to pay money to decrease P(doom). And at the same time, here we are talking about the possibility of doom within the next 10 or 20 years. That strikes me as a bit crazy.
I also think that sooner or later, as AIs become more and more capable, either some large Chernobyl-like tragedy caused by AI will happen, or some AI will become so powerful that it horrifies people. In my opinion, the probability of that is very high. I can already see how ChatGPT has spread some fear, and fear can spread like wildfire. If this happens too late for governments to react thoughtfully, it will introduce a large amount of risk and uncertainty. In my opinion, too much risk and uncertainty.
So, in my opinion, even if we educate the public and promote government regulation, and AGI appears before 2030, government policies might still suck. But if we don't do it, they might suck much more, which is even more dangerous.
Thanks for your comment!
I see your point about fear spreading and causing governments to regulate. I basically agree that if that's what happens, it's good to be in a position to shape the regulation in a positive way, or at least to try. That said, I'm still more optimistic about corporate governance, which seems more tractable to me than policy governance.