My big concern with AI governance as it is currently being conducted is that the people doing it are having a correlated failure to notice and address the true risks and timelines. I’m not sure this makes them actively harmful to humanity’s survival, but it sure does limit their helpfulness.
For more details, see this comment on my AI timelines.
Thank you for your thoughts! I read through your linked comment, and everything you wrote seems plausible to me. In fact, I also have short timelines and think that AGI is around the corner. As for your second point, the correlated failure, I would be curious whether you are willing to give an example.
Also, what types of research questions do you think (long-term) AI governance researchers should address if we are right?
If your time is limited, getting your take on the second question would be most valuable.