I am interested in getting feedback on whether it seems worthwhile to advocate for better governance mechanisms (like prediction markets), in the hope that this might help civilization build common knowledge about AI risk more quickly, or might help civilization do a more “adequate” job of slowing AI progress by restricting unauthorized access to compute resources. Is this a good cause for me to work on, or is it too indirect, such that it would be better to try to convince people about AI risk directly? See a more detailed comment here: https://www.lesswrong.com/posts/PABtHv8X28jJdxrD6/racing-through-a-minefield-the-ai-deployment-problem?commentId=ufXuR5xtMGeo5bjon