Thanks for the comment. I agree and was already thinking along those lines.
It's a tricky, delicate issue: we need to put more work into figuring out what to do while communicating that the problem is urgent, but not so urgent that people act imprudently and make things worse.
Credibility is key and providing reasons for beliefs, like timelines, is an important part of the project.
Darren McKee
None taken, it’s a reasonable question to ask. It’s part of the broader problem of knowing whether anything will turn out good or bad (unintended consequences and such). To clarify a bit: by “general audience,” I don’t mean everyone, because most people don’t read many books, let alone non-fiction books, let alone non-fiction books that aren’t memoirs/biographies or the like. So, my loose model is that (1) there is a group of people who would care about this issue if they knew more about it, (2) their concern will lead to interest from those with more power, which will (3) increase funding for AI safety and/or governance in ways that might help.
Expanding on (1), it could also increase the number of people who want to work on the issue, across a wide range of domains beyond technical work.
It’s also possible that the effort is net-positive yet still insufficient, but worth trying all the same.