No offense, but it’s not obvious to me why communicating to a general audience would be a net positive. How exactly do you expect this to help?
None taken; it’s a reasonable question to ask. It’s part of the broader problem of knowing whether anything will turn out good or bad (unintended consequences and such). To clarify: by “general audience” I don’t mean everyone, because most people don’t read many books, let alone non-fiction books, let alone non-fiction books that aren’t memoirs, biographies, or the like. So my loose model is that (1) there is a group of people who would care about this issue if they knew more about it, (2) their concerns would generate interest from those with more power, and (3) that interest could increase funding for AI safety and/or governance work that might help.
Expanding on (1), it could also increase the number of people who want to work on the issue, across a wide range of domains beyond technical work.
It’s also possible that the effort turns out net-positive but still insufficient; even then, it would have been worth trying.