This is a concern I take seriously. While it is possible that increasing public awareness of risks from AI will make things worse overall, I think a more likely outcome is that it will be neutral to good. I will strive to do justice to the positions and concerns people in the community have (while acknowledging that there is disagreement within the community).
A few thoughts on this:
a) I expect that addressing public misconceptions about AGI would be good.
b) I think it’s important to explain some of the challenges of policy action and why it’s very hard to identify policies that actually increase our chances of success[1]. I’d also emphasise the importance of developing a nuanced understanding of these issues, in contrast to causes where it is easier to simply dive in and start doing things. In particular, I’d explain how certain actions have the potential to backfire, even if you don’t think they actually would. And I would consider mentioning the unilateralist’s curse and how producing low-quality content can cost us credibility.
I think it would be possible for this project to be net-positive, but you’d have to navigate these issues carefully.
Even increasing research funding might not do any good if it mostly ends up elevating work that is a dead end, or that otherwise dilutes the efforts of those doing valuable work.
Thanks for the comment. I agree and was already thinking along those lines.
It is a very tricky, delicate issue: we need to put more work into figuring out what to do, while communicating that the problem is urgent, but not so urgent that people act imprudently and make things worse.
Credibility is key, and providing reasons for our beliefs, such as our timelines, is an important part of the project.