A key sticking point seems to be the lack of a highly plausible concrete scenario.
IMO coming up with highly plausible concrete scenarios should be a major priority for people working on AI safety. It seems very useful both for getting other researchers involved and for understanding the problem and making progress.
In terms of talking to other researchers, in-person conversations like the ones you’re having seem like a great way to feel things out before writing public documents.