The most common response I got when I talked to coworkers about AI risk wasn't denial or an attempt to minimize the problem. It was generally something like "That sounds really interesting. If a company working on the problem was paying a lot, I would consider jumping ship."
If that's true, I would assume that the people who work on creating the AI guidelines understand the problem. This would in turn suggest that they take reasonable steps to address it.
Is your model that the people writing the guidelines would be well-intentioned but lack the political power to actually enforce useful guidelines?