It’s unfortunate I used the word “optimism” in my comment, since my primary disagreement is whether the traditional sources of AI risk are compelling.
May I beseech you to be more careful about using “optimism” and words like it in the future? I’m really worried about strategy researchers and decision makers getting the wrong impression from AI safety researchers about how hard the overall AI risk problem is. For some reason I keep seeing people say they’re “optimistic” (or other words to that effect) when they mean optimistic about some sub-problem of AI risk rather than AI risk as a whole, without making that clear. In many cases it’s pretty predictable that people outside technical AI safety research (or even inside it, as in this case) will misinterpret that as optimism about AI risk as a whole.