I think the words “optimism” and “pessimism” are really confusing, because they conflate three distinct things: the probability of an event, its utility, and your steam for it:
You can be “optimistic” because you believe a good event is likely (or a bad one unlikely); because you believe a future event (maybe even an unlikely one) would be good; or because you have a plan, idea, or stance for which you have high recursive self-trust, i.e. a reflectively stable prediction that you will keep engaging in it.
So you could be “pessimistic” even while believing that extinction due to AI is unlikely (say, <1%), because you find it super bad and you currently don’t have anything concrete that you can latch onto to decrease it.
Or (as in the case of e.g. MIRI) you might have (“indefinitely optimistic”?) steam for reducing AI risk, find extinction moderately to extremely likely, and think it would be super bad.
Or you might think that extinction would be super bad, believe it’s unlikely (as Belrose and Pope do), and have steam for both AI and AI alignment.
But the terms are apparently confusing to many people, and I think using them can “leak” optimism or pessimism from one category into another, which can lead to worse decisions and incorrect beliefs.
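To keep the three axes from leaking into each other, here is a minimal sketch (in Python; the field names and the example numbers are purely illustrative choices of mine, not anything from the positions described above):

```python
from dataclasses import dataclass

@dataclass
class Stance:
    probability: float  # how likely you think the event is, in [0, 1]
    utility: float      # how good (positive) or bad (negative) the event would be
    steam: float        # how much stable drive you have to act on it, in [0, 1]

# "Pessimistic" despite low probability: extinction seems unlikely,
# but super bad, and there is nothing concrete to latch onto.
no_plan = Stance(probability=0.01, utility=-1e9, steam=0.0)

# MIRI-like: extinction moderately to extremely likely, super bad,
# but high ("indefinitely optimistic"?) steam for reducing the risk.
high_risk_high_steam = Stance(probability=0.7, utility=-1e9, steam=0.9)

# Belrose/Pope-like: extinction super bad but unlikely, with steam
# for both AI and AI alignment.
low_risk_high_steam = Stance(probability=0.01, utility=-1e9, steam=0.9)
```

Calling any one of these “optimistic” or “pessimistic” collapses three numbers into one word, which is exactly where the confusion comes from.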