1.) 2030/2060/2100
2.) 10%/0.5%
3.) 0.01%/0.1%/20%
4.) little more
5.) don't know
6.) Invention of an adaptable algorithm capable of making novel and valuable discoveries in science and mathematics given limited resources.
Some annotations:
2.) I assign a lower probability to an extremely negative outcome because I believe it more likely that we will simply die rather than survive and suffer. Even if someone gets their AI only partly right, I don't think the result will be extremely negative. All in all, an extremely negative outcome seems rather unlikely. But merely negative (we're all dead) is already pretty bad.
4.) I believe that the SIAI currently needs only a little more support, because they haven't said what they would do with a lot more support (money...) right now. I also believe we need a partly empirical approach, as suggested by Ben Goertzel, to learn more about the nature of intelligence.
5.) I don't have the education or the time to research how likely other existential risks are compared to risks from AI.