Hi Gerald, thanks for your comment! Note that I am arguing neither for nor against doom. What I am arguing is the following: when you are trying to establish AI safety, it is not good practice to group AI with technologies that we were able to iteratively improve towards safety. The point is that, without further argument, you could just as easily make the reverse argument and it would have roughly equal force:
P1 Many new technologies are unsafe and impossible to iteratively improve (e.g. airships).
P2 AI is a new technology.
C1 AI is probably unsafe and impossible to iteratively improve.
That is why I argue this is not a good argument template: through survivorship bias in P1, you'll always be able to sneak in whatever it is you're trying to prove.
With respect to your arguments about doom scenarios, I think they are really interesting, and I'd be excited to read a post with your thoughts (maybe you already have one?).