The POC || GTFO article was very interesting.
I do worry, though, that it mixes together pragmatics and epistemics (even though it does try to distinguish the two). There's a distinction between when it's reasonable to believe something and when it's reasonable to act on it.
For example, when I was working as a web developer, there were lots of potential bugs where it would have made sense to believe there was a decent chance we were vulnerable, but pragmatically we couldn't spare the time to fix every potential security issue. That doesn't mean I should have walked around saying "therefore they aren't there," though.
I'll admit, if someone randomly messaged you some of the AI risk arguments and no one else was worried about them, it'd probably be reasonable to conclude that there's a flaw somewhere and set them aside.
On the other hand, when even two deep learning Turing Award winners are starting to get concerned, and the stakes are so high, I think we should be a bit more cautious about dismissing the arguments out of hand.
I agree, which is why I have an entire section or two about why I think ML/AI isn't like computer security.