There doesn’t seem to be a lot of work on adversarial examples for random forests. This paper was the only one I found, but it says:
On a digit recognition task, we demonstrate that both gradient boosted trees and random forests are extremely susceptible to evasions.
Also, if you look at Figures 3 and 4 in the paper, the RF classifier appears to be much more susceptible to adversarial examples than the NN classifier.
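To get a rough feel for what "susceptible to evasions" means in practice, here is a minimal sketch of an evasion attack against a scikit-learn random forest on the digits dataset. This is not the attack used in the paper; it is just a simple greedy pixel-perturbation search, and the function and parameter choices below are my own illustrative assumptions.

```python
# Minimal sketch (not the paper's attack): greedily perturb pixels of a digit
# until a random forest's prediction flips, to illustrate how few feature
# changes can be enough to evade a tree ensemble.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 digit images, pixel values 0..16
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def greedy_evasion(x, true_label, max_changes=10):
    """Perturb one pixel at a time to reduce the forest's confidence in the true class."""
    x_adv = x.copy()
    for _ in range(max_changes):
        if clf.predict([x_adv])[0] != true_label:
            break  # prediction already flipped
        best_pixel, best_value = None, None
        best_prob = clf.predict_proba([x_adv])[0][true_label]
        for i in range(len(x_adv)):
            for v in (0.0, 16.0):  # try pushing each pixel to an extreme value
                candidate = x_adv.copy()
                candidate[i] = v
                prob = clf.predict_proba([candidate])[0][true_label]
                if prob < best_prob:
                    best_pixel, best_value, best_prob = i, v, prob
        if best_pixel is None:
            break  # no single-pixel change helps any further
        x_adv[best_pixel] = best_value
    return x_adv

x0, y0 = X_test[0], y_test[0]
x_adv = greedy_evasion(x0, y0)
print("original prediction:", clf.predict([x0])[0],
      "| adversarial prediction:", clf.predict([x_adv])[0],
      "| pixels changed:", int(np.sum(x0 != x_adv)))
```

In my experience this kind of naive search already flips many test digits with only a handful of pixel changes, which is consistent with the paper's claim that tree ensembles are easy to evade.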