This may be out-of-scope for the writeup, but I would love to get more detail on how this might be an important problem for IDA.
Yep, planning to put up a post about that soon. The short argument is something like:
The equivalent of an obfuscated argument for IDA is a decomposition that includes questions the model doesn’t know how to answer.
We can’t always tell the difference between an IDA tree that uses an obfuscated decomposition and gets the wrong answer, versus one that uses a good decomposition and gets the right answer, without unpacking the entire tree.
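A toy sketch of that last point, in case it helps (this is just an illustration, not anything from the upcoming post; the names `Node`, `build_tree`, `spot_check`, and `full_unpack` are made up): an "obfuscated" decomposition hides one subquestion the model can't answer among exponentially many leaves, so a verifier that only unpacks a few sampled root-to-leaf paths will usually miss it, and only unpacking the whole tree reliably separates the two cases.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    question: str
    answerable: bool = True              # can the model answer this node directly?
    children: list = field(default_factory=list)

def build_tree(depth: int, branching: int, obfuscated: bool) -> Node:
    """Build a decomposition tree; if `obfuscated`, route one unanswerable
    subquestion down a single random root-to-leaf path."""
    if depth == 0:
        return Node("leaf", answerable=not obfuscated)
    node = Node(f"question at depth {depth}")
    hide_in = random.randrange(branching) if obfuscated else -1
    node.children = [
        build_tree(depth - 1, branching, obfuscated and i == hide_in)
        for i in range(branching)
    ]
    return node

def spot_check(root: Node, budget: int) -> bool:
    """Unpack only `budget` random root-to-leaf paths; True means 'looks fine'."""
    for _ in range(budget):
        node = root
        while node.children:
            node = random.choice(node.children)
        if not node.answerable:
            return False
    return True

def full_unpack(node: Node) -> bool:
    """Only unpacking the entire tree reliably distinguishes the two trees."""
    if not node.children:
        return node.answerable
    return all(full_unpack(child) for child in node.children)

good = build_tree(depth=10, branching=2, obfuscated=False)   # ~1000 leaves
bad = build_tree(depth=10, branching=2, obfuscated=True)

print(spot_check(bad, budget=5), full_unpack(bad))    # usually True, False: spot-check misses the flaw
print(spot_check(good, budget=5), full_unpack(good))  # True, True
```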
I’d like to read this post but can’t find it in your post history. Any chance it might be sitting in your drafts folder? Also, do you know if the obfuscated argument problem (or its equivalent for IDA) is the main reason that research interest in IDA has seemingly declined a lot from 3 years ago?
Thanks!