A question: is it relevant to your current problem formulation that authorised people should still have reasonable access to the diamond? In other words, is it important here that the system must still yield to actions or input from certain humans, i.e. be interruptible and corrigible? Or, in ML terms, must it avoid both false negatives and false positives when detecting or preventing intrusion scenarios?
I imagine there is an algorithmically more trivial way to make the system both “honest” and “secured”: secure it so heavily that almost certainly nobody can access the diamond at all.