If you want to talk about the behavior of the AI being uninformative, you need to talk about the distribution over possible values of R. If the distribution is just "it exists" or "it doesn't," then it's clear that the AI will just have to satisfy R in every case, and you don't get anything beyond the restriction itself.
If there is some broader distribution, then it's less clear what happens, but as far as I can tell this is no better than simply having the AI care about an unknown requirement drawn from that distribution.
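To spell out the comparison a little (my own notation, not anything fixed by the setup above): let $\mathcal{R}$ be the set of candidate requirements and $p$ a distribution over $\mathcal{R}$. An agent that wants to satisfy an uncertain requirement is, in effect, maximizing

$\mathbb{E}_{R \sim p}\bigl[\mathbf{1}\{R \text{ is satisfied}\}\bigr]$.

If $p$ puts all its mass on a single R (the "it exists / it doesn't" case), this collapses to "satisfy R," i.e. nothing beyond the restriction itself; for any broader $p$, it is exactly "care about an unknown requirement drawn from $p$," which is the thing we could have asked for directly.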
There is an R; it's given. And there is a distribution over possible R's for agents that only know the data E, F, B(), and v.
But this approach seems very wobbly to me; I no longer see much potential in it.