I guess more succinctly, there is no abstract concept of ‘ought’. The label ‘ought’ just refers to an algorithm A, a desired outcome of that algorithm O, an input space X of things the algorithm can operate on, and an assessment of the probability that the outcome happens under the algorithm, P(A(X) = O). Up to the limit of sensory fidelity, this is all in principle experimentally detectable, no?
Just to be a little clearer: saying that “I ought to do X” means “There exists some goal Y that I want to achieve; there exists some set of variables D which I can manipulate to bring about Y; X is an algorithm for manipulating the variables in D to produce Y; and, according to my current state of knowledge, I assess the probability of this model of X(D) yielding Y to be high enough that, whatever physical resources it costs me to attempt X(D), the trade-off works out, as a Bayesian, in favor of actually doing it.” That is, Payoff(Y) · P(I was right in modeling the algorithm X(D) as producing Y) > Cost(~Y) · P(I was incorrect in modeling the algorithm X(D)), or some similar decision rule.
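For concreteness, here is a minimal sketch of that decision rule in Python; the function name, parameter names, and the numbers in the example are just illustrative assumptions, not anything taken from the argument itself:

```python
# A minimal sketch of the decision rule described above.
# All names and numbers here are illustrative, not canonical.

def ought_to_do(payoff_y: float, cost_not_y: float, p_model_correct: float) -> bool:
    """Return True if attempting X(D) is worth it under the stated rule:
    the expected payoff of the model being right exceeds the expected
    cost of it being wrong."""
    p_model_wrong = 1.0 - p_model_correct
    return payoff_y * p_model_correct > cost_not_y * p_model_wrong

# Example: a goal worth 10 units, a failure costing 3 units, and a model of
# X(D) judged 40% likely to be correct -- the rule says "do it".
print(ought_to_do(payoff_y=10.0, cost_not_y=3.0, p_model_correct=0.4))  # True
```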