“I’m grateful to HAL for telling me that cows have feelings. Now I’m pretty sure that, even if HAL had a glitch and mistakenly told me that cows are devoid of feeling, eating them would still be wrong.”
That’s valid reasoning. The right way to formalize it is to have two worlds, one where eating cows is okay and another where eating cows is not okay, without any “nosy preferences”. Then you receive probabilistic evidence about which world you’re in, and deal with it in the usual way.
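The formalization described above can be sketched as an ordinary Bayesian update over the two worlds. Everything numeric here is a made-up assumption for illustration: the uniform prior and the reliability figures for HAL's report are hypothetical, not from the original discussion.

```python
# Two worlds: one where eating cows is not okay, one where it is.
# HAL's report ("cows have feelings") is probabilistic evidence
# about which world we're in. All numbers below are hypothetical.

prior = {"not_okay": 0.5, "okay": 0.5}

# Assumed likelihoods of HAL issuing this report in each world:
# HAL is fairly reliable, so the report is much more likely in the
# world where cows really do have feelings.
likelihood = {"not_okay": 0.9, "okay": 0.2}

# Standard Bayes: P(world | report) is proportional to
# P(report | world) * P(world), then normalize.
unnormalized = {w: prior[w] * likelihood[w] for w in prior}
total = sum(unnormalized.values())
posterior = {w: p / total for w, p in unnormalized.items()}

print(posterior)  # probability mass shifts toward the "not_okay" world
```

The point of the setup is that the preferences live inside each world (no "nosy preferences" reaching across worlds); only the credences move in response to evidence, and a later discovery that HAL glitched would just be another update of the same kind.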
Are you talking about something like this?