It isn’t surprising that they are proposing it; what is surprising is that their argument that A->B seems to check out at first glance. So if your previous model was that we have no idea whether A->B is true or not, then you should be updating.
My previous model already incorporated the surface features of the hypothetical; that’s how I got my initial reaction. What is new about THIS presentation of A->B that I didn’t expect?
Is there a concrete example to use? I think there is a lot of variation possible across hypotheticals and across different participants’ past experiences with hypotheticals. In a perfect agent, there is no update possible on fictional evidence. In real-world agents, it’ll depend entirely on what hasn’t already been considered.
Here’s a concrete example. Imagine a trolley problem with one person on one track and a million people on the other. If Bob doesn’t want to engage with the hypothetical because it is “unrealistic”, then his mind has most likely already registered that it would be very hard to argue against switching were he to accept the hypothetical. Many people will do this every time a hypothetical comes up and act as though they have no idea whether its conclusion holds or not. However, this isn’t quite true: Bob already knows that he would find it very hard to argue against the point being made; indeed, if it were easy to argue against, Bob would probably do that instead of dodging the hypothetical. So Bob has to update, but only on the first instance of such a problem.
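To make the updating language concrete, here is a minimal sketch of the Bayes-rule calculation being gestured at. All numbers are hypothetical and purely illustrative; the point is only that observing Bob dodge the hypothetical, rather than rebut it, shifts probability toward “the argument would be hard to rebut”:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# H = "Bob would find the argument hard to rebut if he engaged".
# E = "Bob dodges the hypothetical instead of rebutting it".
prior = 0.5  # before seeing his reaction: no idea either way
# Dodging is assumed (illustratively) much likelier when rebuttal is hard:
posterior = bayes_update(prior, p_evidence_given_h=0.8,
                         p_evidence_given_not_h=0.2)
print(round(posterior, 2))  # 0.8
```

The same calculation applies symmetrically: the presenter of the hypothetical can treat Bob’s refusal as evidence about how Bob weighs the fictional case.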
Maybe stating it in terms of updating obscures things rather than clearing them up? I’m actually not sure I’d write this article the same way if I were writing it again.
I can’t follow what priors Bob has in this case and what update you think he should make. I do think that in this example, the presenter of the hypothetical should update on the evidence that Bob doesn’t find the fictional case worth discussing.
I think stating things in terms of beliefs (priors and updates) is extremely helpful when discussing communication and reflective knowledge. But I haven’t seen the detail needed for it to be a compelling (or even understandable) point on this specific topic.