The way you typically converge an adaptive simulation is to start with a cheap coarse-grained approximation, then:
1. Run your simulation.
2. Check to see if it was accurate enough on the whole for you.
2b. If so, quit.
3. Do some a posteriori error estimation to find out where the coarseness was most damaging to your accuracy.
3b. Replace the coarse discretization in those locations (or time steps, models, etc.) with a more refined version.
4. Go back to step 1.
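To make the loop concrete, here's a minimal sketch in Python, using adaptive trapezoidal integration of a 1D function as the stand-in "simulation". The integrand, the tolerance, and the refine-the-worst-half rule are all illustrative choices of mine, not anything from a particular code.

```python
import math

def f(x):
    # Stand-in for the expensive simulation kernel.
    return math.exp(-x * x)

def panel_estimates(a, b):
    """Coarse (one-panel) and refined (two-panel) trapezoid estimates on [a, b]."""
    coarse = 0.5 * (b - a) * (f(a) + f(b))
    mid = 0.5 * (a + b)
    fine = 0.5 * (mid - a) * (f(a) + f(mid)) + 0.5 * (b - mid) * (f(mid) + f(b))
    return coarse, fine

def adaptive_integrate(a, b, tol=1e-8, max_iters=50):
    cells = [(a, b)]  # start with a cheap coarse-grained approximation
    for _ in range(max_iters):
        # Step 1: run the "simulation" on every cell.
        results = [panel_estimates(lo, hi) for lo, hi in cells]
        total = sum(fine for _, fine in results)

        # Steps 2/2b: a posteriori error indicator per cell; quit if the
        # result is accurate enough on the whole.
        errors = [abs(fine - coarse) for coarse, fine in results]
        if sum(errors) < tol:
            return total

        # Steps 3/3b: refine only the cells where coarseness hurt the most.
        threshold = 0.5 * max(errors)
        new_cells = []
        for (lo, hi), err in zip(cells, errors):
            if err >= threshold:
                mid = 0.5 * (lo + hi)
                new_cells.extend([(lo, mid), (mid, hi)])
            else:
                new_cells.append((lo, hi))
        cells = new_cells  # Step 4: go back to step 1.
    return total

print(adaptive_integrate(0.0, 2.0))  # should approach (sqrt(pi)/2)*erf(2) ≈ 0.882081
```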
I’m not sure how this analogy affects astrophysicists’ decision-making processes, though. After seeing odd results, what do you say to yourself (and any hypothetical omniscient listeners) in a loud voice?
“Wow, that certainly looked wrong! Clearly something funny is going on which requires more investigation!” (saving the entire universe from fate 2b)
or
“Well, that’s close enough for me! Nothing strange or erroneous going on there!” (saving our local chunk of universe from being refined-into-something-else via fate 3b)
Personally I would say the latter, but historically the UHECR community has been prone to say things like the former. (E.g., when AGASA failed to detect the GZK cutoff, everyone was like “there must be new physics allowing particles to evade the cutoff!”, as opposed to “there must be something wrong with the experiment”—but given that all later experiments have seen a cutoff, it’s most likely that AGASA did indeed do something wrong. OTOH I can’t recall anyone making “planetarium”-like hypotheses, except jokingly (I suppose).)
EDIT: Also, I can’t count the times people have claimed to detect an anisotropy in the UHECR arrival direction distribution and then retracted the claim after more statistics became available. Which doesn’t surprise me, given the badly unBayesian ad-hockeries (to borrow E.T. Jaynes’ term) they use to test them. And now, I’ll tap out for, ahem, decision-theoretical reasons.