Exchangeability of copies and monotonicity are both important. People always report monotonicity, because it buys you identification you could not otherwise get. But anyway, I shouldn't be the one who has to tell you this.
Also, it's not some of the assumptions, it's all of the assumptions needed to get from the data to your answer. Even if exchangeability holds in your setting, it might not hold for someone else who wants to reuse your design. If you don't write down what you assume, how are they supposed to know whether your design carries over?
Anyway, this is just the Scruffy AI mistake all over again. Actually, it's worse than that. The scientific attitude is to try to falsify, i.e. to go looking for reasons your model might fail. You are assuming by default that your model is reasonable, and not even leaving a paper trail.
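To make the monotonicity point concrete, here is a minimal simulation sketch (my own toy numbers, not anything from this thread) of the standard no-defiers setting: a binary instrument Z encourages a binary treatment D, and under monotonicity the Wald ratio identifies the effect among compliers, even though the overall treatment effect is a mix of strata.

```python
import random

random.seed(0)

# Hypothetical setup: three principal strata with assumed shares.
# Monotonicity = no "defiers": Z can push people into treatment, never out.
n = 200_000
p_always, p_complier = 0.2, 0.5       # remainder are never-takers
true_complier_effect = 2.0

sums = {0: [0.0, 0.0, 0], 1: [0.0, 0.0, 0]}  # z -> [sum_y, sum_d, count]
for _ in range(n):
    z = random.random() < 0.5          # randomized instrument
    u = random.random()                # latent stratum draw
    if u < p_always:
        d, effect = 1, 3.0             # always-taker; deliberately different effect
    elif u < p_always + p_complier:
        d, effect = int(z), true_complier_effect  # complier: treated iff encouraged
    else:
        d, effect = 0, 0.0             # never-taker
    y = random.gauss(0.0, 1.0) + d * effect
    s = sums[int(z)]
    s[0] += y; s[1] += d; s[2] += 1

ey = {z: sums[z][0] / sums[z][2] for z in (0, 1)}
ed = {z: sums[z][1] / sums[z][2] for z in (0, 1)}
wald = (ey[1] - ey[0]) / (ed[1] - ed[0])
print(round(wald, 2))  # recovers the complier effect, ~2.0
```

The always-takers' (different) effect cancels out of the numerator because their treatment status does not depend on Z; that cancellation is exactly what monotonicity guarantees, and it is why the assumption gets reported.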