TDD is a declarative/generative process: you first describe what you want, then tell the computer how to achieve it, then test the result against your initial requirement.
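As a minimal sketch of that process (the `add` function and its test are hypothetical examples, not from the original): the test states the requirement first, the implementation comes after, and running the test checks the result against the requirement.

```python
# Step 1: describe what you want, as a test written before any implementation.
def test_add():
    assert add(2, 3) == 5  # the requirement, stated up front

# Step 2: tell the computer how to achieve it.
def add(a, b):
    return a + b

# Step 3: test the result against the initial requirement.
test_add()
```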
With respect to rationality, this cannot be taken literally, because you cannot generate reality, unless you treat it as a kind of goal setting. In that case it would be akin to writing a SMART goal: how would you know whether you have achieved it? But for beliefs about reality, the only test you can run is: how well does my model match reality? As Lumifer pointed out, though, there's no simple way to check a belief against a test that returns only true/false information. For basically any interesting belief, you need to use probabilities and test within a Bayesian framework.
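A quick sketch of what "testing within a Bayesian framework" means in practice: instead of a pass/fail verdict, evidence shifts a probability. The scenario and numbers below are illustrative assumptions, not from the original.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H|E) given the prior P(H) and the
    likelihoods of the evidence under H and under not-H."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Hypothetical belief: "it will rain today", prior 0.3.
# Evidence: dark clouds, seen 80% of the time before rain, 20% otherwise.
posterior = bayes_update(0.3, 0.8, 0.2)
# The evidence raises the probability rather than returning true/false.
```

The belief is never marked simply "passed" or "failed"; each observation just moves the probability up or down.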