Test Driven Thinking
Programmers do something called Test Driven Development. Basically, they write tests that say “I expect my code to do this”, then write the code itself, and if anything they write later breaks one of those tests, they’ll be notified.
Wouldn’t it be cool if there was Test Driven Thinking?
Write tests: “I expect that this is true.”
Think: “I claim that A is true. I claim that B is true.”
If A or B causes any of your tests to fail, you’d be notified.
It’d be awesome if you could apply TDT and be notified when your tests fail, but this seems very difficult to implement.
I’m not sure what a lesser but still useful version would look like.
Maybe this idea could serve as some sort of intuition pump for intellectual hygiene (“What do you think you know, and why do you think you know it?”). I.e., having understood the idea of TDT, maybe it’d motivate and help people apply intellectual hygiene, which is sort of like a manual version of TDT, where you’re the one constantly running the tests.
This is an interesting restatement of beliefs paying rent—in order to qualify as a useful belief to hold, it must be testable. And if it’s testable, you should test it.
Atheism..?
I believe that jumping off tall buildings without a parachute is a bad idea. Should I test it?
Atheism can be legitimately viewed as a lack of belief, if you properly hedge your claims about whether or not it’s possible for gods or other ethereal beings to exist.
Also, testing a belief doesn’t necessarily mean testing it in full. You’ve probably tested your belief in the lethality of long drops partially by falling out of trees as a child (or at least, I did).
Not quite, that goes by the name of agnosticism. An atheist answers the question “Do gods exist?” by saying “No”.
The results of all those tests indicate that falls are not lethal, of course :-P
Provisionally accepting your distinction between atheism and agnosticism, in what way is the former useful and the latter not?
That’s where an untested auxiliary belief figures in—“if something hurts in proportion to variable x (i.e. the height of the drop), experiencing that thing when x is very large will probably kill you”.
That’s basically the Duhem-Quine spiel, right? Which is why strict falsificationism doesn’t quite work. But that’s not to say a weaker form of falsificationism can’t work: a network of ideas is useful to the degree that nodes in the network are testable. A fully isolated network (such as a system of theology) is useless.
Your definition of atheism doesn’t seem to reflect the way the word is used. A good portion of self-identified atheists would in fact be agnostics under your definition. In fact, every flavour of atheism I would consider compatible with general LW beliefs would be agnosticism since we can only claim that P(god) is very small.
Very few people reason in a way that uses probabilities.
True, but I would consider the most common chain of reasoning for atheism (Occam’s razor, therefore no God) equivalent to thinking in terms of probabilities even if probabilities aren’t explicitly mentioned.
Occam’s razor has little to do with probabilities.
Then why accept the simplest solution instead of say, the most beautiful solution, or the most intuitive solution?
Because you decide to accept the simplest solution. At least that’s true for most people. Very few people reason with probabilities.
Good question. I’d argue that actually accepting the most elegant solution is a better heuristic than accepting the simplest.
As an atheist, I answer the question “Do gods exist?” by saying “With the evidence we have right now, it is most likely that they do not.”
For predictions there are tools like PredictionBook or Foresight Exchange.
Maybe the programming analogy can be pushed further by applying pair programming: make testing a cooperative thing by mutually ‘running’ each other’s tests. This is also suggested by “Others’ predictions of your performance are usually more accurate” and “Bet Your Friends to Be More Right”.
Your description of TDD is slightly incomplete: the steps include, after writing the test, running the test when you expect it to fail. The idea being that if it doesn’t fail, you have either written an ineffective test (this is more likely than one might think) or the code under test actually already handles that case.
Then you write the code (as little code as needed) and confirm that the test now passes where it didn’t before, validating that work.
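To make that loop concrete, here’s a minimal sketch in Python; the function and its expected behavior are invented for illustration:

```python
import unittest

def slugify(title):
    # Step 2 ("green"): written only after watching the test below
    # fail, and with just enough code to make it pass.
    return title.lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        # Step 1 ("red"): state the expectation before the code
        # exists, run the test, and confirm it actually fails.
        self.assertEqual(slugify("Test Driven Thinking"),
                         "test-driven-thinking")

if __name__ == "__main__":
    unittest.main()
```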
I work in software, and the hardest part is explaining to people that until you finish a test plan your requirements are incomplete.
I can see two interpretations.
In one, this is equivalent to:
1. Write down your forecasts and explicitly list reasons for your predictions.
2. Check whether the forecast was correct.
3. If it was not, re-evaluate the reasons listed in step 1.
In the other, this is just an internal consistency check: if you believe A, B, and C, make sure they can co-exist and do not contradict each other.
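The second interpretation is easy to sketch. A toy version in Python, with the beliefs and implication rules invented for the example:

```python
# Internal consistency check: beliefs are truth-value assignments,
# rules are implications of the form "premise implies conclusion".
beliefs = {"A": True, "B": True, "C": False}
rules = [("A", "B"), ("B", "C")]

for premise, conclusion in rules:
    if beliefs[premise] and not beliefs[conclusion]:
        print(f"Inconsistent: {premise} is held true but {conclusion} is not")
```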
TDD is a declarative/generative process: you describe first what you want, then tell the computer how to achieve it, then test that against your initial requirement.
WRT rationality, this cannot be taken literally, because you cannot generate reality, unless it’s something like goal setting. In that case it would be akin to writing a SMART goal: how would you know if you have achieved it?
But in the case of beliefs about reality, the only test you can run is: how well does my model match reality?
As Lumifer pointed out, though, there’s no simple way to check beliefs against a simple test, which gives you only true/false information. For basically all interesting beliefs, you need to use probabilities and test within a Bayesian framework.
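For concreteness, a single Bayesian “test” of a belief might look like this sketch (the prior and likelihoods are made-up numbers):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Evidence that is more likely under the hypothesis than under its
# negation raises the belief; the reverse would lower it.
prior = 0.70
posterior = bayes_update(prior, p_e_given_h=0.9, p_e_given_not_h=0.3)
print(f"P(H): {prior:.2f} -> {posterior:.2f}")  # P(H): 0.70 -> 0.88
```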
For predictions regarding stock prices, it is possible to state your expectations in the form of ranges over time, and you will be informed when these ranges are left. It seems straightforward to extend this to more complex conditions.
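A bare-bones version of such a range test, with made-up bounds and prices:

```python
# Expectation: "the price stays between low and high over the period."
low, high = 95.0, 110.0                        # hypothetical expected range
observed_prices = [101.2, 99.8, 112.5, 108.0]  # made-up daily closes

for day, price in enumerate(observed_prices, start=1):
    if not low <= price <= high:
        print(f"Day {day}: price {price} left the range [{low}, {high}]")
```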
That seems like a job for an expert system—using formal reasoning from premises (as long as you can translate them comfortably into symbols), identifying whether a new fact contradicts any old fact...
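A crude sketch of that contradiction check, representing facts as literals with “~” for negation (the knowledge base is invented for illustration):

```python
known_facts = {"A", "B", "~C"}  # toy knowledge base of literals

def negate(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def assert_fact(fact):
    # A new fact contradicts the base if its negation is already known.
    if negate(fact) in known_facts:
        print(f"Contradiction: {fact} conflicts with known {negate(fact)}")
    else:
        known_facts.add(fact)

assert_fact("C")  # flags the clash with ~C
```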
What exactly separates this from general critical thinking?
“Critical thinking” is a buzzword.
How so?
Different people mean different things by it.
Am I correct to rephrase your idea as “People should develop a habit to apply reductio ad absurdum and, to some extent, absurdity heuristic more often”?
No. I think it applies to more than absurd ideas.