It’s refreshing to see the non-anastrophic arrangement in the title.
What LessWrong would call the “system” of rationality is the rigorous mathematical application of Bayes’ Theorem. The “one thousand tips” you speak of are what we get when we apply this system to itself to quickly guess its behavior under certain conditions, as carrying around a calculator and constantly applying the system in everyday life is rather impractical.
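For concreteness, one step of "constantly applying the system" would look something like the sketch below. The scenario and every number in it are invented purely for illustration; the point is only to show what a single explicit update involves.

```python
# A single Bayesian update, with made-up numbers for illustration.
# Hypothesis H: "my coworker is annoyed with me."
# Evidence E: "she gave a one-word reply to my email."

prior_h = 0.10          # P(H): how likely H seemed before the evidence
p_e_given_h = 0.60      # P(E|H): chance of a terse reply if she is annoyed
p_e_given_not_h = 0.30  # P(E|not H): chance of a terse reply anyway

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

print(posterior_h)  # ~0.18 -- the evidence nudges the belief up, but only a little
```

One such update is easy; the trouble is doing this honestly, with defensible likelihoods, for every belief touched by every piece of evidence.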
Of course, Bayes’ theorem has the obvious problem that carrying out all of the necessary calculations is practically impossible. I mentioned a bunch of properties that a good system (to take a hint from roryokane, an algorithm) ought to have; surely we can come up with something that has those properties, without being impossible for a human to execute.
When creating such a general algorithm, we must keep a human limitation in mind: subconscious, unsystemized thought. A practical algorithm must account for and exploit it.
There are two types of subconscious thought that an algorithm has to deal with. One is the top-level type that is part of being a human. It is only our subconscious that can fire off the process of choosing to apply a certain conscious algorithm. We won’t even start running our algorithm if we don’t notice that it applies in this situation, or if we don’t remember it, or if we feel bored by the thought of it. So our algorithm has to be friendly to our subconscious in these ways. Splitting the algorithm into multiple algorithms for different situations may be one way of accomplishing that.
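Very loosely, and only as a sketch (the situations and mini-procedures here are invented, not anything prescribed), "splitting the algorithm" might look like a dispatcher from a recognized situation to a small explicit procedure:

```python
# Illustrative only: the trigger ("I noticed I'm in situation X") is itself
# subconscious; only the body of each branch is a conscious algorithm.

def handle(situation: str) -> str:
    small_algorithms = {
        "big purchase": "list alternatives, estimate the value of each, sleep on it",
        "disagreement": "restate the other view in your own words before replying",
        "planning": "write down the deadline, then discount your optimism",
    }
    # Falling through to "no explicit algorithm" is the default, not an error:
    # most of the time the subconscious handles things unaided.
    return small_algorithms.get(situation, "no explicit algorithm; act on judgement")

print(handle("disagreement"))
```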
The other type of subconscious thought is black-box function calls to our subconscious that our algorithm explicitly uses. This includes steps like “choose which of these possibilities feels more likely” or “choose the option that looks most important”. We would call subconscious functions instead of well-defined sub-algorithms because they are much faster, and time is valuable. I suppose we just have to use our judgement to decide whether a subroutine should be run explicitly or in our subconscious. (Try not to let the algorithm get stuck recursively calculating whether the time spent calculating the answer consciously instead of subconsciously would be worth the better answer.)
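In pseudocode terms, the tradeoff looks something like the sketch below. It is purely illustrative; the thresholds and the `stakes`/`seconds_available` inputs stand in for judgement calls, not measurements.

```python
# Illustrative sketch of the explicit-vs-subconscious tradeoff described above.

def subconscious_guess(options):
    """Fast, cheap, black-box: take whatever 'feels' best."""
    return options[0]

def explicit_comparison(options, score):
    """Slow, conscious sub-algorithm: actually score every option."""
    return max(options, key=score)

def choose(options, score, stakes, seconds_available):
    # One fixed, non-recursive test decides which subroutine to run.
    # Crucially, we do NOT deliberate about whether to deliberate about
    # whether to deliberate -- that is the regress to avoid.
    if stakes > 100 and seconds_available > 60:
        return explicit_comparison(options, score)
    return subconscious_guess(options)

# Example: a low-stakes, time-pressured choice gets the fast path.
print(choose(["tea", "coffee"], score=len, stakes=1, seconds_available=5))
```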
This is … one of my favorite posts, ever.
I suspect that in some cases the subconscious function will be more accurate than most explicit sub-algorithms, and you would choose it for that reason.