1) Thou shalt never conflate the truth or falsehood of a proposition with any other characteristic, be it the consequences of the proposition if it be true, or the consequences of believing it for thyself personally, or the pleasing or unpleasant aesthetics of the belief itself. Furthermore, thou shalt never let thy feelings regarding the matter overrule what thy critical faculties tell thee, or in any other way act as if reality might adjust itself in accordance with thine own wishes.
The map is not the territory. Rationality is about making effective decisions.
If you as an American ask me whether I come from Berlin, I’m going to say “Yes”: I was born in Berlin. If someone from Berlin asks me, I could say: “No, I was born in Spandau.” Spandau is a district of Berlin with a complex history. Both answers are true; which one is right depends on the context in which the question is asked.
When doing biological modeling there is often a tradeoff between the complexity of the model and its accuracy. Which model you want to use depends on the purpose: if you want to model a whole brain, you are going to use a less complex model of a neuron than if you want to model 100 neurons and how those neurons interact with each other.
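To make that tradeoff concrete, here is a minimal sketch (not from the original discussion; all parameters are illustrative) of a leaky integrate-and-fire neuron, the kind of cheap one-equation model you might scale toward whole-brain simulations, whereas a small circuit of 100 neurons could afford a detailed conductance-based model instead:

```python
def simulate_lif(i_input, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0):
    """Leaky integrate-and-fire neuron: a single state variable per
    neuron, which is what makes it cheap enough for very large networks.
    It deliberately ignores the ion-channel dynamics that a detailed
    (e.g. Hodgkin-Huxley) model would capture."""
    v = v_rest
    spike_times = []
    for step, i in enumerate(i_input):
        v += dt / tau * (v_rest - v + i)   # leaky integration of the input current
        if v >= v_thresh:                  # threshold crossing counts as a spike
            spike_times.append(step * dt)
            v = v_reset                    # reset instead of modeling the spike shape
    return spike_times

spike_times = simulate_lif([20.0] * 1000)  # 100 ms of constant drive
```

Swapping in a more detailed neuron model multiplies the per-neuron cost, which is exactly the complexity/accuracy tradeoff described above.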
Beauty is a guiding principle in theoretical physics.
Feelings are a valuable source of information. Shutting down any source of information is not a good idea.
6) Thou shalt never judge a real or proposed action by any metric other than this: The expected consequences of the action, both direct and indirect, be they subtle or blatant, taking into account all relevant information available at the time of deciding and no more or less than this.
Basically you are saying that Eliezer is wrong about timeless decision theory.
The map is not the territory. Rationality is about making effective decisions.
I profess I entirely fail to see how your post refutes the quoted paragraph. Yes, using models is useful, but that is in no way the same as falling prey to wishful thinking. I keep trying to re-read that paragraph to see how it might be interpreted in a way that makes your reply seem natural, but my best guess is that you might have read “Do not let feelings overrule critical thinking or in any other way engage in wishful thinking” as “ignore your feelings”. And I still don’t see how saying models are useful flows from there.
Basically you are saying that Eliezer is wrong about timeless decision theory.
As far as I know, that sequence is meant to detail ways in which your actions might have indirect/timeless/acausal consequences, and therefore supplements rather than contradicts consequentialism. If I’m wrong, please explain how and why.
Yes, using models is useful, but that is in no way the same as falling prey to wishful thinking.
Your paragraph doesn’t mention anything about wishful thinking. Wishful thinking might be the only thing that comes to mind for you when you think about allowing feelings to override critical thinking, but it isn’t the only case.
If a sudden feeling of fear arises in me and I can’t explain through rational thought why the situation is dangerous or why I would feel fear, I still remove myself from the situation.
There are studies of nurses showing that if a nurse gets a feeling that a patient is in a critical condition, the patient should still get extra supervision even when the nurse has no explicit evidence that the patient is in a critical condition.
There is good evidence that the nurse should let her intuitive feelings overrule critical thinking when the cost of a false positive is low but the cost of a false negative is high.
In case you want to argue that you can make a rational decision by doing a utility calculation in your head: that might work in the case of the nurses, but there are plenty of situations where the time to do that calculation isn’t available and it’s very useful to respond immediately.
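For what such a calculation would look like, here is a sketch of the asymmetric-cost rule behind the nurse example; the numbers are invented for illustration:

```python
def should_supervise(p_critical, cost_false_positive, cost_false_negative):
    """Act iff the expected cost of acting (an unnecessary check) is lower
    than the expected cost of not acting (a missed deterioration)."""
    expected_cost_act = (1 - p_critical) * cost_false_positive
    expected_cost_ignore = p_critical * cost_false_negative
    return expected_cost_act < expected_cost_ignore

# Even a weak 5% hunch justifies supervision when missing a critical
# patient is a thousand times as costly as an unnecessary check:
# expected cost 0.95 vs 50.
decision = should_supervise(0.05, 1.0, 1000.0)  # True
```

The point in the text stands either way: the explicit calculation works when there is time for it, and the trained intuition is the fast approximation of the same rule.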
If I dance intimately with a woman who is a stranger, then it’s very important that I act immediately when I get the feeling that something isn’t right. When I started dancing I tried to build a rational model of what intimacy is or isn’t okay and to act based on mental rules. It doesn’t work that way.
That requires that I can tell the feeling of “touching a woman feels good” apart from “this interaction doesn’t flow well, it’s better to reduce intimacy”. Understanding emotions and being able to tell different ones apart is useful. There are feelings that you should allow to override critical analysis in specific situations, and other feelings that you shouldn’t allow to override critical analysis.
In biological modeling, the feelings of the person doing the modeling aren’t so central that they should override critical thought, but the model still gets optimized for a certain use case, and good models often trade some accuracy for simplicity. Simple models are more beautiful, and beautiful models should be preferred over ugly, complicated ones if both models predict reality equally well.
As far as I know, that sequence is meant to detail ways in which your actions might have indirect/timeless/acausal consequences, and therefore supplements rather than contradicts consequentialism. If I’m wrong, please explain how and why.
It’s not about the indirect consequences of the action but about the consequences of being the kind of person who engages in specific actions.
Perhaps “consequences” needs to be tabooed. A consequence of something is something that is caused by it, but what does “cause” mean? That’s part of what makes Newcomb so paradoxical: it’s generally accepted that cause must precede effect, but the hypothetical is set up to treat Omega’s actions as depending on a decision after those actions. Are the contents of the boxes included in the category of “consequences” of the choice of how many boxes to take?
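To make that ambiguity concrete, here is the standard expected-value arithmetic for Newcomb’s problem under the evidential reading, with the usual payoffs ($1,000,000 and $1,000) and an assumed 99% predictor accuracy (both numbers are the conventional illustration, not from this thread); whether this calculation is even legitimate is exactly what separates the decision theories:

```python
def newcomb_expected_values(accuracy, big=1_000_000, small=1_000):
    """Evidential expected values: the prediction is treated as evidence
    about the choice (the big box is full iff one-boxing was predicted).
    A causal decision theorist would refuse this move, hold the box
    contents fixed, and always two-box."""
    ev_one_box = accuracy * big                  # full big box with prob. `accuracy`
    ev_two_box = (1 - accuracy) * big + small    # big box full only on a mispredict
    return ev_one_box, ev_two_box

ev_one, ev_two = newcomb_expected_values(0.99)  # roughly 990000 vs 11000
```

Whether the box contents count as a “consequence” of the choice decides which of these two numbers you think is the relevant one.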
I think most people actually mean consequence when they say the word. The difference between someone who practices TDT and someone who practices CDT is more than a bunch of semantics. The paragraph describes CDT.
Beware of blaming semantics when you should update one of your core beliefs instead.
Who here actually knows exactly what TDT is? (I am not sure I do—it was never written down fully—and I thought about these issues a lot). Are you just assuming people got TDT right? TDT might be “conceptual vaporware”. I read an old paper on it, but I didn’t like the paper (nor did that paper have a full description).
I think the wiki does contain a written down definition:
Timeless decision theory (TDT) is a decision theory, developed by Eliezer Yudkowsky which, in slogan form, says that agents should decide as if they are determining the output of the abstract computation that they implement. This theory was developed in response to the view that rationality should be about winning (that is, about agents achieving their desired ends) rather than about behaving in a manner that we would intuitively label as rational.
I think what Sophronius describes in the paragraph is what’s “intuitively labeled as rational”.
I think that’s sort of the problem with the post. It’s a list of 10 things that intuitively feel like the things rational people should do.
It’s not a list that tries to describe the reasoned principles of rationality that LessWrong came up with. TDT is sort of the LW house decision theory; it’s about moving beyond the intuitive idea of rationality that’s popular out there. LW rationality, on the other hand, is supposed to be about winning.
I think reacting when fear comes up is a good example. A nurse should follow the algorithm that if she feels a given patient is in a critical condition, the patient gets extra supervision.
The intuitive rational belief is that the nurse should have good reasons, ones she can explain to other people, for why a patient needs supervision: that there should be reasons besides the nurse’s emotions to give the patient extra supervision.
Yet we have studies that validate the abstract heuristic that the nurse should let her feelings overrule her intellectual analysis of the situation.
If you read the original paper from two decades ago that introduced the concept of evidence-based medicine, you find that it’s about getting medical professionals to read more scientific papers and to deemphasize intuitive decision making.
We learned something in those two decades. We decided that rationality should be about winning. We don’t know everything, but we can at least make an effort to be less wrong. We know that specific choices are better made by intuition, so it would be stupid not to go the winning way and instead to try to analyse the situation intellectually. Of course the nurse should still learn medical science, but she should also listen to her intuition.
We are in the 21st century, not the 20th anymore. Late-20th-century ideology is outdated, and it’s useful to update, to get less wrong.
Is TDT the best way to think about making decisions? It’s still in its infancy and there is still room to refine it. Let’s run CFAR workshops to see which heuristics are actually practical when you teach them to humans.
I think the wiki does contain a written down definition:
I am sorry, but that is not specified at all. If I give you a specific problem (I have a list of them right here!), will you be able to tell me what “the TDT answer” should be? The way people seem to use TDT is as a kind of “brand name” for a nebulous cloud of decision theoretic ideas. Until there is a paper and a definition, TDT is not a defensible point. It has to be formally written down in order to have a chance to be wrong (being wrong is how we make progress after all).
If it’s a set of related decision theories, fine—tell me what the set is! Example: “naive EDT” is “choose an action that maximizes utility with respect to the distribution p(outcome | action took place).” This is very clear, I know exactly what this is.
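That clarity is easy to check: the quoted definition of naive EDT pins down a complete algorithm. A sketch, with toy numbers invented for illustration:

```python
def naive_edt_choice(actions, p_outcome_given_action, utility):
    """Naive EDT exactly as stated: pick the action a maximizing
    the sum over outcomes o of p(o | a) * u(o)."""
    def expected_utility(a):
        return sum(p * utility[o] for o, p in p_outcome_given_action[a].items())
    return max(actions, key=expected_utility)

# Toy decision problem, just to show the definition leaves nothing
# unspecified once the distribution and utilities are given:
p = {"study": {"pass": 0.9, "fail": 0.1},
     "slack": {"pass": 0.3, "fail": 0.7}}
u = {"pass": 100, "fail": 0}
best = naive_edt_choice(["study", "slack"], p, u)  # "study" (EU 90 vs 30)
```

A comparably explicit statement of TDT would have to say, at minimum, how “the abstract computation that the agent implements” is identified and what it means to determine its output.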
There are a bunch of folk rationality beliefs.