So do you want to define “rationality” as a kind of reasoning? Reasoning is an opaque mental process and, for example, does not include acting, which is a large part of instrumental rationality.
When I use the word reasoning, I really mean both the system 1 and system 2 cognitive processes. By rational I basically mean reasoning (system 1 and 2) done well, where “done well” is defined by your most trusted source. For us this is science, so logic, probability, decision theory, etc. for system 2.
Hold on, that’s new. Are you claiming that (proper) values are a part of rationality and that rationality will tell you what your values should be? I think I am going to loudly object to that. Maybe you can provide an example to show what you mean?
I don’t know what “proper” would mean. I am talking about coherence, which means that its “properness”, I suppose, depends on its context, i.e. the other pre-existing values. I will give you some examples. I will assume that you already know the difference between wanting and liking.
Excessive Wanting—an example is drug addiction: “Only ‘wanting’ systems sensitize, and so ‘wanting’ can increase and become quite intense due to sensitization, regardless of whether a drug still remains ‘liked’ after many repeated uses”.
Not liking things that you should or could—examples are bad experiences that cause aversion conditioning to something that you used to like or could like. My general view is that if you don’t like something that you could like, then this is a limitation.
Not wanting things you like—ugh fields are an example of this.
Conflicting wants—this is often inevitable; like you say, value is complex. But I think it is important to look at what the fundamental human values or needs are and try to align with those. If you don’t, then in general there is going to be a greater amount of conflict.
I would need to write a full post on the details, but that is just a general idea of what I mean. You also consider the values of others that you are interconnected with and care about.
Hm, that’s an interesting approach. Then you’d consider rationality a kind of skill—a skill like writing essays or programming? This is probably worth exploring further.
I don’t see how you can view it as anything but a skill. This is because epistemic rationality, for example, is only valuable instrumentally. It helps you make more rational decisions, but the truer beliefs it produces need to be applied to actually be useful and improve your rationality. If you spend lots of effort creating true beliefs and then compartmentalize that knowledge and don’t apply it, you have effectively gained nothing in terms of rationality. That’s my view anyway. I don’t know how many people would agree. An example is Aumann: he knows a lot about rationality, but I don’t think he is rational because it looks to me like he believes in non-overlapping magisteria.
So, yes, it’s complicated. I have issues with listening to “It’s not rational to value/desire this”, but I have far fewer issues with “The price for this action that you want to do is really high, are you quite sure you want to pay it, that doesn’t look rational”. I am not sure where the proper boundary is.
I agree with you on this and your other points on how value is complex. I think that to say “it is rational to value/desire this” there needs to be a ‘because’ after that statement. No value/desire is rational or irrational in and of itself. It is only rational or irrational in a context, that is, because of its relation to other values or the costs to fulfil it, etc.
Right now, I am thinking that I need to make the base concepts of rationality more solid before I can move into what rationality is for this compendium.
This is my first attempt at defining things. My goal is to define things in a programmatic kind of way. This means that the concepts should follow single responsibility, loose coupling, YAGNI, etc. Let me know what you think.
The goal of the definitions is just to highlight the right areas in concept space. They are drafts and will obviously need more detail. I would also need to submit them as posts and see if others agree.
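Just to illustrate the style I am aiming for, here is a throwaway sketch in Python; the class, the fields and the example concepts are placeholders I made up, not actual definitions:

```python
from dataclasses import dataclass, field

# Each concept has a single responsibility: it names the area of concept space
# it highlights and nothing else. Concepts refer to each other only by name
# (loose coupling), and fields are only added once they are needed (YAGNI).

@dataclass
class Concept:
    name: str
    covers: str                                   # the area it highlights
    related: list = field(default_factory=list)   # names only, no shared internals

epistemic = Concept("epistemic rationality", "forming true beliefs")
applied = Concept("applied rationality",
                  "acting on what you already believe is best",
                  related=["epistemic rationality"])
```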
I am thinking that there should be two basic areas: system 1 and system 2 rationality, where rationality, in its most basic form, means done well (this will need to be expanded upon). The goal of the two areas is to define what it is we are referring to when we say that something is rational or irrational. There are two areas so that we can distinguish rationality/irrationality in formal reasoning vs. your intuitions, or what you actually do vs. what you think you should do.
There are also skills or general topics which describe groups of techniques and methods that can be used to improve your rationality in one or both of the two areas. Using these skills means that you apply them with volitional effort. It is noted, however, that if you use these skills often enough they are likely to become embedded in your system 1 processes.
There may be more skills, but I think the main ones are below:
Epistemic rationality—true beliefs and all that
Instrumental rationality—(restricted to reasonable costs)
Value coherence rationality—I gave some examples, but it basically means noticing when your values and desires are out of alignment or could become so if you did some action.
Distributive rationality—this is basically what you are talking about in the above quote. Once you have a semi-sufficient valuation system in place, how can you actually distribute resources so that you achieve what you value?
Perspectival rationality—no matter how great you are at being rational, you are limited by the ideas that you can come up with. You are limited by your paradigms and perspectives. Perspectival rationality is about knowing when to look at a situation from multiple perspectives and having the ability to model the territory or map accurately from another perspective. By modelling the map from another perspective, I mean thinking about what the maps of someone else, or of yourself in the future or past, would be like for a given situation. By modelling the territory, I mean thinking about what the territory will be like if some situation occurs. An important part of perspectival rationality is being able to coalesce the information from multiple perspectives into a coherent whole. The aim of perspectival rationality is greater novelty in your ideas, broader utility in solutions and more pragmatic results. It also includes understanding the necessarily flawed and limited nature of your perspective. You need to constantly be seeking feedback and other perspectives. It would relate to complexity theory, agile software development, system dynamics, Boydian thinking and mental models/schemas/scripts (whatever you want to call them). I plan to write some posts around this idea.
Communicative rationality—how you can communicate well. I will need to look into this one, but I think it’s important.
Applied rationality—this relates to when you already know what the best thing to do is, and is about how you can get yourself to actually do it. Examples are training willpower or courage (doing something you don’t want to do, but believe you should) and dealing with ugh fields.
By rational I basically mean reasoning (system 1 and 2) done well, where “done well” is defined by your most trusted source.
I am not sure I understand—is “most trusted source” subjective? What if Jesus is my most trusted source? And He is for a great many people.
I am talking about coherence, which means that its “properness”, I suppose, depends on its context, i.e. the other pre-existing values.
Do you think it could be reformulated in the framework where values form tree-like networks with some values being “deep” or “primary” and other values being “shallow” or “derived” or “secondary”? Then you might be able to argue that a conflict between a deep and a shallow value should be resolved by declaring the shallow value not rational.
I don’t see how you can view it as anything but a skill
I meant this more specifically in the context of looking for a definition.
One very common way of making a definition is to point to a well-known class, say, Bet, and then define a sub-class beta by listing a set of features {X} which allow you to decide whether a particular object b from the super-class Bet belongs to the sub-class beta or not. Such definitions are sometimes called is-a-kind-of definitions: beta is a kind of Bet.
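To make the pattern concrete, a toy sketch; Bet, beta and the features {X} are of course invented purely for illustration:

```python
# An is-a-kind-of definition: start from a known super-class (Bet) and list the
# features {X} that decide whether a particular object b belongs to the
# sub-class beta.

class Bet:
    def __init__(self, stake, odds, voluntary):
        self.stake = stake
        self.odds = odds
        self.voluntary = voluntary

def is_beta(b: Bet) -> bool:
    """beta is a kind of Bet: a Bet that has all of the features {X}."""
    features = [
        b.voluntary,         # X1: entered into freely
        b.stake > 0,         # X2: something is actually at risk
        b.odds is not None,  # X3: the odds are explicit
    ]
    return all(features)

print(is_beta(Bet(stake=10, odds=2.0, voluntary=True)))   # True
print(is_beta(Bet(stake=0, odds=None, voluntary=False)))  # False
```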
So if we were to try to give an is-a-kind-of definition of rationality, what is the super-class? Is it reasoning? Is it skills? Something else?
No value/desire is rational or irrational in and of itself. It is only rational or irrational in a context, that is, because of its relation to other values or the costs to fulfil it, etc.
So how to avoid being caught in a loop: values depend on values which depend on values that depend on values...?
This means that the concepts should follow single responsibility, loose coupling, YAGNI, etc.
Not sure about YAGNI, since it is not the case that you can always go back to a core definition and easily update it for your new needs. If there’s already a structure built on top of that core definition, changing it might prove to be quite troublesome. Loose coupling and such—sure, if you can pull it off :-) Software architecture is… much less constrained by reality :-)
two basic areas: system 1 and system 2 rationality
What do you mean by system 1 rationality? Intuitions that work particularly well? Successful hunches?
I think the main ones are below
That’s a very wide reach. Are you sure you’re not using “rationality” just as a synonym for “doing something really well”?
That’s a very wide reach. Are you sure you’re not using “rationality” just as a synonym for “doing something really well”?
I mean doing well in the areas I talked about before. In summary, I basically mean doing well at coming up with solutions to a problem, and at choosing, and being able to go through with, the best solution out of all of the solutions you have come up with.
I will try to define it again.
First off, there is comprehensive rationality or normative rationality. This does not consider agent limitations. It can be thought of as having two types:
Prescient—outcomes are known and fixed. The decision maker chooses the outcomes with the highest utilities (discounted by costs).
Non-prescient—like the prescient model, but it integrates risk and uncertainty by associating a probability distribution with the outcomes, where the probabilities are estimated by the decision maker.
In both cases, choices among competing goals are handled by something like indifference curves.
We could say that under the comprehensive rational model a rational agent is one that maximizes its expected utility, given its current knowledge.
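As a toy illustration of that last sentence, here is a minimal sketch; the actions, probabilities, utilities and costs are made-up numbers:

```python
# Comprehensive (normative) rationality as expected-utility maximisation.
# Each action leads to outcomes with known probabilities and utilities,
# discounted by costs; the rational agent picks the action with the highest
# expected utility. In the prescient case every action has a single outcome
# with probability 1.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility, cost) tuples."""
    return sum(p * (u - c) for p, u, c in outcomes)

def choose(actions):
    """actions: dict mapping an action name to its list of outcomes."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical numbers:
actions = {
    "study": [(0.7, 10, 2), (0.3, 0, 2)],   # expected utility 5.0
    "rest":  [(1.0, 3, 0)],                 # expected utility 3.0
}
print(choose(actions))  # "study"
```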
When we talk about rationality, though, we normally mean in regards to humans. This means that we are talking about bounded rationality. Like comprehensive rationality, bounded rationality assumes that agents are goal-oriented, but bounded rationality also takes into account the cognitive limitations of decision makers in attempting to achieve those goals.
Bounded rationality deals with agents that are limited in many ways, which include being:
Unable to determine all outcomes. Organisms with cognitive limitations have a need to satisfice and an inability to consider long sequential outcomes that are inextricably tied. There is also a tendency to focus on a specific subset of the overall goals or outcomes due to priming/framing.
Unable to determine all of the pertinent information.
Unable to determine all of the possible inferences.
The big difference between bounded rationality and normative rationality is that in bounded rationality you also count the agent improving its ability to choose or come up with the best outcomes as rational, as long as there are no costs or missed opportunities involved. Therefore, a rational agent, in the bounded sense, is one that has the three characteristics below (roughly sketched in code after the list):
It has a honed ability to return decent sets of outcomes from its searches for outcomes
The expected utility it assigns to outcomes accurately matches the actual expected utility
It chooses the best outcome returned by its searches for outcomes. The best outcome is the one with the highest expected utility (discounted by costs)
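Here is the rough sketch I mentioned above; it is purely illustrative, and the search function and noise model are stand-ins I invented for whatever limited process a real agent uses:

```python
import random

# Characteristic 1: a limited search that returns a decent but small set of
# candidate outcomes. Characteristic 2: the agent's utility estimates should
# track the true values. Characteristic 3: it picks the best candidate it
# actually found, not the best candidate that exists.

def limited_search(generate_candidate, budget=10):
    """Return only as many candidates as the search budget allows."""
    return [generate_candidate() for _ in range(budget)]

def choose_boundedly(generate_candidate, estimate_utility, budget=10):
    candidates = limited_search(generate_candidate, budget)
    return max(candidates, key=estimate_utility)

# Invented example: plans have a hidden true value and the agent's estimate is
# noisy, so how rational the agent is depends on the quality of the search and
# on how well estimate_utility matches true_value.
def generate_candidate():
    return {"true_value": random.gauss(0, 1)}

def estimate_utility(plan):
    return plan["true_value"] + random.gauss(0, 0.5)

best = choose_boundedly(generate_candidate, estimate_utility)
print(best)
```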
Do you think it could be reformulated in the framework where values form tree-like networks with some values being “deep” or “primary” and other values being “shallow” or “derived” or “secondary”? Then you might be able to argue that a conflict between a deep and a shallow value should be resolved by declaring the shallow value not rational.
I think that once a value is in, it is in and works just like all the others in terms of its impact on valuation. However, a distinction like the one you talked about makes sense. But I would not use ‘deep’ and ‘shallow’, because I have no idea how to determine that. Perhaps ‘changeable’ vs. ‘non-changeable’ would be better. Then you can look at some conflicting values, i.e. ones that lead you to want opposite things, and ask if any of them are changeable and what the impact is from changing them. The values that relate to what you actually need are non-changeable, or at least would cause languishing if you tried to repress them. I think the problem with the tree view is that values are complex; like you were talking about before, one value may conflict with multiple other values.
So how to avoid being caught in a loop: values depend on values which depend on values that depend on values...?
I don’t see the loop. This is because there is no ‘value’ on its own; there is only coherence, which is just how much a value conflicts with the other values. I don’t know how to describe this without an eidetic example. Please let me know if this doesn’t work. Imagine one of those old-style screensavers where you have a ball moving across the screen and when it hits the side of the screen it bounces in a random direction. Now, when you have a single ball it can go in any direction at all. There is no concept of coherence because there is only one ball. It is when you introduce another ball that the direction starts to matter, as there is now the factor of coherence between the balls. By coherence I mean simply that you don’t want the balls to hit each other. This restricts their movement, and it now becomes optimal for them to move in some kind of pattern, with vertical or horizontal lines being the simplest.
What this means for values is that you want them to basically be directed towards the same or similar targets, or at least targets that are not conflicting. A potential indicator of an irrational value is one that conflicts with other values. Of course, human values are not coherent, but incoherence is still an indicator of potential irrationality.
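A toy version of the ball picture, just to show what an incoherence score could look like; the example values and their ‘directions’ are entirely made up:

```python
import math

# Treat each value as the direction it pushes you in (like the balls' headings).
# Pairwise conflict is 0 when two unit directions are aligned and 1 when they
# are opposite; a value that conflicts with many others is a candidate for
# being irrational in this context.

def conflict(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return (1 - dot) / 2  # dot is in [-1, 1] for unit vectors

values = {  # invented value "directions", already normalised to length 1
    "health":  (1.0, 0.0),
    "fitness": (0.9, math.sqrt(1 - 0.81)),
    "smoking": (-1.0, 0.0),
}

def incoherence(name):
    others = [v for k, v in values.items() if k != name]
    return sum(conflict(values[name], v) for v in others) / len(others)

for name in values:
    print(name, round(incoherence(name), 2))
# "smoking" scores highest because it conflicts with both other values.
```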
Unrelated to the above examples, you would also need to think about whether the target of the value is actually valuable and worth the costs you have to pay to achieve it. This is harder to find out, but you can look at the fundamental human needs. Maybe your deep vs. shallow distinction would be useful in this context.
I am not sure I understand—is “most trusted source” subjective? What if Jesus is my most trusted source? And He is for a great many people.
I don’t think I am conveying this point well. I am trying to say that we only have an incomplete answer as to what is rational and that science provides the best answer we have.
One very common way of making a definition is to point to a well-known class,
I think instead of that type of definition I would rather say that rationality means doing well in the areas of X, Y and Z, and then have a list of skills or domains that improve your ability in the areas of rationality.
Do you think that there are many types of rationality? I think that there are many types of methods to achieve rationality, but I don’t think there are many types of rationality.
So if we were to try to give an is-a-kind-of definition of rationality, what is the super-class? Is it reasoning? Is it skills? Something else?
I would say reasoning or maybe problem solving and outcome generation/choosing better convey the idea of it.
I have a feeling we’re starting to go in circles. But it was an interesting conversation and I hope it was useful to you :-)
Sorry if I was meandering and repeating my points. I wasn’t viewing this as an argument, so I don’t view it as going in circles, but as going through a series of drafts. Maybe I will need to be more careful in the future.
I appreciate your feedback.
In regards to what we talked about, I am not really that happy with how rationality is defined in the literature, but I am also not sure of what a better way to define it would be. I guess I will have to look into the bounded types of rationality.
No, that’s perfectly fine, I wasn’t treating it as an argument, either. It’s just that you are spending a lot of time thinking about it, and I’m spending less time, so, having made some points, I really don’t have much more to contribute and I don’t want to fisk your thinking notes. No need to be careful in drafts, that’s not what they are for :-)