> [I]t is at times essential to have meaningful action that is not intentionally driven.
If by “not intentionally driven” you mean things like instincts and intuitions, I agree strongly. For one thing, the cerebral approach is way too slow for circumstances that require immediate reactions. There is also an aesthetic component to consider; I kind of enjoy being surprised and shocked from time to time.
Looking at a situation from the outside, how do you determine whether intentional or automatic action is best? From another angle, if you could tweak your brain to make certain sorts of situations trigger certain automatic reactions that otherwise wouldn’t, or vice versa, what (if anything) would you pick?
These evaluations themselves are part of yet another tool.
> If by “not intentionally driven” you mean things like instincts and intuitions, I agree strongly.
Yes, exactly.
> if you could tweak your brain to make certain sorts of situations trigger certain automatic reactions that otherwise wouldn’t, or vice versa, what (if anything) would you pick?
I think both intentional and unintentional action are required at different times. I have tried to devise a method of regulation, but so far the best I have come up with is moderating against extremes on either end. So if it seems like I have been overly intentional in recent days, weeks, etc., I try to rely more on instinct and intuition. It is rarely the case that I am relying too heavily on the latter ^_^
> So if it seems like I have been overly intentional in recent days, weeks, etc., I try to rely more on instinct and intuition.
Right, this is a good idea! You might also consider deciding which sorts of situations are best handled by intuition and which by intentional thought, rather than aiming only to keep their balance even (though the latter approximates the former to the degree that both kinds of situation come up equally often).
Overall, what I’ve been getting at is this: Value systems in general share this property: you look at a bunch of different possible outcomes and decide which ones are best, which ones you want to aim for. For technical reasons, it is always possible (and also usually helpful) to describe this as a single function or algorithm, typically around here called one’s “utility function” or “terminal values”. This is true even though the human brain actually physically implements a person’s values as multiple modules operating at the same time rather than a single central dispatch.
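To make that concrete, here is a minimal sketch in Python. It is not a model of anyone’s actual psychology, and all the module names, scores, and weights are invented for illustration; the point is only that however many value “modules” are scoring outcomes, their combined verdict can always be written down as one function.

```python
# A minimal sketch: two value "modules" each score an outcome, and one
# top-level function aggregates their scores. All names, scores, and
# weights here are hypothetical.

def excitement(outcome: dict) -> float:
    return outcome["novelty"]

def security(outcome: dict) -> float:
    return outcome["safety"]

WEIGHTS = {"excitement": 0.4, "security": 0.6}

def utility(outcome: dict) -> float:
    """One function standing in for the whole value system."""
    return (WEIGHTS["excitement"] * excitement(outcome)
            + WEIGHTS["security"] * security(outcome))

outcomes = [
    {"name": "quit the job", "novelty": 0.9, "safety": 0.2},
    {"name": "keep the job", "novelty": 0.1, "safety": 0.9},
]
print(max(outcomes, key=utility)["name"])  # "keep the job" at these weights
```

The aggregation needn’t be a weighted sum; whatever rule finally settles conflicts between the modules can itself be written as one function, which is all the claim amounts to.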
In your article, you seemed to be arguing specifically that one shouldn’t have a single “final decision” function at the top of the meta stack. That’s not going to be an easily accepted argument around here, for the reasons I stated above.
> In your article, you seemed to be arguing specifically that one shouldn’t have a single “final decision” function at the top of the meta stack. That’s not going to be an easily accepted argument around here, for the reasons I stated above.
Yeah, this is exactly what I am arguing.
> For technical reasons, it is always possible (and also usually helpful) to describe this as a single function or algorithm, typically around here called one’s “utility function” or “terminal values”.
Could you explain the technical reasons more, or point me to some essays where I could read about this? I am still not convinced that it is more beneficial to have a single operating system.
I’m no technical expert, but: if I want X, and I also want Y, and I also want Z, and I also want W, and I also want A1 through A22, it seems pretty clear to me that I can express those wants as “I want X and Y and Z and W and A1 through A22.” Talking about whether I have one goal or 26 goals therefore seems like a distraction.
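In code terms (a hypothetical sketch, not anything from the thread), the point is just that a list of separate wants can always be wrapped in a single goal that is their conjunction:

```python
# Hypothetical sketch: any number of separate wants, packaged as one goal.
wants = [
    lambda state: state["X"],
    lambda state: state["Y"],
    lambda state: state["Z"],
]

def one_goal(state: dict) -> bool:
    """A single goal equivalent to wanting every item in `wants`."""
    return all(want(state) for want in wants)

print(one_goal({"X": True, "Y": True, "Z": False}))  # False
```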
As for why it’s possible, I’ll just echo what TheOtherDave said.
The reason it’s helpful to try for a single top-level utility function is that otherwise, whenever there’s a conflict among the many, many things we value, we’d have no good way to resolve it consistently. If one aspect of your mind wants excitement and another wants security, what should you do when you have to choose between the two?
Is quitting your job a good idea or not? Is going rock climbing instead of staying at home reading this weekend a good idea or not? Different parts of your mind will have different opinions on these subjects. Without a final arbiter to weigh their suggestions and consider how important excitement and security are relative to each other, how do you decide in a non-arbitrary way?
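One way to see the problem (a toy sketch with invented rankings, not a claim about real minds): if each pairwise choice were settled by a majority vote among value modules rather than by one top-level function, your preferences could cycle, leaving no stable answer at all.

```python
# Invented rankings illustrating the consistency problem: three value
# modules each rank three weekend options (higher = better).
modules = {
    "excitement": {"climb": 3, "read": 2, "work": 1},
    "security":   {"climb": 1, "read": 3, "work": 2},
    "ambition":   {"climb": 2, "read": 1, "work": 3},
}

def prefers(a: str, b: str) -> bool:
    """True if a majority of modules rank option a above option b."""
    return sum(r[a] > r[b] for r in modules.values()) > len(modules) / 2

print(prefers("climb", "read"))  # True
print(prefers("read", "work"))   # True
print(prefers("work", "climb"))  # True -- a cycle: no option is stably best
```

A single real-valued utility function can’t do this: assigning one number per option makes the resulting preferences transitive by construction.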
So I guess it comes down to: how important is it to you that your values are self-consistent?
More discussion (and a lot of controversy on whether the whole notion actually is a good idea) here.
> Without a final arbiter to weigh their suggestions and consider how important excitement and security are relative to each other, how do you decide in a non-arbitrary way?
Well, there’s always the approach of letting all of me influence my actions and seeing what I do.
Thanks for the link. I’ll respond once I’ve had a chance to read it.