In the process of trying to pin down my terminal values, I’ve discovered at least three subagents of myself, each with different desires, as well as my conscious one, which doesn’t have terminal values of its own and just listens to theirs and calculates the relevant instrumental values. Does LW have a way for the conscious me to weight those (sometimes contradictory) desires?
What I’m currently using is “the one who yells the loudest wins”, but that doesn’t seem entirely satisfactory.
My current approach is to make the subagents more distinct/dissociated, then identify with one of them and try to destroy the rest. It’s working well, according to the dominant subagent.
My other subagents consider that such an appalling outcome that my processor agent refuses to even consider the possibility...
Though given this, it seems likely that I do have some degree of built-in weighting; I just don’t realise what it is yet. That’s quite reassuring.
Edit: More clarification in case my situation is different from yours: my three main subagents have aims so different that each of them evokes a “paper-clipper” sense of confusion in the others. Also, a likely reason why I refuse to consider it is that all of them are hard-wired into my emotions, and my emotions are one of the inputs my processing takes. This doesn’t bode well for my current weighting being consistent (and Dutch-book-proof).
What does your processor agent want?
I’m not entirely sure. What questions could I ask myself to figure this out? (I suspect figuring this out is equivalent to answering my original question)
What choices does your processor agent tend to make? Under what circumstances does it favor particular sub-agents?
“Whichever subagent currently talks in the ‘loudest’ voice in my head” seems to be the only way I could describe it. However, “volume” doesn’t lend itself to a consistent weighting because it varies, and I’m pretty sure it varies with hormone levels amongst many other things, making me easily Dutch-bookable based on e.g. time of month.
My understanding is that this is what Internal Family Systems is for.
So I started reading this, but it seems a bit excessively presumptuous about what the different parts of me are like. It’s really not that complicated: I just have multiple terminal values which don’t come with a natural weighting, and I find balancing them against each other hard.
Could you briefly describe the “subagents” and their personalities/goals?
A non-exhaustive list of them in very approximate descending order of average loudness:
Offspring (optimising for their existence, health, and status. This is my most motivating goal right now, and most of my actions go towards optimising for it, more or less directly.)
Learning interesting things
Sex (and related brain chemistry feelings)
Love (and related brain chemistry feelings)
Empathy and care for other humans
Prestige and status
Epistemic rationality
Material comfort
I notice the problem mainly because the loudness of “Offspring” varies with hormone levels, whereas that of “Learning interesting things” doesn’t. In particular, when I’m optimising almost entirely for offspring, cryonics is a waste of time and money, but on days where “learning interesting things” gets up there, it isn’t.
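To make the Dutch-book worry concrete, here’s a minimal sketch of the money pump. All the numbers, weights, and the two-day cycle are hypothetical illustrations, not claims about anyone’s actual psychology: an agent whose subagent weights drift over time prices the same cryonics contract differently on different days, so a trader can sell it to the agent dear and buy it back cheap, round after round.

```python
# Minimal sketch (hypothetical numbers) of how time-varying weights on
# terminal values make an agent money-pumpable: the agent's price for the
# same contract swings with the "loudness" of each subagent, so a trader
# can sell high and buy back low in a repeating cycle.

# Hypothetical subagent weights on two different days of the cycle.
WEIGHTS = {
    "offspring_day": {"offspring": 0.9, "learning": 0.1},
    "learning_day":  {"offspring": 0.4, "learning": 0.6},
}

# Hypothetical utility each subagent assigns to holding a cryonics contract.
SUBAGENT_VALUE = {"offspring": 10.0, "learning": 200.0}

def contract_price(day: str) -> float:
    """The agent's price for the contract: the weighted sum of what each
    subagent thinks it is worth, with the weights set by the day."""
    w = WEIGHTS[day]
    return sum(w[s] * SUBAGENT_VALUE[s] for s in w)

agent_cash = 0.0
for cycle in range(3):
    # On a learning-loud day the agent buys the contract at its high price...
    agent_cash -= contract_price("learning_day")
    # ...and on an offspring-loud day sells it back at its low price.
    agent_cash += contract_price("offspring_day")
    print(f"cycle {cycle}: cumulative loss = {-agent_cash:.2f}")

# Each cycle returns the agent to the same holdings but strictly poorer:
# the hallmark of Dutch-bookable (time-inconsistent) preferences.
```

Under these made-up numbers the agent loses 95 per cycle while ending each cycle in exactly the state it started in, which is what it means for a weighting to fail the Dutch-book test.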