Yes, I was aware that the curiosity thing is not a valid reason, which is why I only qualify it as “+V”. There are other options that give a much greater +V; it is not an optimum.
Regarding your description of the Vs, I guess I’m a bit skewed in that regard. I don’t perceive happy thoughts and stress/sadness as clean-cut + and − utilities. Ceteris paribus, I find stress to be positive utility against the backdrop of “lack of anything”. I think there’s a Type 1 / Type 2 thing going on, with the “conscious” part assigning some value to what’s automatic or built-in, but I don’t remember the right vocabulary, and recreating a proper terminology from reductions would take time better spent studying the already-established conventions. Basically, I consciously value all feelings equivalently, on top of a built-in valuation of whatever my instincts / human built-in devices value, such that many small forms of pain are actually more pleasant than not feeling anything in particular, while strong pain is less pleasant than a temporary lack of feeling.
Stuck in a two-branch decision-theoretic problem between “lifelong torture” and “lifelong lack of sensation or feeling”, my current conscious mind is edging towards the former, assuming the latter means I don’t get that rush from curiosity and figuring stuff out anymore. Of course, in practice I’m not quite so sure that none of the built-in mechanisms I have in my brain would get me to choose otherwise.
Anyway, I just wanted to chip in that the utilitarian math for the “if I’m a Boltzmann brain, I want a happy thought rather than a bit of stress” case isn’t quite so clear-cut for me personally: the happy thought might not “succeed” in being produced or in being really happy, and the stress might be valued positively anyway and is probably more likely to “succeed”. This isn’t the real motivation for my choices (so it’s an excuse/rationalization if I decide based on it), but it’s an interesting bit of detail and trivia, IMO.
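To make what I mean by “not clear-cut” concrete, here’s a minimal expected-utility sketch. Every probability and value in it is a made-up placeholder, not anything measured; it only shows how a small-but-reliable positive can beat a large-but-unreliable one:

```python
# Toy expected-utility comparison for the Boltzmann-brain case.
# All numbers are made-up placeholders for illustration only.

def expected_value(p_success, value_if_success, value_if_fail=0.0):
    """Expected utility of trying to produce a feeling that may not 'take'."""
    return p_success * value_if_success + (1 - p_success) * value_if_fail

# A happy thought: worth more if it actually gets produced, but less likely to succeed.
happy = expected_value(p_success=0.3, value_if_success=1.0)

# A bit of stress: worth less, but (for me) still positive, and more likely to succeed.
stress = expected_value(p_success=0.8, value_if_success=0.4)

print(f"happy thought: {happy:.2f}, stress: {stress:.2f}")
# With these placeholders the stress option edges ahead (0.32 vs 0.30),
# which is all I'm claiming: the comparison isn't automatically one-sided.
```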
Interesting. Again, substitute a new example that does have the desired properties.
Well, if I have evidence that I’m a special kind of telekinetic who can only move stuff with his mind while not physically moving (i.e. not sending signals to my own muscles), rather than a Boltzmann brain, then unless I’m missing something I really do prefer staying immobile and saving the child with my thoughts instead of jumping in and wasting a lot of energy (assuming there are no long-term consequences, like other people seeing me save a child with my mind). But I’d still jump in anyway, because my mental machinery overrides the far-mode knowledge that I can almost certainly do it without moving.
It would take a lot of actual training to overcome this and start actually using the telekinesis. I think in such a situation an ideal rationalist would use telekinesis instead of jumping in the water, not to mention the practical advantages of saving the child faster and more safely (also with no risk to yourself!), assuming you have that level of control over your telekinetic powers.
That’s a good one. I lean towards jumping in as well, but you are right that the ideal says “use the Force”.
Doesn’t fit the Pascal’s Wager pattern, though...
EDIT: it seems a reliable hypothesis that intuition will go with whatever is best in the near-mode case, never mind this “probability” and “utility” stuff.
Well, to make it fit the Pascal’s Wager pattern a bit more, assume you’re aware that telekinetics like you sometimes have a finite, very small amount of physical energy to spend over their entire lives, and that once you’re out of it you die. You have unlimited “telekinetic energy”. Saving the child by jumping in is, if this is true, going to chop off a good 95% of your remaining lifespan and permanently sacrifice any possibility of becoming immortal.
Or is that the wrong way around? Hmm.
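Either way, the shape of the trade-off is roughly the following; every number below is a placeholder assumption, only there to show where the Pascal’s Wager feel comes from:

```python
# Toy version of the telekinetic variant. All numbers are placeholder assumptions.

p_finite_energy = 1e-6        # assumed tiny probability the finite-physical-energy story is true
remaining_life_value = 1e4    # assumed value of the rest of a normal lifespan
immortality_value = 1e12      # assumed (enormous) value of keeping the shot at immortality

# If the story is true, jumping in costs ~95% of the remaining lifespan
# plus the entire possibility of immortality.
cost_if_true = 0.95 * remaining_life_value + immortality_value

expected_cost_jump = p_finite_energy * cost_if_true   # ~1e6 with these placeholders
expected_cost_telekinesis = 0.0                       # telekinetic energy is unlimited by stipulation

print(expected_cost_jump, expected_cost_telekinesis)
# As in Pascal's Wager, the verdict is driven almost entirely by how large you let
# immortality_value grow relative to how small p_finite_energy is.
```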