I don’t think I’m misunderstanding, since while we used different notation to describe this:
S * P[me][j] + E * sum(P[i][j] for i in minds)
1 Me * selfish multiplier + 0.5 Me + 0.3 Me + … + 0 Me + 0 Me + 0 Me
We both described your preferences the same way, though I neglected to explicitly normalize mine. To demonstrate, I’m going to change the notation of my formulation to match yours:
1 Me * selfish multiplier + 0.5 Me + 0.3 Me + … + 0 Me + 0 Me + 0 Me
becomes
P[me][j] * S + P[1][j] + P[2][j] + … + P[i-2][j] + P[i-1][j] + P[i][j]
which is just your
S * P[me][j] + E * sum(P[i][j] for i in minds)
with E = 1.
My notation may have been misleading in this regard: 0.5 Me isn’t 0.5 times Me, it is just the mark I’d use for a mind that is … well, 0.5 Me. In your model the “me content” doesn’t matter when tallying minds (except when it hits 1, in your own mind), so there is no need to mark it. But the reason I still used the fraction-of-me notation to describe certain minds was to give an intuition of what your described algorithm and my described algorithm would do with the same data set.
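For readers who prefer code, here is a minimal sketch of how I understand your rule. The function name and the dict-of-dicts shape of P are just my own stand-ins, and I’m reading the sum as running over the minds other than me so the self term isn’t counted twice (your “for i in minds” could also be read as including me).

```python
# Rough sketch of syllogism's rule: S * P[me][j] + E * sum(P[i][j] for i in minds).
# P is a dict of dicts: mind -> option -> preference strength (my framing, not his).
def score_syllogism(P, me, j, S, E=1.0):
    others = sum(prefs[j] for mind, prefs in P.items() if mind != me)
    return S * P[me][j] + E * others
```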
Konkvistador:
1 Me * selfish multiplier + 0.5 Me * (0.5 * selfish multiplier + 0.5) + … + 0 Me + 0 Me + 0 Me
syllogism:
1 Me * selfish multiplier + 0.5 Me + 0.3 Me + … + 0 Me + 0 Me + 0 Me
So if syllogism and Konkvistador were using the same selfish multiplier (let us call it S for short, as you do), the difference between their systems would be the following:
0.5 Me * 0.5 * (S-1) + 0.3 Me * 0.3 * (S-1) + … + (a really small fraction of Me) * (a really tiny number) * (S-1)
This may be a lot, or it may not be very much; it really depends on how big it is compared to:
1 Me * S + 0 Me + 0 Me + 0 Me + … + 0 Me
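To make that concrete, here is a rough sketch of my rule in the same spirit, under the reading that a mind which is x-fraction Me gets weight 1 + x * (S - 1), i.e. x of the selfish boost. The similarity numbers are made up purely for illustration.

```python
# Rough sketch of my (Konkvistador's) rule under the reading above: a mind that is
# x-fraction Me gets weight 1 + x * (S - 1), so a 0.5 Me mind gets half the boost.
def score_konkvistador(P, sim, me, j, S):
    return sum((1 + sim[i] * (S - 1)) * P[i][j] for i in P)

# Illustrative numbers only:
P   = {"me": {"j": 1.0}, "halfme": {"j": 0.8}, "alien": {"j": 0.3}}
sim = {"me": 1.0, "halfme": 0.5, "alien": 0.0}
S = 3

mine  = score_konkvistador(P, sim, "me", "j", S)                    # 3.0 + 1.6 + 0.3 = 4.9
yours = S * P["me"]["j"] + sum(P[i]["j"] for i in P if i != "me")   # 3.0 + 1.1 = 4.1
# The gap, 0.8, is exactly sim["halfme"] * (S - 1) * P["halfme"]["j"];
# if every sim[i] other than my own were ~0, the two scores would coincide.
```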
In other words, if “Me” is very concentrated in a universe, say you drop me into a completely alien one, my algorithm wouldn’t produce an output measurably different from your algorithm. Your algorithm can also consistently give the same result as mine if your S and Me embrace an extended self-identity, rather than just your local feeling of self. Now this of course boils down to the S factor and Me being different for the same person depending on which algorithm they use (we are after all talking about how something is or isn’t implemented, rather than just having silly sloppy math for fun), but I think people really do have a different S factor when thinking of such issues.
In other words, the form S * P[me][j] doesn’t force the mind you count as “me” to necessarily have a Me value of one. To see what I mean, imagine there is a universe that you can arrange to your pleasure, and it contains P[you]; but not just any P[you], it contains P[you] minus the last two weeks of memory. Does he still deserve the S factor boost? Or at least part of it?
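Under the graded weighting I sketched above, the answer comes out as “most of it” rather than all-or-nothing (the similarity number here is of course made up):

```python
S = 3                  # the selfish multiplier, arbitrary example value
sim_near_copy = 0.95   # made-up similarity for "you minus the last two weeks"
weight = 1 + sim_near_copy * (S - 1)   # 2.9: nearly the full selfish weight of 3
```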
Readers may be wondering: if the two things can be made mathematically equivalent, why do I prefer my implementation to his (which is probably more standard among utilitarians who don’t embrace an extended self)? Why not just adopt the same model but use a different value of Me or a different S to capture your preferences? Because in practice I think mine makes the better heuristic for me:
The more similar a mind is to mine, the less harm is done by my human tendency towards anthropomorphizing (the mind projection fallacy is less of an issue when the slime monster really does want our women). In other words, I can be more sure that my estimation of their interests, goals and desires is not being skewed by subconsciously rigging “their” preferences in my favour, because their preferences are now explicitly partially determined by the algorithm in my brain that presumably really does want to find the best option for an individual (the one that runs when I say “What do I want?”). Most rationalist corrections made for a 0.5 Me mind’s terms also have to be made for the Me term, and vice versa.
I find it easier to help most people, because most people are pretty darn similar to me when compared with non-human or non-living processes. And it doesn’t feel like a grand act of selflessness, or something that changes my self-image, signals anything or burns “willpower”, but more like common sense.
It captures my intuition that I don’t just care about my own preferences plus some averaged thing, but that I care about specific people’s preferences, independent of “my own personal desires”, more than about others’. This puts me in the right frame of mind when interacting with people I care about.
Edit: Down-voted already? Ok, can someone tell me what I’m doing wrong here?