I’m going to assume the living in near-starvation and poverty thing is just an example, since that’s almost certainly not the best way to save the most lives (well-nourished humans are more capable humans), and I’ll assume your point was more along the lines of do-as-much-good-as-you-possibly-can-all-the-time.
I think you need to take into account the fact that you’re human. Just because you do something which would seem to imply some weird or evil preference doesn’t mean you need to accept that as your real preference. We are made of faulty hardware.
Faulty as compared to what? I mean, yes, if you assume our expressed preferences are what we really want, then we’re awfully (even spectacularly) bad at achieving them. If you assume that what we really want is survival, comfort, sex, food, and other things that contribute to our own genetic replication, then we’re not faulty at all. We’re actually quite good at optimizing for our actual preferences, even if we do sometimes become convinced that our preferences are something they aren’t.
EDIT: I’m not trying to bring everyone down with first-world angst. This just troubles me. I may simply have to accept that, under my own definition of the term, I’m just not a particularly good person.
What about this: Just because we have some desires, it does not automatically mean we are good at pursuing them. And vice versa: just because we are not good at doing something, it does not mean we don’t really want it.
For example, if I want to eat pizza, but I can’t cook pizza, I can’t convince anyone to cook it for me, I cannot find a pizzeria, and I am too dumb to use the internet to find any of these… does that mean I don’t really want the pizza… or does it just mean that I am bad at getting it in this environment, but in some other environment (where I have a pizza cookbook at home and a pizzeria across the street) I could be more successful?
We do not live in our natural environment. I want to help other people, but my evolutionary algorithms assume that those people live near me, that their needs are transparent to me, and that I get immediate feedback on my help. Without these conditions, my algorithms start breaking.
If you assume that what we really want is survival, comfort, sex, food, and other things that contribute to our own genetic replication, then we’re not faulty at all.
Actually, we are, since we don’t go after even those things very effectively.
Maybe what you need is not a new you, but a new definition.
This explains what I mean.