> If they knew more, saw the possibilities and understood more about the world I would be surprised if they would choose a path that diverges so greatly with your own
This is not so simple to assert. You have to consider the intensity of their belief in the words of Allah. Their fundamental worldview is so different from ours that there may be nothing humane left when we try to combine the two.
> I think that you’re conflating “evolution” with “more ancient drives”
In this specific case I was using that figure of speech, yes. I meant that we would be extrapolating drives that matter for evolution (our ancient drives) but don’t really matter to us, not in the sense of Want to Want described in 4c.
> This is not so simple to assert. You have to consider the intensity of their belief in the words of Allah. Their fundamental worldview is so different from ours that there may be nothing humane left when we try to combine the two.
CAVEAT: I’m using CEV as I understand it, not necessarily as it was intended, since I’m not sure the notion is precise enough for me to accurately parse all of the intended meaning. Bearing that in mind:
If CEV produces a plan or an AI to be implemented, I would expect it to be powerful enough that implementing it would entail changing the worldviews of many people along the way. My very basic template is Asimov’s The Evitable Conflict: the manipulations would be subtle, and we would be unlikely to read their exact outcomes at a given time X without implementing them (this is dangerous, since it means you can’t “peek ahead” at the future you cause), though we could still prove that at the end we would be left with a net gain in utility. The Asimov example is less complex, and does not seek to create the best possible future, only a fairly good, stable one, but the basic notion I am borrowing is relevant to CEV.
The drives behind the conviction of the suicide bomber are still composed of human drives, evolutionary artifacts that have met with a particular set of circumstances. The Al Qaeda example is salient today because that ideology is among the most uncontroversially damaging ideologies we can cite. However, I doubt that any ideology or belief system held by any human today is ideal. The CEV should search for ways of redirecting human thought and action; this is necessary for anything that is meant to have global causal control. The CEV does not reconcile current beliefs and ideologies; it seeks to redirect the course of human events to bring about new, more rational, more efficient, and healthier ideologies that will be compatible with one another, if this can be done.
If there exists some method for augmenting our current beliefs and ideologies to make them more rational, more coherent, and more conducive to positive change, then the CEV should find it. Such a method would allow for much more utility than the failure mode you describe, and that failure mode should only arise when every such method is intractable.
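To make that ordering explicit, here is a minimal sketch in Python (every name and the utility estimate are hypothetical stand-ins, not anything from the CEV document): the exclusion-style failure mode is accepted only when no belief-augmentation method scores better.

```python
# Hypothetical sketch of the decision rule above; `evaluate` stands in for
# whatever utility estimate a CEV-like process would actually use.

def choose_outcome(candidate_methods, fallback_outcome, evaluate):
    """Prefer any augmentation method that beats the fallback failure mode.

    The fallback (e.g. simply excluding some group's current ideology) is
    returned only when no candidate method evaluates as better than it.
    """
    best = max(candidate_methods, key=evaluate, default=None)
    if best is not None and evaluate(best) > evaluate(fallback_outcome):
        return best
    return fallback_outcome
```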
> In this specific case I was using that figure of speech, yes. I meant that we would be extrapolating drives that matter for evolution (our ancient drives) but don’t really matter to us, not in the sense of Want to Want described in 4c.
My point is that, in general, our drives are a product of evolutionary drives and are modulated only by context. If the context changes, those drives change as well, but both the old set and the new set are composed of evolutionary drives. CEV changes those higher-level drives by controlling the context in sufficiently clever ways.
CEV should probably be able to look at how an individual would develop in different contexts, compute the net utility in each one, and then maximize. The danger here is that we might be directed onto a course of events that leads to wireheading.
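A toy version of that “evaluate each context, then maximize” step might look like the sketch below; extrapolate and net_utility are placeholders for the genuinely hard parts, and the wireheading worry is precisely that a naive net_utility would rank wireheaded futures highest.

```python
# Toy sketch only: all of the real difficulty is hidden in the two placeholders.

def extrapolate(individual, context):
    """Placeholder: the person this individual would develop into in `context`."""
    return (individual, context)

def net_utility(developed_person):
    """Placeholder: how well that developed person's life goes, by their own lights."""
    return 0.0

def best_context(individual, candidate_contexts):
    """Pick the context whose extrapolated development scores highest."""
    return max(candidate_contexts,
               key=lambda ctx: net_utility(extrapolate(individual, ctx)))
```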
It occurs to me that the evolutionary failure mode here is probably something like wireheading, though it could be more general. As I see it, CEV aims to maximize total utility while minimizing net negative utility for as many individuals as possible. If some percentage of individuals prove impossible to direct towards a good future without causing massive disutility in general, we have to devise a way to look at each such case and ask what sorts of individuals are not getting a payoff. If it turns out to be a small number of sociopaths, this will probably not be a problem; I expect that we will have the technology to treat sociopaths and bring them into the fold, and CEV should consider this possibility as well. If it turns out to be a small number of very likable people, it could be somewhat more complicated, and we should ask why it is happening. I can’t think of any plausible scenarios for this at the moment, but I think it is worth thinking about further.
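Concretely, the bookkeeping I am imagining is something like the following sketch, where the utility oracle and the threshold are invented for illustration; the point is only that we should be able to enumerate who is not getting a payoff under a candidate plan and inspect that group.

```python
def not_getting_a_payoff(population, plan, utility_under_plan, floor=0.0):
    """Return the individuals whose extrapolated utility under `plan` falls below `floor`.

    `utility_under_plan(person, plan)` is a hypothetical oracle. If the group
    returned is a handful of treatable sociopaths, that is one situation; if it
    is a group of otherwise likable people, that is a signal worth investigating.
    """
    return [person for person in population
            if utility_under_plan(person, plan) < floor]
```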
The kernel of this problem is central to CEV as I understand it, so I think it is worth discussing in as much detail as we can in order to glean insight.