Only one side of the friendliness coin is the ‘mathematical proof’ (verification) part. The other side of the coin is the validation part. That is, the question of whether our design goals are really the right design goals to have. A lot of the ink spilled on topics like CEV centers on this aspect.
I, and a fairly large subset of humanity, place a lot of value in things not stagnating.
People say that but it’s usually just empty words. If progress necessitated that you be destroyed (or, at the very least, accept being unconditionally ruled over), would you prefer progress, or the status quo?
Try to imagine if humanity were bound by the ethical codes of chimpanzees, then you begin to see what I mean.
If progress necessitated that you be destroyed (or, at the very least, accept being unconditionally ruled over), would you prefer progress, or the status quo?
I’m pretty sure there is a potential continuous transform between myself and a Jupiter brain (assuming continuity of personality makes sense). Add one more brain cell or make a small alteration and I’m still myself, so by induction you could add an arbitrarily large number of brain cells, up until fundamental physical constraints kick in.
And even supposing there are beings I can never evolve into, well, the universe is quite big, so live and let live?
That is, the question of whether our design goals are really the right design goals to have. A lot of the ink spilled on topics like CEV centers on this aspect.
Well, there are aspects of CEV that worry me, but I would say it seems to be far better than an arbitrary (e.g. generated by evolutionary simulations) utility function.
Try to imagine if humanity were bound by the ethical codes of chimpanzees, then you begin to see what I mean.
Do chimps have a concept of ethics? Suppose you started raising the IQ of chimps, wouldn’t they eventually progress and probably develop a vaguely human-like civilisation?
well, the universe is quite big, so live and let live?
Our interests will eventually conflict. Look at ants. We don’t go around knocking down anthills just for the heck of it, even though we easily could. But when it comes time to construct an interstate over the anthill, those ants are toast.
Do chimps have a concept of ethics? Suppose you started raising the IQ of chimps, wouldn’t they eventually progress and probably develop a vaguely human-like civilisation?
That’s a good hypothetical. What if an AI told you that it had decided to start raising your IQ? (I, personally, would find this awesome, but I’m sure many here wouldn’t)
Our interests will eventually conflict. Look at ants. We don’t go around knocking down anthills just for the heck of it, even though we easily could. But when it comes time to construct an interstate over the anthill, those ants are toast.
To continue the metaphor, some environmentalists do protest the building of motorways, although not to much effect, and rarely for the benefit of insects. But we have no history of signing treaties with insects, nor does anyone reminisce about when they used to be an insect.
Regardless of whether posthumans would value humans, current humans do value humans, and also value continuing to value humans, so a correct implementation of CEV would not put humanity on a path where humanity would get wiped out. I think this is the sort of point at which TDT comes in, and so CEV could morph into CEV-with-constraints-added-at-initial-runtime. For instance, perhaps CEV_constrained(t) = C*CEV(0) + (1-C)*CEV(t), where CEV(t) means CEV evaluated at time t, and C is a constant fixed at t=0, determining how much values should remain unchanged.
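A minimal sketch of that weighted-anchor idea, just to make the arithmetic concrete (the function and value names here are hypothetical stand-ins, not anything from the actual CEV proposal):

```python
def constrained_cev(cev_at_launch, cev_now, c):
    """Blend the launch-time extrapolated values with the current ones.

    c is fixed once, at t=0, and never revised afterwards; it sets how
    strongly the composite utility stays anchored to the original values.
    """
    def utility(outcome):
        return c * cev_at_launch(outcome) + (1 - c) * cev_now(outcome)
    return utility

# Toy stand-in utility functions over an abstract "outcome" dict.
launch_values = lambda outcome: outcome.get("humans_flourishing", 0.0)
later_values = lambda outcome: outcome.get("computronium", 0.0)

blended = constrained_cev(launch_values, later_values, c=0.8)
# 0.8 * 1.0 + 0.2 * 0.5 = 0.9
print(blended({"humans_flourishing": 1.0, "computronium": 0.5}))
```

The point of the toy version is only that, because C is fixed at launch, no later drift in the second term can push the composite utility's weight on the original values below C.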
What if an AI told you that it had decided to start raising your IQ? (I, personally, would find this awesome, but I’m sure many here wouldn’t)
Sounds good to me! I think post-singularity it might be good to fork yourself, with some copies heading off towards superintelligence quickly and others taking the scenic route and exploring baseline human activities first.