Well, the universe is quite big, so live and let live?
Our interests will eventually conflict. Look at ants. We don’t go around knocking down anthills just for the heck of it, even though we easily could. But when it comes time to construct an interstate over the anthill, those ants are toast.
Do chimps have a concept of ethics? Suppose you started raising the IQ of chimps: wouldn’t they eventually progress, probably developing a vaguely human-like civilisation?
That’s a good hypothetical. What if an AI told you that it had decided to start raising your IQ? (I, personally, would find this awesome, but I’m sure many here wouldn’t.)
> Our interests will eventually conflict. Look at ants. We don’t go around knocking down anthills just for the heck of it, even though we easily could. But when it comes time to construct an interstate over the anthill, those ants are toast.
To continue the metaphor, some environmentalists do protest the building of motorways, though to little effect, and rarely for the benefit of insects. But we have no history of signing treaties with insects, nor does anyone reminisce about when they used to be an insect.
Regardless of whether posthumans would value humans, current humans do value humans, and also value continuing to value humans, so a correct implementation of CEV would not put humanity on a path where humanity would get wiped out. I think this is the sort of point at which TDT comes in, and so CEV could morph into CEV-with-constraints-added-at-initial-runtime. For instance, perhaps the effective value function could be CEV*(t) = C·CEV(0) + (1−C)·CEV(t), where CEV(t) means CEV evaluated at time t, and C is a constant fixed at t=0 that determines how strongly values stay anchored to their initial state.
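To make the anchoring idea concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that a value system can be summarised as a numeric vector; the function name and the toy drift model are mine, not anything from the CEV literature:

```python
import numpy as np

def anchored_cev(cev_0, cev_t, c):
    """Blend the initial value vector with the current one.

    c = 1.0 pins values to their t=0 state forever;
    c = 0.0 lets them drift freely with CEV(t).
    """
    return c * cev_0 + (1.0 - c) * cev_t

# Toy example: values drift between t=0 and t, but the anchored
# blend shrinks that drift by a factor of (1 - c).
cev_0 = np.array([1.0, 0.0, 0.5])            # hypothetical initial values
cev_t = cev_0 + np.random.normal(0, 0.2, 3)  # drifted values at time t
print(anchored_cev(cev_0, cev_t, c=0.8))
```

The design choice the constant C encodes is just a linear interpolation: whatever the extrapolation process does later, the initial values retain a fixed minimum weight.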
> What if an AI told you that it had decided to start raising your IQ? (I, personally, would find this awesome, but I’m sure many here wouldn’t.)
Sounds good to me! I think post-singularity it might be good to fork yourself, with some copies heading off towards superintelligence quickly and others taking the scenic route and exploring baseline human activities first.