I wonder if this objection to MIRI’s project has been made so far: EY recognizes that placing present-day humans in an environment reached by CEV would be immoral, right? Doesn’t this call into question the desirability of instant salvation? Perhaps what is really desirable is reaching the CEV state, but only gradually. Otherwise, we might never reach our CEV state, and we arguably do want to reach it eventually. We can still have a friendly AI, but perhaps its role should be to slowly guide us to the CEV state while making sure we don’t get into deep trouble in the meantime. E.g., we shouldn’t be maimed for life as the result of an instant’s inattention.