Apologies for coming to the discussion very, very late, but I just ran across this.
> If I saw no need for more power, e.g. because I’m already maximally happy and there’s a system to ensure sustainability, I’d happily give up everything.
How could you possibly get into this epistemic state? That is, how could you possibly be so sure of the sustainability of your maximally happy state, without any intervention from you, that you would be willing to give up all your optimization power?
(This isn’t the only reason why I personally would not choose wireheading, but other reasons have already been well discussed in this thread and I haven’t seen anyone else zero in on this particular point.)