Interesting analysis, not so much because it is particularly insightful in itself, but because it takes a deliberate step back to offer a wider view.
I have been meaning to investigate another alternative: a soft takeoff in which moral enhancement first solves the value transfer problem. This is not the only reason I decided to study it, but it does seem like an idea worth exploring.
Hopefully I will have some interesting material to post here later. I am working on a doctoral thesis proposal on this topic, and I draw on some material from LessWrong, though, for evil-academic reasons, not as often or as directly as I would like. It would be nice to get some feedback from LW.
Interesting. You should email me your doctoral thesis proposal; I’d like to talk to you about it. I’m luke@intelligence.org.
Just sent it now!
I will in about one week. Thank you for the interest.