I think I implicitly answered you elsewhere, though I’ll add a more literal response to your question here.
Speaking personally, none of this is relevant to how I think about AI risk. Yudkowsky’s interest in it seems more like a byproduct of what he read when he was young and impressionable than anything else, and that’s not reading I shared. Neither of us thinks it’s necessary for x-risk scenarios; I’m probably the more skeptical of the two, and I put more weight on practical impediments that strongly push toward doing the simple things that work, e.g. conventional biotech.
Since it isn’t a crux for me and I don’t feel the same personal draw towards discussing it, I basically don’t factor it in when modelling AI risk scenarios; I think about it when it comes up because it’s technically interesting. If someone does treat it as a crux for their AI risk scenarios and came to me for advice, I’d suggest testing that crux before trying to be more clever about de novo nanotech arguments.