Thanks for the detailed writeup. I would personally be against basically all of the suggested methods that could create a significant improvement because the hard problem of consciousness remains hard and it seems very possible that an unconscious human race could result. I was a bit surprised to see no mention of this in the essay.
I guess that falls under “value drift” in the table. But yeah, I think that’s extremely unlikely to happen without warning, except in the case of brain emulations. I do think any of these methods would be world-changing, and therefore extremely dangerous, and would demand lots of care and caution.
But yeah I think that’s extremely unlikely to happen without warning, except in the case of brain emulations.
Could you explain what sort of warnings we’d get with, for instance, the interfaces approach? I don’t see how that’s possible.
Also, this is semantics I guess, but I wouldn’t classify this under “value drift”. If there is such a thing as the hard problem of consciousness and these post-modified humans don’t have whatever that is, I wouldn’t care whether or not their behaviors and value functions resemble those of today’s humans.
Someone gets some kind of interface, and then they stop being conscious. So they act weird, and people are like, “hey, they’re acting super weird, they seem not conscious anymore, this seems bad.” https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies
Yudkowsky’s essay explains why he believes there is no hard problem of consciousness.