But yeah I think that’s extremely unlikely to happen without warning, except in the case of brain emulations.
Could you explain what sort of warnings we’d get with, for instance, the interfaces approach? I don’t see how that’s possible.
Also, this is semantics I guess, but I wouldn’t classify this under “value drift”. If there is such a thing as the hard problem of consciousness and these post-modified humans don’t have whatever that is, I wouldn’t care whether or not their behaviors and value functions resemble those of today’s humans.
Someone gets some kind of interface, and then they stop being conscious. So they act weird, and people are like “hey, they’re acting super weird, they seem not conscious anymore, this seems bad”. https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies
Yudkowsky’s essay argues the opposite point: that there is no hard problem of consciousness, i.e. that someone couldn’t stop being conscious while their behavior stays the same.