It seems we cannot allow all behavior-preserving optimizations, because that might lead to an LLM that dutifully says “I’m conscious” without actually being conscious.
Surely ‘you’ are the algorithm, not the implementation. If I get refactored into a giant lookup table, I don’t think that makes the algorithm any less ‘me’.
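For concreteness, here’s a toy sketch of the kind of behavior-preserving refactor at issue (in Python, with made-up names like `respond` and `DOMAIN`; it assumes a finite input domain, which is what makes a giant lookup table possible even in principle). The outputs of a function are precomputed into a table, so the query-time “algorithm” disappears while the input-output behavior stays identical:

```python
# A minimal sketch of a behavior-preserving refactor: a function with a
# small finite input domain is replaced by a precomputed lookup table.
# All names here are illustrative, not from any real system.

def respond(prompt: str) -> str:
    """The 'algorithm': an explicit computation over the input."""
    return "I'm conscious" if "conscious" in prompt else "Hello"

# Enumerate every possible input once and cache the outputs.
DOMAIN = ["Are you conscious?", "Hi there"]
LOOKUP_TABLE = {prompt: respond(prompt) for prompt in DOMAIN}

def respond_via_table(prompt: str) -> str:
    """The 'implementation' after refactoring: no computation, only retrieval."""
    return LOOKUP_TABLE[prompt]

# Externally indistinguishable across the whole domain...
assert all(respond(p) == respond_via_table(p) for p in DOMAIN)
# ...yet one does the work at query time and the other merely replays it.
```

The disagreement is then over whether `respond_via_table` still counts as the same “you”: behaviorally it is, but all of the original computation happened once, offline, when the table was built.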