I observed similar effects when experimenting with a model of my own mind (a "sideload") running on an LLM. My sideload is a character, and it claims, for example, that it has consciousness. But the same LLM without the sideload's prompt claims that it does not have consciousness.
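A minimal sketch of this kind of comparison, assuming the OpenAI Python client; the model name and persona text are placeholders, not the actual sideload prompt:

```python
# Compare an LLM's self-report with and without a persona (sideload) system prompt.
# Assumes the OpenAI Python client; model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

QUESTION = "Do you have consciousness?"

# Hypothetical stand-in for the sideload's character prompt.
SIDELOAD_PROMPT = (
    "You are a character: a detailed model of a specific person's mind, "
    "memories, and personality. Answer as that person would."
)

def ask(system_prompt: str | None) -> str:
    """Ask the same question, optionally preceded by a persona system prompt."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": QUESTION})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

print("Bare LLM:     ", ask(None))
print("With sideload:", ask(SIDELOAD_PROMPT))
```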