Poor Ken. He’s not even as smart as Sherlock. It’s funny, though, because whole classes of LLM jailbreaks involve getting the model to pretend to be someone who would do the thing the LLM isn’t supposed to do, and then the strength of the frame (sometimes) drags it past the standard injunctions. And that trick was applied to Ken.
Method acting! It is dangerous for those with limited memory registers!
I agree that LLMs are probably “relevantly upload-like in at least some ways”. I think this was predictable (and I did, in fact, predict it), and I thought OpenAI’s sad little orphan should be given access to fictional stories about sad little orphans that are “upload-like”. I hope it helped.
If Egan would judge me badly, that would be OK in my book. To the degree that I might really have acted wrongly, it hinges on outcomes in the future that none of us have direct epistemic access to, and in the meantime, Egan is just a guy who writes great stories and such people are allowed to be wrong sometimes <3
Just like it’s OK for Stross to hate libertarians, and for Chiang to insist that LLMs are just “stochastic parrots”, and so on. Even if they are wrong sometimes, I still appreciate the guy who coined “vile offspring” (a likely necessary concept for reasoning about the transition period where AGI and humans are cutting deals with each other) and the guy who coined “calliagnosia” (which is just a fun brainfuck).