A powerful enough personal computer could run LLaMA locally. I don’t think the raw model is optimized for chat, but with a suitable prompt, you might be able to get it into chat mode long enough to do this kind of thing. It also wouldn’t surprise me to learn that there are more specialized derivatives now that would be suitable. I’ve certainly heard of people working on it.
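Something like the sketch below is roughly what I mean by prompting the raw model into chat mode. It's a minimal illustration, assuming you have the weights locally in a Hugging Face-compatible format; the path and the prompt wording are just placeholders, not anything official.

```python
# Minimal sketch: coaxing a base (non-chat) LLaMA-style model into a dialogue
# by wrapping the user's message in a chat-shaped transcript for it to continue.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/llama-weights"  # hypothetical local path to the weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)

# A base model just continues text, so hand it a partial conversation and let
# it write the assistant's next turn.
prompt = (
    "The following is a conversation between a helpful assistant and a user.\n"
    "User: Summarize the plot of Hamlet in two sentences.\n"
    "Assistant:"
)

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
)
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Keep only the assistant's reply, trimming anything after it starts
# hallucinating the next "User:" turn.
reply = text.split("Assistant:")[-1].split("User:")[0].strip()
print(reply)
```

It's crude (the model will happily wander off-script), which is why the instruction-tuned derivatives are the more comfortable route.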
Thanks; have six points of upvotedness.