You can do personalized RLHF on an LLM. Because there is less data, you need to do stronger training per data point than the big companies do. The training itself is still a technical hurdle, but supposing it becomes cheap enough, one problem is that this produces sycophants. We already see commercial LLMs that simply agree with whatever you initially imply you believe.
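To make "stronger training per data point" concrete, here is a minimal sketch of preference tuning in the DPO style on a handful of user-labeled examples. The model name, the `preference_pairs` data, and the hyperparameters are all illustrative assumptions, not a description of any particular product's pipeline; the point is that with so little data you lean on more optimization per example rather than more examples.

```python
# Minimal sketch: personalized preference tuning (DPO-style) on a small causal LM.
# Assumes a handful of user-labeled (prompt, chosen, rejected) triples.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for whatever local model you personalize
tok = AutoTokenizer.from_pretrained(model_name)
policy = AutoModelForCausalLM.from_pretrained(model_name)
reference = AutoModelForCausalLM.from_pretrained(model_name).eval()

def seq_logprob(model, prompt, completion):
    """Sum of log-probs the model assigns to `completion` given `prompt`."""
    ids = tok(prompt + completion, return_tensors="pt").input_ids
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    logits = model(ids).logits[:, :-1, :]          # logit at i predicts token i+1
    targets = ids[:, 1:]
    logps = torch.log_softmax(logits, dim=-1).gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return logps[:, prompt_len - 1:].sum()         # only the completion tokens

# A few personal preference pairs (hypothetical data): the chosen answer
# pushes back, the rejected one is sycophantic.
preference_pairs = [
    ("Should I rewrite this module?",
     "Here are the trade-offs either way: ...",
     "Great idea, rewriting is definitely the best move!"),
]

beta = 0.1
opt = torch.optim.AdamW(policy.parameters(), lr=5e-5)
for epoch in range(10):                            # many passes over very little data
    for prompt, chosen, rejected in preference_pairs:
        with torch.no_grad():
            ref_c = seq_logprob(reference, prompt, chosen)
            ref_r = seq_logprob(reference, prompt, rejected)
        pol_c = seq_logprob(policy, prompt, chosen)
        pol_r = seq_logprob(policy, prompt, rejected)
        # DPO objective: prefer `chosen` over `rejected` relative to the reference model.
        loss = -F.logsigmoid(beta * ((pol_c - ref_c) - (pol_r - ref_r)))
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Note that nothing in this objective distinguishes "what the user finds genuinely useful" from "what the user likes to hear", which is exactly where the sycophancy problem comes from.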
Producing vector embeddings is, if anything, more natural for neural networks than continuing text, and search engines already use neural networks to produce document embeddings for search. It is entirely feasible to do this for all your personal or company documents and then search them, via a vector database, using approximate descriptions of what you want.
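A minimal sketch of what that looks like: embed each document once, embed the query, and rank by cosine similarity. The model name is just a common off-the-shelf choice and the documents are made up; a real setup would hand the vectors to a vector database (FAISS, pgvector, and the like) for approximate nearest-neighbor search at scale.

```python
# Minimal sketch: semantic search over your own documents via embeddings.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Q3 budget review notes and action items.",
    "Onboarding checklist for new engineers.",
    "Postmortem for the March outage of the billing service.",
]

# Embed and L2-normalize so a dot product equals cosine similarity.
doc_vecs = model.encode(documents, normalize_embeddings=True)

def search(query, k=2):
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q
    top = np.argsort(-scores)[:k]
    return [(documents[i], float(scores[i])) for i in top]

# An approximate description, not exact keywords, still finds the right document.
print(search("what went wrong with payments in spring"))
```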