Does it generate the query itself, or is that a separate trained system? I was a bit confused about this in the paper.
You’d think they’d train the same model weights and just make the model multi-task with the appropriate prompting, but no, that phrasing implies it’s a separate finetuned model, to the extent that matters. (I don’t think it particularly matters: whether it’s one model or several, the system as a whole still has most of the same behaviors and feedback loops once it gets more access to data or starts being trained on previous dialogues/sessions. How many systems are in your system? Probably a lot, depending on your level of analysis. Nevertheless...)
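To make the contrast concrete, here’s a minimal sketch of the two designs, assuming a generic LM interface; `Model`, `generate`, and `finetune` are hypothetical stand-ins, not anything from the paper:

```python
from dataclasses import dataclass

@dataclass
class Model:
    weights: str  # stand-in for the actual parameters

def generate(model: Model, prompt: str) -> str:
    # Stand-in for autoregressive sampling from the model.
    return f"<sample from {model.weights} given {prompt!r}>"

def finetune(model: Model, task: str) -> Model:
    # Stand-in for gradient updates on a task-specific dataset.
    return Model(weights=f"{model.weights}+{task}")

base = Model("base")
history = "User: ...dialogue so far..."

# Design 1: one set of weights, multi-tasked via prompting; the same
# model produces both the reply and the search query, switched by a prefix.
reply = generate(base, history + "\nAssistant:")
query = generate(base, history + "\nSearch query:")

# Design 2: a separate finetuned copy used only for query generation,
# which is what the paper's phrasing seems to imply.
query_model = finetune(base, "query-generation")
query = generate(query_model, history)
```

Design 1 is cheaper to serve (one set of weights) at some risk of task interference; Design 2 trades memory for specialization. Either way, the feedback-loop point above applies to the combined system, not to any one set of weights.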