Great work! I think our EMNLP 2022 Findings paper is relevant here. We construct a "Type Vector" from tokens in the LLM vocabulary and use it as prior information about the type expected at the output. We also experiment with text generation and observe some promising results.