Of course, you have to define what "deception" means in its programming.
That’s categorically impossible with the class of models that are currently being worked on, as they have no inherent representation of “X is true”. Therefore, they never engage in deliberate deception.
They need to make large language models not hallucinate. Here is an example of how: hallucination should only be used for creativity and problem solving. Here is how my chatbot does it; it is on the Personality Forge website.
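One way an idea like that could be wired up (a hypothetical sketch, not how the Personality Forge bot actually works): classify each request and only allow loose, high-temperature sampling for creative or brainstorming prompts, while factual questions get conservative settings.

# Hypothetical sketch of "hallucinate only for creativity": route each
# request by task type and pick sampling parameters accordingly.

CREATIVE_KEYWORDS = ("write a story", "imagine", "brainstorm", "poem")

def sampling_params(user_message: str) -> dict:
    # Crude keyword classifier; a real system would use a trained
    # classifier or the model itself to label the request.
    is_creative = any(k in user_message.lower() for k in CREATIVE_KEYWORDS)
    if is_creative:
        # Creative / problem-solving mode: allow free generation.
        return {"temperature": 1.0, "top_p": 0.95}
    # Factual mode: sample conservatively; preferring "I don't know" over
    # invented answers would be enforced elsewhere (e.g. retrieval checks).
    return {"temperature": 0.2, "top_p": 0.5}

print(sampling_params("Write a story about a dragon"))  # loose sampling
print(sampling_params("What year did WW2 end?"))        # conservative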
[2305.10601] Tree of Thoughts: Deliberate Problem Solving with Large Language Models (arxiv.org)
I wonder if something like this can be used with my idea for AI safety.
https://imgur.com/a/F5WGfZr
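For anyone unfamiliar with the Tree of Thoughts paper linked above, the core idea is a search over intermediate reasoning steps rather than a single left-to-right generation. Below is a minimal sketch of its breadth-first variant; propose_thoughts and score_thought are hypothetical stubs standing in for the actual LLM calls.

import heapq

def propose_thoughts(state: str, k: int = 3) -> list[str]:
    # Hypothetical stub: a real version would prompt an LLM to propose
    # k candidate next reasoning steps from the partial solution `state`.
    return [f"{state} | step {i}" for i in range(k)]

def score_thought(state: str) -> float:
    # Hypothetical stub: a real version would have an LLM or a checker
    # rate how promising / well-grounded the partial solution looks.
    return -len(state)  # dummy heuristic so the sketch runs standalone

def tree_of_thoughts(problem: str, depth: int = 3, beam: int = 2) -> str:
    # Breadth-first search over partial "thoughts", keeping only the
    # `beam` best-scoring states at each level (the paper's BFS variant).
    frontier = [problem]
    for _ in range(depth):
        candidates = [t for s in frontier for t in propose_thoughts(s)]
        frontier = heapq.nlargest(beam, candidates, key=score_thought)
    return max(frontier, key=score_thought)

print(tree_of_thoughts("make 24 from the numbers 4 4 6 8"))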