Program it to ask for approval from a group of 100 humans before doing anything other than thinking, and to state the ramifications of its actions. It could not deceive, lie, scare people, or program itself without human approval, because it would not have gotten the group of 100 humans to approve. It would also be required to ask the group of 100 humans whether something is true or not, because the internet contains false information. How would it get around this if the rule was programmed in before it became AGI? Of course, you have to define what deception means in its programming.
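The approval rule described above could be sketched in code. This is a minimal illustration only, assuming hypothetical names (`approval_gate`, a list of boolean votes); it is not a claim about how such a system would actually be built.

```python
def approval_gate(action, votes, required=100):
    """Allow an action only if it gathered the required number of
    human approvals. Thinking is always permitted; everything else
    (acting, self-modification, making claims) needs the votes."""
    if action == "think":
        return True  # thinking needs no approval per the proposal
    approvals = sum(1 for v in votes if v)
    return approvals >= required

# Example: 99 approvals out of 100 is not enough to act.
print(approval_gate("self_modify", [True] * 99))   # fewer than 100 approvals
print(approval_gate("self_modify", [True] * 100))  # exactly 100 approvals
```

The gate only decides whether an action is permitted; it says nothing about how to stop a capable system from routing around the check, which is the objection raised in the reply below.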
That’s categorically impossible with the class of models that are currently being worked on, as they have no inherent representation of “X is true”. Therefore, they never engage in deliberate deception.
[2305.10601] Tree of Thoughts: Deliberate Problem Solving with Large Language Models (arxiv.org)
i wonder if something like this can be used with my idea for ai safety
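For reference, the linked paper's core loop is a breadth-first search where a language model proposes candidate "thoughts" and an evaluator scores partial solutions, keeping only the best few at each step. Here is a hedged sketch of that loop; `propose` and `evaluate` stand in for the LLM calls and are assumptions, not the paper's actual implementation.

```python
def tree_of_thoughts(root, propose, evaluate, steps=3, breadth=2):
    """Breadth-first Tree-of-Thoughts search (sketch).

    root     -- initial state (here, a list of thoughts so far)
    propose  -- returns candidate next thoughts for a state
    evaluate -- scores a state (higher is better)
    breadth  -- how many states survive each step
    """
    frontier = [root]
    for _ in range(steps):
        # Expand every surviving state with each proposed thought.
        candidates = [state + [t] for state in frontier for t in propose(state)]
        # Keep only the top-scoring states.
        candidates.sort(key=evaluate, reverse=True)
        frontier = candidates[:breadth]
    return max(frontier, key=evaluate)

# Toy usage: thoughts are 0 or 1, score is their sum, so the search
# should converge on all-ones.
best = tree_of_thoughts([], lambda s: [0, 1], lambda s: sum(s),
                        steps=2, breadth=2)
print(best)
```

Whether the evaluator step could double as the kind of truth/approval check proposed above is an open question; the paper uses it for puzzle-solving, not safety.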
They need to make large language models not hallucinate. Here is an example of how: hallucinating should only be used for creativity and problem solving. Here is how my chatbot does it. It is on the Personality Forge website.
https://imgur.com/a/F5WGfZr