That excludes a large class of dystopias… including ones with relatively high probability.
Possibly true, but I think it's necessary to avoid a situation where we treat every case where it disagrees with us as an example of it classifying bad scenarios as good. I don't want this to devolve into a debate on ethics.
Think step by step, then state your answer.
This is a known trick with GPT that tends to make it produce better answers.
I think the reason is that predicting a single word in ChatGPT takes O(1) computation, so it's not really capable of performing sophisticated computation in one step. Asking it to think step by step gives it some scratch space, allowing it more computation overall and letting it store intermediate results in the text it has already produced.
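For concreteness, here is a minimal sketch of the trick, assuming the OpenAI Python client (openai >= 1.0); the model name and the example question are just illustrative assumptions, not anything from this thread.

```python
# Minimal sketch of the "think step by step" prompting trick.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical question used purely for illustration.
question = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

def ask(prompt: str) -> str:
    """Send a single user message and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Direct answer: the model has to commit to a number almost immediately.
direct = ask(question)

# Chain-of-thought: the extra instruction lets the model write out
# intermediate reasoning (its scratch space) before the final answer.
stepwise = ask(question + "\n\nThink step by step, then state your answer.")

print("Direct:", direct)
print("Step by step:", stepwise)
```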
OK, thank you. Makes sense.