Is this a setback for ‘AI safety’? In some sense I guess so. In others, not at all. Would we be better off if we could see and modify the code?
I can see the code generated by Code Interpreter. Here is one of my chats about a math problem (an impressive performance by GPT-4 with Code Interpreter; I’d give it an “A-” for minor reasoning defects, but the code generation and the answers it obtained are quite correct):
https://chat.openai.com/share/d1c4c289-9907-46a6-8ac1-f6baa34f1b12
One clicks on “Show work” to see the code, and then one can click “Copy code” (as usual) to take the code elsewhere...
And, yes, I can edit the resulting code in an outside editor and ask GPT-4 with Code Interpreter to execute it.
And it executes it just fine while fixing my bug (missing “import math”): https://chat.openai.com/share/0433de45-779a-41c4-8e0d-ca4cfd38c92e
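For illustration only, here is a minimal sketch of the kind of snippet involved; it is not the actual code from the linked chats, and the function and numbers are hypothetical. Pasted as-is (without the first line), it would fail with a NameError; Code Interpreter supplied the missing import on its own before running it.

import math  # the line that was missing; Code Interpreter added it before executing

def circle_area(radius: float) -> float:
    # Uses math.pi, so the script raises NameError if the import above is absent
    return math.pi * radius ** 2

print(circle_area(3.0))  # 28.274333882308138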