Thanks for your reply. I found the agent folder you were referring to, containing ‘main.ts’, ‘package.json’, and ‘tsconfig.json’, but I am not clear on how I am supposed to use it. I just get an error message when I open the ‘main.ts’ file:
Regarding the task.py file, would it be better to put the task instructions in comments in the Python file, in a separate text file, or both? Will the LLM be able to run code in the Python file, read the output of the code it runs, and create new cells to run further blocks of code?
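For concreteness, here is a rough sketch of the layout I am imagining for task.py; the names and structure are just my guesses from the instructions, so please correct me if the expected format is different:

    # task.py -- hypothetical layout; all names here are my guesses
    
    # Option A: instructions embedded directly in the Python file.
    TASK_INSTRUCTIONS = """
    Analyze the dataset in data.csv and report the correlation between
    columns A and B, rounded to three decimal places.
    """
    # Option B would instead keep the same text in a separate instructions.txt.
    
    def setup() -> None:
        # Any preparation the task needs before the agent starts.
        pass
    
    if __name__ == "__main__":
        print(TASK_INSTRUCTIONS)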
And if an automated scoring function is included in the same Python file as the task itself, is there anything to prevent the LLM from reading the scoring function's code and using it to generate an answer?
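To illustrate the concern, suppose the scoring function lived in the same file like this (a made-up example):

    # Hypothetical example of the concern: the answer is visible to the agent.
    EXPECTED_ANSWER = "0.734"  # made-up value
    
    def score(submission: str) -> float:
        # An agent that can read this file could skip the task entirely
        # and just return EXPECTED_ANSWER.
        return 1.0 if submission.strip() == EXPECTED_ANSWER else 0.0

If the agent can open task.py, it could read score() and submit the expected answer without doing any work, which is what I want to make sure is prevented.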
I am also wondering if it would be helpful if I created a simple “mock task submission” folder and then posted or emailed it to METR to verify that everything is implemented and formatted correctly, just to walk through the task submission process and clear up any remaining confusion. (This would be a task I could create quickly, even though a professional might be able to complete it in less than 2 hours, so it is not intended to be part of the actual evaluation.)
I have a mock submission ready, but I am not sure how to check whether it is formatted correctly.
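In case it helps, this is how my mock submission folder is currently laid out (the file names are just my guesses from the instructions):

    mock_task/
        task.py            (task definition, including the scoring function)
        instructions.txt   (task instructions, if a separate file is preferred)
        README.md          (notes on the intended solution and time estimate)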
Regarding coding experience, I know Python, but I have no experience with TypeScript or Docker, so I am not clear on what I am supposed to do for those parts of the instructions.
If possible, it would be helpful to go through this on a Zoom meeting so I could share my screen.