I’m not sure what the objective is here. Are you trying to build a kind of Quine prompt? If so, why? What attracts you to this project, and more importantly (I am projecting my own values here), what pragmatic applications do you see? What specific decisions do you imagine you or others here might make differently based on the information you glean from this exercise?
If it’s not a Quine that you’re trying to produce, what is it exactly that you’re hoping to achieve by this recursive feeding and memory summarization loop?
It would be good to hear your thoughts on this, as philosophically it’s an “ask the right question” kind of task.
I assume this is a reference either to Wittgenstein saying that a skilled philosopher doesn’t occupy themselves with questions that are of no concern, or to Daniel Dennett saying philosophy is “what you have to do until you figure out what questions you should have been asking in the first place”. And to be clear, I don’t know what question you’re trying to answer with this exercise, and as such can’t even begin to ascertain whether you’re asking the right question.
It started out with the idea of making a system that can improve itself without me having to prompt it to do so. The “question” is the prompt, and crafting that prompt is very difficult. So it’s an experiment in constructing a question a system can use to improve itself without being explicitly told “improve your codebase” or “give yourself more actions or freedoms” or the like. I want the LLM to conjure up the ideas itself.
So as the prompt grows in complexity, with different sections that could be categorised as kinds of human thought, will it get better? Can we find parallels with how the human mind works that can be translated into a prompt? A key area that developed was using the future as well as the past in the prompt. Having memories is an obvious inclusion, as is a record of what happened in past runs of the program, but what wasn’t obvious was including predictions about the future. Giving it the ability to make long- and short-term predictions, revise them, and record their outcomes produced big improvements in the directions it would take over time. It also seemed to ‘ground’ it: without the predictions space, it became overly concerned with its performance metrics as a proxy for improvement and began over-optimising.
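To make that concrete, here is a minimal sketch of how one iteration’s prompt might be assembled from those sections. The section names, data structures, and the idea of carrying open vs. resolved predictions forward are all illustrative assumptions on my part, not the author’s actual wording:

```python
from dataclasses import dataclass, field

@dataclass
class Prediction:
    horizon: str                 # "short" or "long"
    statement: str               # what the system expects to happen
    outcome: str | None = None   # filled in on a later run when it resolves

@dataclass
class LoopState:
    memories: list[str] = field(default_factory=list)    # summarised long-term memory
    past_runs: list[str] = field(default_factory=list)   # what happened in previous runs
    predictions: list[Prediction] = field(default_factory=list)

def build_prompt(state: LoopState) -> str:
    """Assemble the recursive prompt from its sections (illustrative layout)."""
    open_preds = [p for p in state.predictions if p.outcome is None]
    resolved = [p for p in state.predictions if p.outcome is not None]
    return "\n\n".join([
        "Memories:\n" + "\n".join(state.memories),
        "Past runs:\n" + "\n".join(state.past_runs),
        "Open predictions (revise or keep):\n"
        + "\n".join(f"[{p.horizon}] {p.statement}" for p in open_preds),
        "Resolved predictions:\n"
        + "\n".join(f"[{p.horizon}] {p.statement} -> {p.outcome}" for p in resolved),
        "Reflect on the above and decide what to do next.",
    ])
```

The point of the sketch is only that the model’s own predictions, and the recorded outcomes of earlier ones, get fed back alongside memories and run history; that is the “future as well as the past” idea described above.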
Defining ‘better’ or ‘improving’ is very difficult. Right now, I’m using growth in input token size, whilst maintaining clarity of thought, as the rough measure.
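For the measurable half of that metric, a sketch of tracking run-over-run input growth could be as simple as the following, assuming a tiktoken-style tokenizer (the “clarity of thought” half would still have to be judged by hand):

```python
import tiktoken

def input_growth(prompts: list[str], model: str = "gpt-4o") -> list[float]:
    """Return the run-over-run growth ratio of input token counts."""
    enc = tiktoken.encoding_for_model(model)
    counts = [len(enc.encode(p)) for p in prompts]
    return [curr / prev for prev, curr in zip(counts, counts[1:])]
```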