When will this be revealed?
Wait, my bad, I didn't expect so many people to actually see this.
This is kind of silly, but I had an idea for a post that I thought someone else might publish before I had it written out. So I figured I'd post a hash of the thesis here.
It's not just about, idk, getting more street cred for coming up with an idea. This is also what I'm planning to write up for my MATS application to Lee Sharkey's stream. So in case someone else did write it up before me, I would have some proof that I didn't just copy the idea from a post.
(It’s also a bit silly because my guess is that the thesis isn’t even that original)
Edit: to answer the original question, I will post something on this before October 6th if all goes to plan.
That was the SHA-256 hash for:
What if a bag of heuristics is all there is and a bag of heuristics is all we need? That is, (1) we can decompose each forward pass in current models into a set of heuristics chained together, and (2) heuristics chained together are all we need for AGI.
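For anyone curious how this kind of hash commitment works: you publish the SHA-256 digest of the text up front, then reveal the text later, and anyone can recompute the digest to confirm the text existed before the reveal. Here's a minimal Python sketch; the string below is illustrative rather than the author's exact preimage (the digest depends on the exact bytes, so it won't match the originally posted hash):

```python
import hashlib

# Text to commit to. The digest depends on the exact bytes, so even a
# one-character edit (like a fixed typo) yields a completely different
# hash; this string is illustrative, not the author's exact preimage.
thesis = (
    "What if a bag of heuristics is all there is "
    "and a bag of heuristics is all we need?"
)

# Commit: publish only this digest now.
digest = hashlib.sha256(thesis.encode("utf-8")).hexdigest()
print(digest)

# Reveal/verify: later, post the text itself; anyone can recompute the
# digest and check that it matches the one published earlier.
assert hashlib.sha256(thesis.encode("utf-8")).hexdigest() == digest
```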
Here’s my full post on the subject