I’ve been looking at creating a GPT-powered program which can automatically generate a test of whether one has absorbed the Sequences. It doesn’t currently work that well and I don’t know whether it’s useful, but I thought I should mention it. If I get something that I think is worthwhile, then I’ll ping you about it.
Though my expectation is that one cannot really meaningfully measure the degree to which a person has absorbed the Sequences.
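To give a sense of what I mean by "GPT-powered": the generation step by itself is tiny. Here's a minimal sketch, assuming the openai Python client (>= 1.0) with an API key in the environment; the prompt and model name are placeholders, not what the real program settles on.

```python
# Minimal sketch of the question-generation step. Assumes the openai
# Python client >= 1.0 and OPENAI_API_KEY in the environment; the prompt
# and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def generate_question(excerpt: str) -> str:
    """Ask the model to turn a Sequences excerpt into a comprehension question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "user",
                "content": (
                    "Write one short-answer question that tests whether the reader "
                    "has internalized the idea in this excerpt, without quoting it:\n\n"
                    + excerpt
                ),
            }
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_question(
        "Your strength as a rationalist is your ability to be "
        "more confused by fiction than by reality."
    ))
```

The hard part isn't this step; it's getting questions that test comprehension rather than recall, and grading the answers.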
I too expect that testing whether someone can talk as if they’ve absorbed the sequences would measure password-guessing more accurately than comprehension.
The idea gets me wondering whether it’s possible to design a game that’s easy to learn and win using the skills taught by the sequences, but difficult or impossible without them. Since the sequences teach a skill, it seems like we should be able to procedurally generate novel challenges that the skill makes easy to complete.
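As a toy illustration of what I mean by procedural generation (my own example, not one the sequences prescribe): Bayesian updating is one skill they teach, and it's easy to generate endless novel instances of it with randomized parameters and an exact answer to score against.

```python
# Toy procedurally generated challenge: a randomized base-rate /
# test-accuracy problem whose exact answer follows from Bayes' theorem.
# The numbers are drawn fresh each time, so memorized answers don't help,
# but the underlying skill makes every instance easy.
import random

def make_challenge(rng: random.Random) -> tuple[str, float]:
    base_rate = rng.choice([0.001, 0.01, 0.02, 0.05])
    sensitivity = rng.uniform(0.80, 0.99)     # P(positive | condition)
    false_positive = rng.uniform(0.01, 0.20)  # P(positive | no condition)

    p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
    answer = sensitivity * base_rate / p_positive  # P(condition | positive)

    prompt = (
        f"A condition affects {base_rate:.1%} of a population. A test detects it "
        f"{sensitivity:.0%} of the time and gives a false positive "
        f"{false_positive:.0%} of the time. Given a positive result, what is the "
        f"probability the person has the condition?"
    )
    return prompt, answer

if __name__ == "__main__":
    rng = random.Random(0)
    question, answer = make_challenge(rng)
    print(question)
    print(f"Exact answer: {answer:.3f}")
```

Of course, a battery of problems like this tests "can do Bayes", which is much narrower than "has absorbed the sequences", so the hard part is covering the rest of the skillset.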
As someone who’s gone through the sequences yet isn’t sure whether I “really” “fully” understand them, I’d be interested in taking and retaking such a test from time to time (if it were accurate) to quantify any changes in my comprehension.
I think the test could end up working as an ideology measure rather than a password-guessing game.
This seems tricky to me, because the Sequences teach a network of ideas that often aren’t directly applicable to problem-solving/production tasks, but are instead relevant to analyzing or explaining ideas. It’s hard to really evaluate an analysis/explanation without considering its downstream applications, but it’s also hard to come up with a task that is simultaneously:
Big enough that it involves both analysis/explanation and downstream applications, yet
Small enough that a single person can complete it quickly while filling out a test.