We could probably start by coding up a simplified version of this, just to get something done, then add more of the complex features after that.
For example, a good starting point for the phase 1 predictions would be to just ask a (randomised) set of multiple-choice or simple write-in questions, e.g. “How many red squares will there be at the end? Which part of the screen will the blue circle end up in?” etc.
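Just to make that concrete, here's a rough Python sketch of what I'm picturing for the randomised question pool. The object names, screen regions, and the final_state dict are all placeholders I made up for illustration, not actual game code:

```python
import random

# Hypothetical final state of one simulation run. The object names and
# screen regions are placeholders for illustration only.
final_state = {
    "red_square_count": 7,
    "blue_circle_region": "top-left",
}

def make_questions(state):
    """Build the pool of prediction questions from the final state."""
    return [
        {
            "prompt": "How many red squares will there be at the end?",
            "kind": "write-in",
            "answer": state["red_square_count"],
        },
        {
            "prompt": "Which part of the screen will the blue circle end up in?",
            "kind": "multi-choice",
            "choices": ["top-left", "top-right", "bottom-left", "bottom-right"],
            "answer": state["blue_circle_region"],
        },
    ]

def ask_random_questions(state, n=1):
    """Pick a random subset of n questions to ask this round."""
    return random.sample(make_questions(state), k=n)

for q in ask_random_questions(final_state):
    print(q["prompt"])
```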
I reckon that in the first “level” they could start by estimating a probability, rather than jumping straight into weighting evidence? We could then introduce evidence weighting as “level 2”? What do you think? Would that change the nature of what it’s teaching too much?
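For scoring those level 1 answers, one option (purely an assumption on my part, nothing we've settled on) would be a simple Brier score on the bare probability estimate, since the same scoring rule could carry over once evidence weighting arrives in level 2. Minimal sketch:

```python
def brier_score(prob_estimate, outcome):
    """Squared error between the stated probability and what actually happened.

    prob_estimate: player's probability (0..1) that the event occurs.
    outcome: True/False, whether it actually did. Lower is better.
    """
    return (prob_estimate - (1.0 if outcome else 0.0)) ** 2

# Player says 0.8 that the blue circle ends up top-left...
print(brier_score(0.8, True))   # ...and it does:    ~0.04
print(brier_score(0.8, False))  # ...and it doesn't: ~0.64
```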
After we’ve got that working, we could figure out how to get the user to describe the ruleset to the computer in a flexible way. That’s actually a Tough Problem, BTW: it’s basically designing a mini-language… so definitely on the books, but probably not in the first iteration. :)
Yeah, I realized that as I was writing the longer example, and also that it wasn’t strictly necessary. Interesting, but not necessary. =)
Your description of the phase 1 prediction coding is very close to what I was picturing, and having a randomized set of questions rather than just saying “predict the final state” (in its entirety) would give more game repeatability for less code, if I understand correctly.
I actually really like the idea of having them just give a probability estimate the first time, or the first few times. I’m betting confirmation bias will have a stronger effect in those stages, and that their scores will improve once they’re forced to itemize evidence weights, which both illustrates a point about confirmation bias and ties into the kind of thought process needed for Bayesian prediction.
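If it helps to picture how the itemized evidence weights could feed into a Bayesian-style prediction, here's one possible formulation (my assumption, not a spec): treat each weight as a log-odds nudge on the player's prior.

```python
import math

def combine_evidence(prior_prob, evidence_weights):
    """Combine a prior probability with itemized evidence weights.

    Each weight is a log-odds contribution (an assumed convention):
    positive weights push the probability up, negative ones push it down.
    """
    log_odds = math.log(prior_prob / (1 - prior_prob))
    log_odds += sum(evidence_weights)
    return 1 / (1 + math.exp(-log_odds))

# Player starts at 50/50, lists two pieces of supporting evidence and one
# weak piece of counter-evidence:
print(combine_evidence(0.5, [+1.0, +0.5, -0.2]))  # ~0.79
```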
(If you were to get as far as trying to code the user-described ruleset bit… I’d suggest finding someone who’s played Dragon Age and asking about the custom tactics options. I think that sort of format would work, as long as the total number of game-object types and operators stayed relatively small.)
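(To show what I mean by that tactics-style format, here's a rough sketch of a flat list of condition/action rules over a small fixed vocabulary. Every name in it is a placeholder, obviously.)

```python
# A user-described ruleset in a tactics-style format: a flat list of
# (subject, condition, action) entries over a small fixed vocabulary.
# Every name here is a placeholder, just to show the shape of the thing.
RULESET = [
    {"subject": "red_square",  "condition": ("touches", "blue_circle"), "action": "split"},
    {"subject": "blue_circle", "condition": ("count_gt", 3),            "action": "stop"},
]

ALLOWED_OPERATORS = {"touches", "count_gt"}
ALLOWED_ACTIONS = {"split", "stop", "remove"}

def validate(ruleset):
    """Reject rules that use operators or actions outside the allowed sets."""
    for rule in ruleset:
        operator = rule["condition"][0]
        if operator not in ALLOWED_OPERATORS:
            raise ValueError(f"unknown operator: {operator}")
        if rule["action"] not in ALLOWED_ACTIONS:
            raise ValueError(f"unknown action: {rule['action']}")
    return True

print(validate(RULESET))  # True
```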
Actually, yeah, this is a great idea.