Do you mean cognitive calibration, as in uncertainty calibration?
Yes. Both correctness and appropriate confidence in correctness.
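For concreteness, one standard way to score both at once is the Brier score: the mean squared gap between stated confidence and actual correctness. A rough sketch (illustrative only, not project code):

```python
def brier_score(confidences, outcomes):
    """Mean squared gap between stated confidence (0-1) and correctness (0 or 1).
    Lower is better; always answering 0.5 scores 0.25."""
    return sum((c - o) ** 2 for c, o in zip(confidences, outcomes)) / len(confidences)

# Right at 90% confidence, wrong at 80%, right at 60%:
print(brier_score([0.9, 0.8, 0.6], [1, 0, 1]))  # 0.27
```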
What kind of software? What kind of questions are you using/going to use?
I’m focusing on the core engine now, so the first revision will likely be a command-line tool or an API.
It’s still in the planning stage, but it’s unlikely to be strictly predictions you make, or entirely canned questions. I have some thoughts, but no implementation yet.
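Purely as an illustration of where my head is at, a command-line loop might look something like this; every name here is invented, and nothing reflects an actual design:

```python
def ask(question: str, answer: str) -> tuple[bool, float]:
    """Hypothetical single round: pose a question, collect an answer
    plus a self-reported confidence, and check correctness."""
    response = input(f"{question}\nYour answer: ")
    confidence = float(input("Confidence you're right (0-1): "))
    return response.strip().lower() == answer.lower(), confidence

# Each session would yield (correct, confidence) pairs that could feed
# a calibration measure like the Brier score above.
pairs = [ask("In what year was the transistor invented?", "1947")]
```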
OK, interesting. I’m curious to see where you take this.