The adaptiveness (in Brain Workshop, which is the only implementation I’ve spent much time with) feels pretty horrible, at least compared to a typical modern, well-balanced game.
N-back simply isn’t a fun game in the first place, so I don’t know how much the adaptiveness is to blame.
You’re talking about least squares, or some modification of least squares that deals with outliers, right? Doesn’t this assume linearity? Or I suppose I’d have to model, say, the effect of increasing the N-back as aN^b (or some other suitable formula) and estimate a and b as parameters? But that means I have to take some time choosing a good model.
Yes, least squares requires a lot of assumptions to be provably optimal. On the other hand, it works all the time. Stupid simple approaches do that pretty frequently.
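One reason the aN^b worry may be overstated: taking logs turns that power-law model into ordinary linear least squares, since log y = log a + b log N. A minimal sketch (the N-back levels and scores here are made-up illustrative data, not anything from the discussion):

```python
import numpy as np

# Hypothetical data: N-back level vs. some performance score.
N = np.array([1, 2, 3, 4, 5], dtype=float)
score = np.array([0.95, 0.80, 0.62, 0.50, 0.41])

# Fit score = a * N**b by log-transforming both sides:
# log(score) = log(a) + b * log(N), a plain degree-1 least-squares fit.
b, log_a = np.polyfit(np.log(N), np.log(score), 1)
a = np.exp(log_a)
print(f"fitted model: score ~ {a:.2f} * N^{b:.2f}")
```

So even a nonlinear-looking model can sometimes be handled by the stupid-simple linear machinery after a transform, with no extra parameters to choose by hand.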
Anyway, my understanding is that even if I use regression on individual user data I’d need to use a pretty complex model with lots of parameters to make it work. Is this not true?
I don’t see why it would be, necessarily. What sort of complex model did you have in mind?
Also, is the Bernoulli trials stuff reasonable, or am I making things too complicated with that too?
I’m not sure what the binomial stuff is gaining you over a simple %-correct number. If the user gets 2 out of 10 matches right, then the maximum-likelihood estimate of the underlying probability under a binomial model is going to be… 0.2. You bring in the binomial/Bernoulli stuff when you want to do something more complex.
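The point that the binomial MLE just collapses to the observed proportion can be checked directly. A small sketch using the 2-of-10 numbers from the comment above, maximizing the binomial log-likelihood over a grid of candidate probabilities:

```python
import math

# Binomial log-likelihood for k successes in n trials at probability p
# (dropping the constant n-choose-k term, which doesn't affect the argmax).
def log_likelihood(p, k=2, n=10):
    return k * math.log(p) + (n - k) * math.log(1 - p)

# Scan a grid of candidate p values; the maximizer is the MLE.
grid = [i / 1000 for i in range(1, 1000)]
mle = max(grid, key=log_likelihood)
print(mle)  # prints 0.2 — exactly k/n, the plain %-correct number
```

So unless you layer something on top (a prior, a model of how p varies with N, etc.), the Bernoulli machinery reproduces the simple fraction.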