I’d suggest measuring the Net Promoter Score (NPS) (link). It’s used in business as a better gauge of customer satisfaction than more traditional measures. See here for evidence (sorry, the link isn’t free).
“On a scale of 0-10, how likely would you be to recommend the minicamp to a friend or colleague?”
“What is the most important reason for your recommendation?”
To interpret, split the responses into 3 groups:
9-10: Promoter—people who will be active advocates.
7-8: Passive—people who are generally positive, but aren’t going to do anything about it.
0-6: Detractor—people who are lukewarm (which will turn others off) or who will actively advocate against you.
NPS = [% who are Promoters] - [% who are Detractors]. Good vs. bad NPS varies by context, but +20-30% is generally very good. The follow-up question is a good way to identify key strengths and high-priority areas to improve.
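For concreteness, here’s a minimal sketch of the calculation in Python, assuming the 0-10 responses have been collected into a plain list (the example numbers are made up):

```python
def nps(scores):
    """Net Promoter Score for a list of 0-10 responses:
    % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# 4 promoters, 3 passives, 3 detractors out of 10 responses -> 40% - 30% = +10
print(nps([10, 9, 9, 10, 8, 7, 7, 6, 4, 2]))  # 10.0
```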
NPS is a really valuable concept. Means and medians are pretty worthless compared to identifying the percentage in each class, and it’s sobering to realize that a 6 is a detractor score.
(Personal anecdote: I went to a movie theater, watched a movie, and near the end, during an intense confrontation between the hero and villain, the film broke. I was patient, but when they sent me an email later asking me the NPS question, I gave it a 6. I mean, it wasn’t that bad. Then two free movie tickets came in the mail, with a plea to try them out again.
I hadn’t realized it, but I had already put that theater in my “never go again” file, since why give them another chance? I then read The Ultimate Question for unrelated reasons, and had that experience in my mind the whole time.)
Good anecdote. It made me realize that just 20 minutes ago I had made a damning non-recommendation to a friend based on a single bad experience after a handful of good ones.
Here is the evidence paper.
Right, I’d forgotten about that. I concur that it’s used; I work in market research, sort of.
Another thing you could do is measure in a more granular way—ask for NPS about particular sessions. You could do this after each session or at the end of each day. This would help you narrow down what sessions are and are not working, and why.
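As a rough sketch of how that per-session tabulation might look (the session names and scores below are hypothetical):

```python
# Hypothetical per-session responses collected at the end of each day.
session_scores = {
    "Session A": [9, 10, 8, 7, 9, 6],
    "Session B": [10, 9, 9, 10, 8, 9],
    "Session C": [5, 6, 7, 8, 4, 9],
}

def nps(scores):
    # % promoters (9-10) minus % detractors (0-6), as in the formula above
    return 100.0 * (sum(s >= 9 for s in scores) - sum(s <= 6 for s in scores)) / len(scores)

# Rank sessions by NPS to see which ones are and aren't working.
for name, scores in sorted(session_scores.items(), key=lambda kv: nps(kv[1]), reverse=True):
    print(f"{name}: NPS {nps(scores):+.0f}")
```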
You do have to be careful not to overburden people by asking for too much detailed feedback too frequently; otherwise they’ll get survey fatigue and the quality of responses will markedly decline. Hence, I would resist the temptation to ask more than 1-2 questions about any particular session. If any sessions are especially well or poorly received, you can follow up on those later.