“[weekly review worksheet] Was initially successful, but eventually became useless, and attempts to save it failed. There was also a meta failure where we didn’t notice how badly it was failing, and so continued spending time on it.”
Can you say more about how it became useless?
My experience (both personally and based on others’ experience using Complice) is that weekly reviews tend to be clearly really valuable whenever I do them, but I still often feel like they’re not important/urgent, and so I tend to put them off. So then the meta failure is that without doing my weekly review, I don’t get into the reflective headspace where I remember how valuable weekly reviews are. It sounds like you guys experienced something different, but I’m not sure.
Btw: the default weekly review questions in Complice are:
• What went really well this week? What did you do that worked?
• What got in the way? What didn’t work?
• Based on that, do you want to be approaching things differently?
• What are your priorities for the upcoming week?
These seem to work quite well, although I’m sure they could be further optimized!
Sebastian Marshall recommends these questions, which I also like:
• “What’s really going on?”
• “So what do I do about it?”
• “What matters, what doesn’t?”
Our first iteration had questions extremely similar to yours, actually. I believe we had rewordings of each of those questions.
I don’t have a good idea of what made them work, because I only started participating after they’d started to decline. There was a lot of socializing that dragged down discussions, but even when we limited socializing I didn’t notice any improvement.
Personally I’m skeptical of the entire endeavour. People claimed lots of positive effects, but then as soon as I tried to measure them they disappeared. I kind of suspect that people notice lots of possibilities during weekly review, and feel like they’ve accomplished something, but then don’t actually follow through.
However I think it’s pretty plausible that there exists a useful weekly review structure, so I plan to continue testing them.
I’m not sure if you have any data on your weekly reviews (maybe how often you change a behavior as a result?) but I’d be very interested.
I don’t have statistical data on it, but in my experience doing weekly reviews causes me to choose new priorities for the week that I wouldn’t have chosen otherwise, and, to the extent that those priorities are actually better, I then follow through on them.
One of the advantages of doing weekly reviews as part of Complice is that the review system is integrated with a system for intentionally doing things each day, so I suspect it means that any possibilities noticed are more likely to be followed through on.
The integration isn’t as good as it could be though, and we have some sketches of a UI that’ll make it better. That’ll be added sometime this year unless my priorities shift.