So you have a batch of things that need to pass muster. The failure mode presented above is that you’ll get bored with just saying ‘pass, pass, pass...’
The corrective proposed is to ask for the worst item, whether or not it passes, in addition to asking for rejects.
It would be something to think about while looking at a bunch of good ones, and would keep one in practice… if one tries. If you just fake it and no one can tell because they’re all passes anyway, then it doesn’t work.
It may also be useful to identify the best item. The difference between best and worst is probably a useful measure of process consistency, and asking for it also checks that the tests are general enough to detect good as well as bad.
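A minimal sketch of that idea, assuming each item can be reduced to a numeric quality score (the `score` function and `threshold` here are hypothetical stand-ins, not anything from the comments above):

```python
# Report worst, best, rejects, and the best-worst spread for a batch.
def batch_report(items, score, threshold):
    ranked = sorted(items, key=score)
    worst, best = ranked[0], ranked[-1]
    rejects = [item for item in ranked if score(item) < threshold]
    # The best-worst spread is a crude range statistic: if it widens
    # over successive batches, the process may be drifting even while
    # every individual item still passes.
    spread = score(best) - score(worst)
    return {"worst": worst, "best": best, "rejects": rejects, "spread": spread}

# Plain numbers standing in for measured items:
print(batch_report([9.8, 9.9, 10.1, 10.0, 9.7], score=lambda x: x, threshold=9.5))
# worst 9.7, best 10.1, no rejects, spread roughly 0.4
```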
If your process is good enough that this is a problem, then ‘so good you can’t tell it’s not perfect’ could well be the most common case. In any case, it’s most important to concentrate the expertise around the border between OK and not-OK.
Interesting. May be applicable to some of the situations we’re studying...
Just watch out that you don’t end up picking something that isn’t actually the worst and still think you’re doing a good job.
That looks like an ideal case for automation...
And then you miss the one in ten thousand that was no good.
If you are using humans to mass-test for a failure rate of 1⁄10,000, you are doing something wrong. Ship ten thousand units, let the end user test them at the time of use/installation/storage, and ship replacement parts to the user who got a defective part. That way no one human gets bored with testing that part (though they might get bored with inspecting good parts in general).
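A back-of-envelope sketch of why that trade can make sense. All the costs here are made-up illustrative numbers, not anything from the comment above:

```python
# Compare inspecting every unit vs. shipping and replacing the
# rare defective one, at a 1-in-10,000 defect rate.
units = 10_000
defect_rate = 1 / 10_000
inspection_cost_per_unit = 1.00   # hypothetical
replacement_cost = 50.00          # hypothetical

inspect_all = units * inspection_cost_per_unit
ship_and_replace = units * defect_rate * replacement_cost

print(f"inspect everything: ${inspect_all:,.2f}")      # $10,000.00
print(f"ship and replace:   ${ship_and_replace:,.2f}") # $50.00
```

The arithmetic only favors ship-and-replace when a field failure costs little, which is exactly the objection the next comment raises.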
Sounds great if failure is acceptable. I don’t want my parachute manufacturer taking on that method, though.
Don’t you demand that your parachute packer inspect it when he packs it? Especially given that more than zero parachutes will be damaged after manufacture but before first use.