I think that’s Sensitivity vs Specificity
They are slightly different, but in practical terms they describe the same error. Sensitivity and specificity are properties of a test, while Type I and II errors are properties of a system, but both are basically saying, “Our test is not perfectly accurate, so if we want to catch more people with a disease we have to misdiagnose more people.”
To illustrate the distinction, consider a test which is 90% sensitive and 90% specific in a population of 100 where a disease has 50% prevalence. This means 50 people have the disease, of whom the test will identify 45 as having it (90% sensitive). The other 50 people are free of the disease, of whom the test will correctly clear 45 (90% specific). So if you are diagnosed, the probability of the diagnosis being a Type I error (a false positive) is 5/50 = 10%, and the same logic applies to a Type II error if you are given the all-clear. You derive this by dividing the number of people who were incorrectly told they have the disease (5) by the total number of people who were told they have the disease, rightly or wrongly (45 + 5 = 50).
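To make the arithmetic explicit, here is a minimal Python sketch (the confusion_counts helper is my own illustration, not a standard library function) that works out the counts and the fraction of positive results that are wrong:

```python
def confusion_counts(population, prevalence, sensitivity, specificity):
    """Return (true_pos, false_neg, true_neg, false_pos) for a screening test."""
    diseased = population * prevalence
    healthy = population - diseased
    true_pos = diseased * sensitivity   # sick and correctly diagnosed
    false_neg = diseased - true_pos     # sick but missed
    true_neg = healthy * specificity    # healthy and correctly cleared
    false_pos = healthy - true_neg      # healthy but misdiagnosed
    return true_pos, false_neg, true_neg, false_pos

tp, fn, tn, fp = confusion_counts(100, 0.50, 0.90, 0.90)
print(fp / (tp + fp))   # 5 / 50 = 0.10 -> 10% of positive results are wrong
print(fn / (tn + fn))   # 5 / 50 = 0.10 -> 10% of all-clears are wrong
```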
But if the disease prevalence changes due to demographic pressure to 10%, then 10 people have the disease, of whom 9 are diagnosed, and 90 people are disease-free, of whom 81 are given the all-clear and 9 are misdiagnosed. This means the probabilities of the two ‘Type’ errors change dramatically: now 9/18 = 50% for a Type I error and 1/82 ≈ 1.2% for a Type II error. But the sensitivity and specificity of the test are completely unchanged.
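Running the same sketch with 10% prevalence (reusing the confusion_counts helper above) reproduces those numbers and confirms that the test’s own properties have not moved:

```python
tp, fn, tn, fp = confusion_counts(100, 0.10, 0.90, 0.90)
print(fp / (tp + fp))   # 9 / 18 = 0.50 -> half of positive results are wrong
print(fn / (tn + fn))   # 1 / 82 ≈ 0.012 -> ~1.2% of all-clears are wrong
print(tp / (tp + fn))   # 0.90 -> sensitivity unchanged
print(tn / (tn + fp))   # 0.90 -> specificity unchanged
```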