(E.g., the trimean will have a better breakdown point but be less efficient than the mean; a worse breakdown point but more efficient than the median.)
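(For readers who haven't met it: the trimean of a sample is (Q1 + 2*median + Q3)/4. Here's a small sketch of mine, not part of the original thread, illustrating the breakdown-point half of that claim: a single wild outlier drags the mean a long way but barely moves the trimean or the median.)

```python
import numpy as np

# Tukey's trimean: (Q1 + 2*median + Q3) / 4.
def trimean(x):
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    return (q1 + 2 * q2 + q3) / 4

rng = np.random.default_rng(0)
x = rng.normal(size=100)
x_bad = x.copy()
x_bad[0] = 1e6  # contaminate a single observation

print(np.mean(x), np.mean(x_bad))      # the mean jumps by about 10,000
print(trimean(x), trimean(x_bad))      # the trimean barely changes
print(np.median(x), np.median(x_bad))  # the median barely changes
```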
What does “efficient” mean, in this context? Time to calculate would be my first guess, but the median should be faster to calculate than the trimean.
See here.
[EDITED to add:] Sorry, that’s a bit rude; I should also give a brief explanation here.
Any estimator will be noisy. All else being equal, you would prefer one with less noise. There is a thing called the Cramér-Rao inequality that gives a lower bound on how noisy an estimator can be, as measured by its variance. (But see the note below.)
The efficiency of an estimator is the ratio of the Cramér-Rao bound to the estimator's actual variance. An estimator whose efficiency is 1 has as little variance as any estimator can have. (Such estimators need not exist.) Noisier estimators have lower efficiency.
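(For concreteness, since the thread doesn't spell it out: the standard statement, for n i.i.d. observations from a density f(x; θ) and an unbiased estimator T of θ, is

$$\operatorname{Var}(T) \;\ge\; \frac{1}{n\,I(\theta)}, \qquad I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{2}\right], \qquad e(T) = \frac{1/(n\,I(\theta))}{\operatorname{Var}(T)},$$

where I(θ) is the Fisher information of a single observation and e(T) is the efficiency.)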
Efficiency depends on the underlying distribution. I shouldn't really have said flatly that the mean will be more efficient than the median: if the underlying distribution is thin-tailed enough then it will be; e.g., for large samples of normally distributed data the mean has efficiency 1 while the median has efficiency of about 0.64 (more precisely, 2/π). But if the actual distribution is fat-tailed, the median may be the more efficient estimator of the two.
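(A quick simulation of that claim, my own sketch rather than anything from the comment: estimate the relative efficiency of the median with respect to the mean, Var(sample mean) / Var(sample median), for thin-tailed normal data and fat-tailed Laplace data.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 20_000

for name, x in [("normal", rng.normal(size=(reps, n))),
                ("laplace", rng.laplace(size=(reps, n)))]:
    means = x.mean(axis=1)
    medians = np.median(x, axis=1)
    # Relative efficiency of the median w.r.t. the mean.
    print(name, means.var() / medians.var())

# Roughly: "normal 0.64" (i.e. 2/pi) and "laplace 2.0", so for the
# fat-tailed Laplace the median is the more efficient of the two.
```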
(Note: You might notice that I said something obviously false above. It's trivial to make a completely un-noisy, zero-variance estimator of any parameter: just always estimate zero. But this will be a biased estimator; i.e., its expectation will not equal the underlying value it's meant to be estimating. The Cramér-Rao inequality only applies to unbiased estimators. In some cases, for some applications, the "best" estimator may actually be a biased but less noisy one. For instance, suppose you have some samples of a normally distributed random variable and you want to estimate its variance. The "obvious" thing to do is to compute (1/n) sum (x - xbar)^2. That gives you a biased estimator but, famously, you can get rid of the bias by computing (1/(n-1)) sum (x - xbar)^2 instead. But if you want to minimize your mean squared error, i.e., minimize the expectation of (estimated variance - actual variance)^2, then what you want is neither of those but (1/(n+1)) sum (x - xbar)^2.)
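(And a quick numerical check of that last point, again my own sketch: for normal data the 1/(n-1) version is unbiased, but the 1/(n+1) version has the smallest mean squared error.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, true_var = 10, 200_000, 4.0

x = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)  # sum (x - xbar)^2

for label, denom in [("1/n", n), ("1/(n-1)", n - 1), ("1/(n+1)", n + 1)]:
    est = ss / denom
    print(label, "bias:", round(est.mean() - true_var, 3),
          "MSE:", round(((est - true_var) ** 2).mean(), 3))
```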
[EDITED again to add:] Er, I should add that everything above addresses the situation where you have some samples of a random variable and want to estimate its location parameter. Stuart is considering something a bit different: you know the random variable itself, and you want a representative location parameter (which may or may not be the mean) so that you can compare different random variables by computing that parameter for each. So the above is of doubtful relevance to the OP.