I came up with an algorithm that compromises between them.
I am not sure of the point. If you can “sample … from your probability distribution” then you fully know your probability distribution including all of its statistics—mean, median, etc. And then you proceed to generate some sample estimates which just add noise but, as far as I can see, do nothing else useful.
If you want something more robust than the plain old mean, check out M-estimators, which are quite flexible.
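For concreteness, a minimal sketch of one common M-estimator, the Huber estimate of location, computed by iteratively reweighted averaging; the tuning constant, tolerance, and test data below are illustrative assumptions, and a real implementation would usually rescale residuals by a robust spread estimate such as the MAD:

```python
import numpy as np

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    # Huber M-estimate of location via iteratively reweighted means.
    # Points within c of the current estimate get full weight; points
    # further out get weight c/|residual|, which caps their influence.
    # (c is applied on the raw data scale here for simplicity.)
    mu = np.median(x)  # robust starting point
    for _ in range(max_iter):
        r = np.abs(x - mu)
        w = np.where(r <= c, 1.0, c / np.maximum(r, 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

# Skewed, heavy-tailed test data (purely illustrative).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 990), rng.normal(50, 5, 10)])
print("mean:", x.mean(), "median:", np.median(x), "huber:", huber_location(x))
```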
If you can “sample … from your probability distribution” then you fully know your probability distribution
That’s not true. (Though it might well be in all practical cases.) In particular, there are good algorithms for sampling from unknown or uncomputable probability distributions. Of course, any method that lets you sample from it lets you sample the parameters as well, but that’s exactly the process the parent comment is suggesting.
A fair point, though I don’t think it makes any difference in this context. And I’m not sure the utility function is amenable to MCMC sampling...
I basically agree. However...
It might be more amenable to MCMC sampling than you think. MCMC basically is a series of operations of the form “make a small change and compare the result to the status quo”, which now that I phrase it that way sounds a lot like human ethical reasoning. (Maybe the real problem with philosophy is that we don’t consider enough hypothetical cases? I kid… mostly...)
In practice, the symmetry constraint isn’t as nasty as it looks. For example, you can do MH (Metropolis-Hastings) to sample a random node from a graph, knowing only local topology (you need some connectivity constraints so that a reasonable walk length gives good diffusion properties). Basically, I posit that the hard part is coming up with a sane definition for “nearby possible world” (and that the symmetry constraint and other parts are pretty easy after that).
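A minimal sketch of one standard construction along these lines: a Metropolis-Hastings walk whose stationary distribution is uniform over the graph’s nodes, even though each step only looks at the degrees of the current and proposed nodes. The toy adjacency list, step count, and function name are illustrative assumptions:

```python
import random

def mh_uniform_node(neighbors, start, n_steps=200):
    # Metropolis-Hastings walk whose stationary distribution is uniform
    # over nodes, using only local topology (each node's neighbor list).
    # Proposal: a uniformly chosen neighbor, so q(j|i) = 1/deg(i).
    # The acceptance ratio min(1, deg(i)/deg(j)) corrects the plain
    # random walk's bias toward high-degree nodes.
    current = start
    for _ in range(n_steps):
        candidate = random.choice(neighbors[current])
        accept = min(1.0, len(neighbors[current]) / len(neighbors[candidate]))
        if random.random() < accept:
            current = candidate
    return current

# Toy connected graph as adjacency lists (illustrative only).
graph = {
    "a": ["b", "c", "d"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["a", "c"],
}
samples = [mh_uniform_node(graph, "a") for _ in range(4000)]
print({node: samples.count(node) for node in graph})  # counts come out roughly equal
```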
Maybe the real problem with philosophy is that we don’t consider enough hypothetical cases? I kid… mostly...
In that case we can have wonderful debates about which sub-space to sample our hypotheticals from, and once a bright-eyed and bushy-tailed acolyte breathes out “ALL of it!” we can pontificate about the boundaries of all :-)
P.S. In about a century philosophy will discover the curse of dimensionality and there will be much rending of clothes and gnashing of teeth...
I should have explained it better. You take n samples, and calculate the mean of those samples. You do that a bunch of times, and build up a new distribution of those sample means. Then you take the median of that.
This gives a tradeoff between mean and median. As n goes to infinity, you just get the mean. As n goes to 1, you just get the median. Values in between are a compromise. n = 100 will roughly ignore things that have less than a 1% chance of happening (as opposed to less than a 50% chance of happening, like the standard median).
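A minimal sketch of that procedure; the lognormal test distribution, the batch count, and the helper names are illustrative assumptions rather than anything from the thread:

```python
import numpy as np

def median_of_batch_means(sample_fn, n, n_batches=1001, seed=None):
    # The compromise described above: draw n samples, average them,
    # repeat n_batches times, then take the median of those batch means.
    # n = 1 recovers (approximately) the median; large n approaches the mean.
    rng = np.random.default_rng(seed)
    batch_means = [sample_fn(n, rng).mean() for _ in range(n_batches)]
    return np.median(batch_means)

# Illustrative heavy-tailed distribution where mean and median differ a lot:
# lognormal with sigma = 2 has median 1 but mean e^2 ~= 7.39.
def draw(n, rng):
    return rng.lognormal(mean=0.0, sigma=2.0, size=n)

for n in (1, 10, 100, 10_000):
    print(n, median_of_batch_means(draw, n, seed=0))
# The estimate climbs from near the median toward the mean as n grows.
```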
There is a variety of ways to get a tradeoff between the mean and the median (or, more generally, between an efficient but not robust estimator and a robust but not efficient estimator). The real question is how do you decide what a good tradeoff is.
Basically, if your mean and your median are different, your distribution is asymmetric. If you want a single-point summary of the entire distribution, you need to decide how to deal with that asymmetry. Until you specify some criteria under which you’ll be optimizing your single-point summary, you can’t really talk about what’s better and what’s worse.
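To make “specify the criteria” concrete: the mean minimizes average squared error and the median minimizes average absolute error, so on a skewed distribution the “better” single-point summary depends entirely on which loss you pick. A small numerical sketch, with the lognormal sample and the candidate grid as illustrative assumptions:

```python
import numpy as np

# On a skewed sample, the mean minimizes average squared error and the
# median minimizes average absolute error: the "best" single number
# depends on the loss you chose.
rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=50_000)

candidates = np.linspace(0.1, 5.0, 500)
squared_loss = [np.mean((x - c) ** 2) for c in candidates]
absolute_loss = [np.mean(np.abs(x - c)) for c in candidates]

print("argmin of squared loss:", candidates[np.argmin(squared_loss)],
      "sample mean:", x.mean())
print("argmin of absolute loss:", candidates[np.argmin(absolute_loss)],
      "sample median:", np.median(x))
```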
This is just one of many possible algorithms which trade off between median and mean. Unfortunately there is no objective way to determine which one is best (or the setting of the hyperparameter).
The criterion we are optimizing is just “how closely does it match the behavior we actually want.”
EDIT: Stuart Armstrong’s idea is much better: http://lesswrong.com/r/discussion/lw/mqk/mean_of_quantiles/
And what is “the behavior we actually want”?