Aleatoric uncertainty, aka statistical uncertainty, covers unknowns that differ each time we run the same experiment. For example, in simulating the take-off of an airplane, even if we could exactly control the wind speeds along the runway, letting 10 planes of the same make start would still yield differing trajectories due to fabrication differences. Similarly, if all we knew were the average wind speed, letting the same plane start 10 times would still yield different trajectories, because we do not know the exact wind speed at every point of the runway, only its average. Aleatoric uncertainties are therefore something an experimenter cannot do anything about: they exist, and they cannot be suppressed by more accurate measurements.

Epistemic uncertainty, aka systematic uncertainty, is due to things we could in principle know but don’t in practice. This may be because we have not measured a quantity sufficiently accurately, because our model neglects certain effects, or because particular data are deliberately hidden.
Very definitely, it’s easy to forget the level of knowledge you need to work at for this stuff. For example, I recently realised that in a room of competitive debaters (college-educated, well-read people) no one knew what I meant by “epistemic uncertainty”. And very few philosophers know anything about QM or neurology…
TL;DR Illusion of transparency is a bitch.
Wait, what do you mean by “epistemic uncertainty”? The top Google results for the phrase contrast it with “aleatoric uncertainty” which is so esoteric that it’s not even in LW’s vocabulary (zero results for “aleatoric” on LW search).
“Epistemic uncertainty” sounds like a fancy way of saying “ignorance”. “Aleatoric” I think means “stochastic” (the cognate of that word in Italian is not terribly uncommon).
Wikipedia says:
http://en.wikipedia.org/wiki/Uncertainty_quantification
Could we say that aleatoric uncertainty would be akin to not knowing whether a coin will land heads or tails (but we know the odds are 1:1) and epistemic uncertainty would be akin to not knowing the odds of the coin at all?
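That framing can be made concrete with a toy simulation (the Beta prior and all the specific numbers below are my own illustration, not anything from the thread): for a coin of known bias, more flips never reduce the uncertainty about the next flip, while uncertainty about an unknown bias shrinks as flips accumulate.

```python
import random

random.seed(0)

# Aleatoric case: the coin's bias is KNOWN to be 0.5.
# No amount of flipping reduces the uncertainty about the next flip.
known_p = 0.5
next_flip_variance = known_p * (1 - known_p)  # stays 0.25 forever

# Epistemic case: the bias is UNKNOWN. Start from a uniform Beta(1, 1)
# prior over p and update it with observed flips; the posterior over p
# narrows as evidence accumulates.
true_p = 0.7          # hidden from the "agent"; illustrative value
heads, tails = 1, 1   # Beta(1, 1) pseudo-counts

for _ in range(1000):
    if random.random() < true_p:
        heads += 1
    else:
        tails += 1

n = heads + tails
posterior_mean = heads / n
# Variance of a Beta(a, b) distribution: a*b / ((a+b)^2 * (a+b+1))
posterior_var = heads * tails / (n**2 * (n + 1))

print(f"aleatoric per-flip variance: {next_flip_variance:.3f}")
print(f"posterior mean of p:         {posterior_mean:.3f}")
print(f"posterior variance of p:     {posterior_var:.6f}")
# The posterior variance ends up far below the prior's 1/12; the
# per-flip variance never moves. That asymmetry is the whole distinction.
```

On the Bayesian view discussed below, of course, the “aleatoric” half of this sketch is just epistemic uncertainty we have chosen to stop reducing.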
Aleatoric uncertainty is basically seeing randomness as a property of the universe, rather than a property of minds. Unless you verge into quantum territory, basically all randomness is actually epistemic uncertainty, and even if you verge into quantum territory, you can view quantum randomness as epistemic uncertainty.
Bayesians are comfortable viewing all uncertainties as epistemic. Non-Bayesians aren’t, and all of the people I know who do professional decision-making under uncertainty dread someone even mentioning aleatoric uncertainty because it’s a dead giveaway that the person mentioning it isn’t Bayesian, and thus a long, unproductive philosophical discussion may be necessary before they can get anywhere.
The Wikipedia definition makes it sound more like aleatoric uncertainty is not knowing whether it will land heads or tails (because it will do something different each time), and epistemic uncertainty is not having a camera accurate enough to see whether it has landed heads or tails.
I realize that LW collectively doesn’t like unreferenced definitions, but in this case maybe it’s OK… a friend of mine whose PhD is in decision theory explained aleatory uncertainty to me as the uncertainty of chance with known parameters: if you roll a normal six-sided die, you know it’s going to come up with a value in the range 1-6, but you don’t know what it will be. There’s no chance it will come up 7. Epistemic uncertainty is the uncertainty of chance with unknown parameters: there may not be enough data to know the bounds of an event, or it may have such large and random bounds that trying to place them is not very meaningful.
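The die version of the distinction can be sketched in a few lines as well (the hidden 20-sided die and the max-of-rolls estimator are purely my own illustration): with known parameters the next roll is random but a 7 is provably impossible, whereas with unknown parameters even the support of the distribution has to be estimated from data, and that estimate tightens as rolls accumulate.

```python
import random

random.seed(1)

# Known parameters (aleatoric): a fair six-sided die. The next roll is
# random, but the parameters are fully known, so P(roll == 7) is exactly 0.
fair_die = range(1, 7)
prob_seven = sum(1 for face in fair_die if face == 7) / len(fair_die)

# Unknown parameters (epistemic): a die with an unknown number of sides.
# The best the observer can do is bound the support from observed rolls;
# the bound sharpens with more data.
hidden_sides = 20                      # unknown to the observer
rolls = [random.randint(1, hidden_sides) for _ in range(5)]
estimate_after_5 = max(rolls)          # crude lower bound on the true max

rolls += [random.randint(1, hidden_sides) for _ in range(500)]
estimate_after_505 = max(rolls)        # can only move up toward the truth

print(prob_seven, estimate_after_5, estimate_after_505)
```

The max-of-rolls estimator is deliberately crude; it only ever underestimates, which is exactly the “not enough data to know the bounds of an event” situation described above.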
You could probably mash any two buzzwords together, though. How about “quantum rationality”?
I find myself in the embarrassing position of not knowing what that term refers to...
EDIT: A few upvotes but no definitions. In case it wasn’t clear: can someone tell me what “epistemic uncertainty” means, if it is a thing?
Isn’t it simply the extent to which one is not certain about some (piece of) knowledge? At least that was my intuition when I first read that.
After googling, the closest definition I could find was on Wikipedia under systematic uncertainty—in contrast to statistical uncertainty (aleatoric uncertainty), apparently.
Welcome to the club!
Very few philosophers need to know anything about QM or neurology.
QM potentially answers cool philosophical questions like “does cut & paste transportation preserve identity?” (it looks like it does, since our universe doesn’t seem to encode any identity at all).
Neurology will most probably tell us nearly everything we will ever know about how humans actually work. I expect many questions formerly considered “philosophical” will be answered by this field.
Therefore, I think nearly all philosophers need to know some QM and neurology.
The question is whether knowing a little QM and neurology is more or less harmful than knowing none at all.
Nothing can protect you from people who fail to apply their knowledge well. Partial knowledge at least makes them aware that there is more to learn.
I agree with your first statement.
However, as for your second statement, I would really like an example, because I am not entirely sure what you mean. (I am sincerely requesting examples.)
Unfortunately, I strongly disagree with your third statement. The time it would take to learn QM with sufficient rigor to be interesting could be better spent reading the findings of experimental psychology or learning more mathematics. For the majority of philosophers, their subject matter simply does not overlap with QM in such a way that knowing rigorous QM would help them.
Further, I agree with what paper-machine seemed to imply in their post. A little QM can make a philosopher stupid.
Of course, in certain subjects, knowing QM or neurology should be mandatory.
A few quick examples:
A lot of philosophy of mind assumes there is a singular unified self, whereas neurology might lead you to think of the mind as a group of systems, and this could resolve some dilemmas.
Lots of traditional moral theories assume people make choices in ways that aren’t backed by observation of their brains.
Your willingness to accept materialist explanations for the mind probably increases exponentially the more you know about the mechanics of the brain. (Are there any dualist neuroscientists?)
A lot of philosophy uses ‘armchair’ reflection and introspection to get foundational intuitions and make judgements. Knowing the hardware you’re running that on is probably helpful. (E.g. showing how easy it is to trigger people’s intuitions one way or the other changed the debate about Gettier cases massively.)
I see and concede. I had been thinking at an excessively low level.
Do you mean they weren’t familiar with the phrase “epistemic uncertainty” or they didn’t know the concept?
The phrase. In context the argument I was making wasn’t that complicated (uncertainty of moral status of fetus), but the inferential gap was in not realising that the phrasing I found natural was fairly incomprehensible.
If you need to do TL;DR for a single paragraph...
Dunno. Feels like there’s some kind of joke opportunity here for inferential distance but I can’t quite nail it.
The TL;DR was mainly for the purposes of humour in this instance rather than actual ease of reading. It also seems a generally useful thing to be reminded of.