critical thinking skills; knowledge of the past and other cultures; an ability to work with and interpret numbers and statistics; access to the insights of great writers and artists; a willingness to experiment, to open up to change; and the ability to navigate ambiguity.
Some of these things are not like the others...
Which are the odd ones out?
To a first approximation:
{ critical thinking skills; an ability to work with and interpret numbers and statistics; a willingness to experiment, to open up to change }
vs.
{ knowledge of the past and other cultures; access to the insights of great writers and artists }
Then you’ve got this one by itself because what the heck does it even mean:
{ the ability to navigate ambiguity }
{ the ability to navigate ambiguity }
I think this is one of the most important skills you get from the humanities. I have a friend who’s a history professor. He’s very used to hearing 20 different accounts of the same event told by different people, most of whom are self-serving if not outright lying, and working out what must actually have gone on, which looks like a strength to me.
He has a skill I’d like to have, but don’t, and he got it from studying history (and playing academic politics).
How did he know that his judgment of what actually had gone on was correct? How did he verify his conclusion?
Statistics is precisely that, but with numbers.
That only works if you have numbers.
Luckily, you can make numbers.
“Making numbers” is unlikely to produce useful numbers.
Not necessarily.
Relevant Slate Star Codex post: “If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics”
“Making” is not “making up”.
When you flip a coin a bunch of times and decide that it’s fair, you’ve made numbers. There are no numbers in the coin itself, but you reasonably can state the probability of the coin coming up heads and even state your certainty in this estimate. These are numbers you made.
As a more general observation, in the Bayesian approach the prior represents information available to you before data arrives. The prior rarely starts as a number, but you must make it a number before you can proceed further.
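For concreteness, here is a minimal sketch of that coin calculation using a standard Beta-Binomial model (the flip counts and the uniform prior are made-up choices for illustration, not anything from the thread):

```python
from math import sqrt

# Uniform Beta(1, 1) prior over the coin's heads-probability:
# before any flips, every bias is treated as equally plausible.
alpha, beta = 1.0, 1.0

# Hypothetical data: 60 heads in 100 flips.
heads, tails = 60, 40

# Conjugate update: the posterior is Beta(alpha + heads, beta + tails).
alpha += heads
beta += tails

# Posterior mean and standard deviation: the "numbers you made",
# an estimate of the bias plus a statement of certainty about it.
mean = alpha / (alpha + beta)
sd = sqrt(alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1)))
print(round(mean, 3), round(sd, 3))  # 0.598 0.048
```

The conjugate Beta prior keeps the update to one line of arithmetic; a prior that isn't uniform would simply start alpha and beta at different values.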
No, those are numbers you found. The inherent tendency to produce numbers when tested in that way (“fairness/unfairness”) was already a property of the coin; you found what numbers it produced, and used that information to derive useful information.
Making numbers, on the other hand, is almost always making numbers up. Sometimes processes where you make numbers up have useful side-effects:

{ Of course, the point of a subjective Bayesian calculation wasn’t that, after you made up a bunch of numbers, multiplying them out would give you an exactly right answer. The real point was that the process of making up numbers would force you to tally all the relevant facts and weigh all the relative probabilities. }

but that doesn’t mean that making numbers is at all useful.
Basically, I think it’s important to distinguish between finding numbers which encode information about the world, and making numbers from information you already have. Making numbers may be a necessary prerequisite for other useful processes, but it is not in itself useful, since it requires you to already have the information.
I don’t think this is a useful distinction, but if you insist...
You said: “That only works if you have numbers.” Then the answer is: “Luckily, you can find numbers.”
Finding relevant numbers is significantly difficult in most circumstances.
That phrase is so general as to be pretty meaningless.
I do not subscribe to the notion that anything not expressible in math is worthless, but “in most circumstances” the inability to find any numbers is a strong indication that you don’t understand the issue well.
Yes, that’s the whole point. There aren’t always numbers you can find; even when there are, finding them is nontrivial; and you often have to deal with the ambiguous situation or problem regardless.
What you said here is a vast oversimplification; if you have gotten to the point where you can find relevant numbers, you have already successfully navigated most of the ambiguity.
Is there still an inferential gap here? I thought I made my point clear about three comments ago, but this is clearly not as obvious a distinction as I expected it to be.
And that’s where you are being misled by your insistence on “finding” numbers instead of “making” them.
1. It’s pretty easy to construct estimates. The problem is that without good data these estimates will be too wide to the point of uselessness.
2. But you can think, and find some data, and clean some existing data, and maybe narrow these estimates down a bit.
3. Go back to 1. and repeat until you run out of data or the estimate is narrow enough to fit its purpose.
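That loop can be sketched in a few lines; the measurement process, its true mean, the batch size, and the normal-approximation interval are all hypothetical stand-ins:

```python
import random

random.seed(0)

# A made-up data source: noisy measurements of an unknown quantity.
def gather_data(n):
    return [random.gauss(3.7, 2.0) for _ in range(n)]

samples, widths = [], []
for round_no in range(1, 5):
    samples += gather_data(50)              # find some more data
    n = len(samples)
    mean = sum(samples) / n                 # refine the estimate
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    half_width = 1.96 * (var / n) ** 0.5    # rough 95% interval half-width
    widths.append(half_width)
    print(f"round {round_no}: n={n}, estimate {mean:.2f} +/- {half_width:.2f}")
```

Each pass adds data and shrinks the interval roughly as 1/sqrt(n); the loop stops when the data runs out, whether or not the estimate is narrow enough for its purpose.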
Ambiguity isn’t some magical concept limited to the humanities. The whole of statistics is dedicated to dealing with ambiguity. In fact, my standard definition of statistics is “a toolbox of methods to deal with uncertainty”.
I understand your point, I just think it’s mistaken.
I consider all the things I’ve said to be my best arguments for why you’re wrong, so there’s clearly something wrong here. But I’ve run out of novel arguments and can’t figure out where the disconnect is.
What is that statement of mine to which you are assigning the not-true value?
You seem to think that it is generally easy to turn arbitrary ambiguities into numbers in a way amenable to using statistics to resolve them. I find that to be obviously, blatantly false.
Where you see things like this:

{ It’s pretty easy to construct estimates. The problem is that without good data these estimates will be too wide to the point of uselessness. But you can think, and find some data, and clean some existing data, and maybe narrow these estimates down a bit. Go back to 1. and repeat until you run out of data or the estimate is narrow enough to fit its purpose. }

I see something more like

{ In order to get an estimate narrow enough to fit the purpose: gather data, make a bad estimate, gather more data, refine the estimate, gather still more data, refine further, repeat until you can’t find any more data, and then hope you got something useful out of it. }

Where the difficult part is “gather data”. If you can gather data that is relevant, then statistics are useful. But often you can’t, and so they aren’t. I outlined the exact same process as you; I’m just significantly more pessimistic about how often and how well it works.
{ You seem to think that it is generally easy to turn arbitrary ambiguities into numbers }

Yes, I do.

{ in a way amenable to using statistics to resolve them. }

No, I do not. I said nothing about “resolving” things.
When I say “numbers” in the context of statistics, I really mean probability distributions, often uncertain probability distributions. For example, the probability of anything lies somewhere between zero and one—see, we don’t have any information, but we already have numbers.
You’re likely thinking that when I am turning ambiguities into numbers, I turn them into nice hard scalars, like “the probability of X is 0.7”. No, I don’t. I turn them into wide probability distributions, often without any claims about the shape of these distributions. That is still firmly within the purview of statistics.
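One way to make that concrete (my example, not anything claimed in the thread): for observations bounded in [0, 1], Hoeffding’s inequality gives a confidence interval around the sample mean with no assumption at all about the distribution’s shape:

```python
from math import log, sqrt

# Distribution-free 95% half-width for the mean of n i.i.d.
# observations bounded in [0, 1], via Hoeffding's inequality:
# half_width = sqrt(log(2 / delta) / (2 * n)) with delta = 0.05.
def hoeffding_half_width(n, delta=0.05):
    return sqrt(log(2 / delta) / (2 * n))

for n in (10, 100, 1000):
    print(n, round(hoeffding_half_width(n), 3))
# prints:
# 10 0.429
# 100 0.136
# 1000 0.043
```

With little data the interval is wide to the point of near-uselessness; more data narrows it, but only as 1/sqrt(n).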
{ Where the difficult part is gather data. If you can gather data that is relevant, then statistics are useful. }

If you have no data, nothing is useful. Remember, the original context was how the humanities teach us to deal with ambiguity. But if you have no data, the humanities won’t help; and if you do, you can use numbers.

I’m not saying that everything should be converted to numbers. My point is that there are disciplines—specifically statistics—that are designed to deal with uncertainty and, arguably, do it better than the handwaving common in the humanities.
Your confidence in your ability to do statistics to everything is clearly unassailable, and I have no desire to be strawmanned further.
This is part of critical thinking: taking a vaguely defined or ambiguous problem, parsing out what it means, and figuring out an approach.
I’m rather curious:
If you take people across a big swath of humanities, and ask them about subjects where there is a substantial amount of debate and not a lot of decisive evidence—say, theories of a historical Jesus—how many of those people are going to describe one of those theories as more likely than not?
Like, if you have dozens of theories that you’ve studied and examined closely, are we going to see people assigning >50% to their favored theory? Or will people be a lot more conservative with their confidence?
BTW, the probability that the Jesus character in the four Gospels was based on a real person would be a great question to ask in the next LW census/survey.
Was Bram Stoker’s Dracula “based on” a real person? Possibly, given an extremely weak interpretation of “based on”.
What does it take for a fictional character to be based on a real person? Does it suffice to have a similar name, live in a similar place at a similar time? Do they have to perform similar actions as well? This has to be made clear before the question can be meaningfully answered.
That’s an extraordinarily weak “based on”. The Dracula/Tepes connection in Bram Stoker’s work doesn’t go much beyond Stoker borrowing what he thought was a cool name with exotic, ominous associations (and that “exotic” is important; Eastern Europe in Stoker’s time was seen as capital-F Foreign to Brits, which comes through quite clearly in the book). Later authors played on it a bit more.
The equivalent here would be saying that there was probably someone named Yeshua in the Galilee area around 30 AD.
Was Yeshua that uncommon of a name? You’re setting the bar pretty low here. (That being said, my understanding is that there’s a strong scholarly consensus that there was a Jew named Yeshua who lived in Galilee, founded a cult which later became Christianity, and was crucified by the Romans controlling the area. So these picky ambiguities about “based on” aren’t really relevant anyway)
Not that uncommon, no. I’m exaggerating for effect, but the point should still have carried if I’d used “Yeshua ben Yosef” or something even more specific: if you can’t predict anything about the character from the name, the character isn’t meaningfully based on the name’s original bearer.
{ my understanding is that there’s a strong scholarly consensus that there was a Jew named Yeshua who lived in Galilee, founded a cult which later became Christianity, and was crucified by the Romans controlling the area. }

There is also a strong scholarly consensus that anthropogenic global warming is occurring, and yet plenty of LW census respondents put in numbers not very close to 100%.
That is true, and intentional. It is far from obvious that the connection between the fictional Jesus and the (hypothetical?) historical one is any less tenuous than that.(1) The comparison also underscores the pointlessness of the debate: just as evidence for Vlad Tepes’s existence is at best extremely weak evidence for the existence of vampires, so too is evidence for a historical Jesus at best extremely weak evidence for the truth of Christianity.
(1) Keep in mind that there are no contemporary sources that refer to him, let alone to anything he did.
I predict you’d get a minority of people using it as a proxy for atheism, another minority favoring it simply because it’s an intensely contrarian position, and the majority choosing whatever the closest match to “I don’t know” on the survey is.
I seem to remember reading that virtually all serious scholars agree that there was a historical Jesus, and that the opposite claim is considered a fringe idea along the lines of homeopathy, so soundly has it been debunked. My memory might be exaggerating, but I think the gist is correct.
{ If you take people across a big swath of humanities, and ask them about subjects where there is a substantial amount of debate and not a lot of decisive evidence—say, theories of a historical Jesus }

Could you have picked an example where one side isn’t composed entirely of crackpots?
Which side are you claiming to be crackpots?
Seriously, I can’t see how anyone could claim that Jesus was ahistorical who isn’t some combination of doing reverse-stupidity on Christianity or taking an absurd contrarian position for the sake of taking an absurd contrarian position.
Edit: fixed typo.
Am I correct in reading “a historical” as “ahistorical” and not as “a historical figure”?
I would think that believing Jesus didn’t exist would be just as absurd as thinking that all or almost all of the events in the Gospels literally happened. Yet believers in the latter make up a significant number of practicing Biblical scholars. And the majority of Biblical scholars who don’t think the Gospels are almost literally true still have a form of Jesus-worship going on, as they are practicing Christians. It would be hard to think that Jesus both came back from the dead and also didn’t exist, meaning that it would be very hard to remain a Christian while also claiming that Jesus didn’t exist; and most Biblical scholars were Christians before they were scholars.

The field is biased in a non-academic way against one extreme position while giving cover and legitimacy to the opposite extreme position.
Modern-day people who believe there was no real historical preacher, probably named Yeshua or something like that, wandering around Palestine in the first century, and on whom the Gospels are based, are crackpots. Their position is strongly refuted by the available evidence. You don’t have to be a theist or a Christian to accept this. See, for example, pretty much any of the works of Bart Ehrman, particularly “Did Jesus Exist?”
There are legitimate disputes about this historical figure. How educated was he? Was he more Jewish or Greek in terms of philosophy and theology? (That he was racially Jewish is undenied.) Was he a Zealot? Etc. However, that he existed has been very well established.
Depends on your definition of crackpots. I don’t think most Jesus scholars are crackpots, just most likely overly credulous of their favored theories.
What I’m curious about is whether people in these fields, which are starved for really decisive evidence, still feel compelled to name a >50%-confidence theory, or whether they are comfortable with the notion that the most-favored hypothesis indicated by the evidence is still probably wrong, just comparatively much better than the other hypotheses they have considered.
I think he meant “Jesus myth” proponents, who IIRC are … dubious.
Well, hence “historical Jesus”. If I were talking about Jesus mythicists, I would have said that. I ignorantly assume there aren’t that many Jesus mythicist camps fighting it out over specific theories of mythicism...
I’m actually looking forward to Richard Carrier’s book on that, but I do not expect it to decide mythicism.
{ the ability to navigate ambiguity }

Perhaps the ability to work with poorly-defined objectives? Including how to get some idea of what someone wants and use that to ask useful questions to refine it?