I think that means I’m [...] bad at describing/searching for what I’m looking for.
One thing that might help, in terms of understanding what you’re looking for, is—how do you expect to be able to use this “model of ranking”?
It’s not quite clear to me whether you’re looking for something like an algorithm, where somebody could code it up as a computer program and you could feed in sentences and it would spit out scores; something more like a framework or rubric, where the work of understanding and evaluating sentences would still be done by people, but they could use the framework/rubric as a guide for deciding how to rate the sentences; or something else.
Definitely the “framework or rubric” option. More like a rubric than anything else, but with some fun nuance here and there. The work would be done by humans, but all following the same rules.
There are a number of ways I would like to use it in the future, but in the most immediate, practical sense, what I’m working on is a plan to create internet content that answers people’s questions (via Google, Siri, Alexa, etc.) but makes declarative statements about the quality of the information used to create those answers.
So, for example, right now (02/08/20) if somebody asks Google “does the MMR vaccine cause autism?” you get this page:
https://www.google.com/search?q=does+the+MMR+vaccine+cause+autism%3F&oq=does+the+MMR+vaccine+cause+autism%3F&aqs=chrome..69i57j0.9592j1j8&sourceid=chrome&ie=UTF-8
It’s a series of articles from various sites, all pointing you in the direction of the right answer, but ultimately dancing around it and really just inviting you to make up your own mind.
What I would want to do is create content that directly answers even difficult questions, and trades the satisfaction of a direct answer for the intellectual work of making you think about the quality rating we give it.
Creating a series of rules that gets to the heart of how the quality of evidence varies for different types of claims is obviously quite difficult. I think I’ve found a way to do it, but I would really like to know if it’s been tried before and failed for some reason, or if someone has a better or faster way than mine.
I think that my way around the problems mentioned in the above replies is just conceding from the start that my model is not and can never be a perfect representation of the world. However, if it’s done well enough it could bring a lot of clarity to a lot of problems.
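To make the shape of that concrete, here is a rough sketch in Python of what one of these “rated answer” entries might look like. The grade tiers, field names, and example rationale are illustrative placeholders, not the actual rules of my rubric:

```python
from dataclasses import dataclass
from enum import Enum


class EvidenceGrade(Enum):
    """Placeholder quality tiers a human rater might assign under a shared rubric."""
    A = "multiple large, independent, well-replicated studies"
    B = "consistent evidence, but limited replication or scope"
    C = "suggestive evidence only (small studies, expert opinion)"
    D = "anecdote, speculation, or conflicting evidence"


@dataclass
class RatedAnswer:
    """A direct answer to a question, paired with a declared quality rating."""
    question: str
    answer: str            # the direct, declarative answer
    grade: EvidenceGrade   # assigned by a human rater following the shared rules
    rationale: str         # short note on why this grade was given


example = RatedAnswer(
    question="Does the MMR vaccine cause autism?",
    answer="No.",
    grade=EvidenceGrade.A,
    rationale="Large cohort studies and meta-analyses consistently find no link.",
)

print(f"{example.answer} (evidence grade {example.grade.name}: {example.grade.value})")
```

The point isn’t this particular structure; it’s that the grade and the reasons for it travel with the answer instead of being left for the reader to infer.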
Ah! It’s much clearer to me now what you’re looking for.
Two things that come to mind as vaguely similar:
1) The habit of some rationalist bloggers of flagging claims with “epistemic status”. (E.g. here or here)
2) Wikipedia’s guidelines for verifiability (and various other guidelines that they have)
Of course, neither is exactly what you’re talking about, but perhaps they could serve as inspiration.
I’m glad I managed to finally be understandable. Part of the problem is that my enthusiasm for the project leads me to be a bit coy about revealing too much detail on the internet. The other problem is that I’m frequently straying into academic territories I don’t know that well, so I think I tend to use words to describe it that are probably not the correct ones.
Thanks for those. It was interesting to see how some other people have approached the problem, and if nothing else it tells me that other people are trying to take the epistemology of everyday discourse seriously, so hopefully there will be an appetite for my version.
my enthusiasm for the project leads me to be a bit coy about revealing too much detail on the internet

FWIW, it may be worth keeping in mind the Silicon Valley maxim that ideas are cheap, and execution is what matters. In most cases you’re far more likely to make progress on the idea if you get it out into the open, especially if execution at all depends on having collaborators or other supporters. (Also helpful to get feedback on the idea.) The probability that someone else successfully executes on an idea that you came up with is low.
I’ve heard similar things and agree completely. It’s just difficult to fight the impulse to bury away the details!