“there is an abundance of extreme strong evidence (experimental and mathematical)” means that we find it (very) plausible that somebody actually performed some kind of interaction with the universe to arrive at the belief. Contrast this with faked results, where somebody types up a scientific paper but forges or doesn’t actually do the experiments claimed. One of the main methods of “busting” such fakes is doing the experiments ourselves, i.e. replication.
There are crises where research communities spend time and effort on the assumption that a scientific paper holds true. We could say that if this “fandom” does not involve replication then their methodology is something other than science, and thus they are not scientists. However, the enthusiasm for a paper, or the time spent on it by its believers, doesn’t make it any more scientific or reliable. If the “founding paper” and its derivative concepts and systems are forgeries, it taints the whole body of activity as epistemologically low quality, even if the “derivation steps” (work done after the forged paper) were logically sound.
However, “what knowledge should you be a fan of?” is not a scientific question. Given all the hypotheses out there, there is no right or wrong choice of what to research. The point is more that if you decide to target some field, it would be proper to produce knowledge rather than nonsense. “If I knew exactly what I was doing it would not be called research, now would it?” There can be no before-the-fact guarantees of what the outcome will be.
Asking whether you should bother to figure something out totally misses the epistemology of it. Someone who sees inherent value in knowledge will accept moderate pain to attain it. But typically this is not the only relevant motivator. In the limit where it is the only motivation, there is never a question of “hey, we could do this to try to figure out X” that would be answered with “don’t do it”. Such a system of morals could, for example, ponder whether it is moral to ever have leisure, as that time could be used for more research, or how much more than full time one should work as a researcher, and how far one can push overtime without risking burnout and losing the ability to function as a researcher in the future.
A hybrid model that has interests other than knowledge can and will often say “it would be nice to know, but it’s too expensive: that knowledge is not worth the 3 lives those resources could alternatively be used to save”, or “no, we can’t do human experiments, as the information is not worth the suffering of the test subjects” (Nazis get a science boost here, as they are not slowed down by ethics boards). However, sometimes it is legitimate to enter patients into a double-blind experiment where 50% of patients will go untreated (or receive only placebo), as the knowledge attained can then be used for the benefit of other patients (the “suffering” metric comes out net positive despite having positive and negative components). So large and important knowledge gains can offset other considerations. But the reverse question, “how small a knowledge gain can be assumed to be overridden by other considerations?”, can’t really be answered without knowing what would override it. What we can safely say is that there is no knowledge small enough that it wouldn’t be valuable if obtainable freely.
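To make that trade-off concrete, here is a toy expected-utility sketch of the double-blind trial decision above. All the numbers (patient counts, utility weights, probabilities) are invented for illustration; the only point is that the knowledge term can outweigh the suffering term when enough future patients stand to benefit, and flips sign when they don’t.

```python
# Toy cost/benefit model of the double-blind trial described above.
# All numbers are invented for illustration.

def trial_net_utility(n_untreated, harm_per_untreated,
                      n_future_patients, benefit_per_patient,
                      p_treatment_works):
    """Expected net utility of running the trial.

    Cost: the untreated (placebo) arm forgoes a treatment that works
    with probability p_treatment_works.
    Benefit: if the trial tells us the treatment works, future
    patients can be treated accordingly.
    """
    expected_suffering = n_untreated * harm_per_untreated * p_treatment_works
    expected_knowledge_gain = (n_future_patients * benefit_per_patient
                               * p_treatment_works)
    return expected_knowledge_gain - expected_suffering

# 100 patients go untreated, but 100,000 future patients stand to benefit:
print(trial_net_utility(n_untreated=100, harm_per_untreated=1.0,
                        n_future_patients=100_000, benefit_per_patient=1.0,
                        p_treatment_works=0.5))   # strongly positive: run it

# Same trial, but almost nobody will ever have this disease again:
print(trial_net_utility(n_untreated=100, harm_per_untreated=1.0,
                        n_future_patients=10, benefit_per_patient=1.0,
                        p_treatment_works=0.5))   # negative: don't run it
```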
I do not object to what you are saying. You are describing the rational approach. But my point is completely different. It is outside rational justification. What I tried to show in my original post is that there is a space of experiential knowledge that a rational assessment such as the one in your comment cannot reach. You can think of it as the space of inquiry being constrained by our current scientific knowledge. To follow on my original example: if someone had suggested to a rational thinker, 250 years ago, to try lucid dreaming, they would not have done it, because their cost/benefit analysis would have indicated that it was not worth their time. Today, this would not be a problem, because the evidence for lucid dreaming is rationally acceptable. It logically follows that there is a space of experiential knowledge for which, at certain times, rationality is a disadvantage.
Hope that helps :)
So seeing many white swans makes you less prepared for black swans than someone who has seen 0 swans?
I do think that someone who seriously understands the difference between inductive and deductive reasoning won’t be completely caught out, but I get that in practice this will be so.
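One toy way to put numbers on that inductive/deductive gap is Laplace’s rule of succession (my choice of model, not something from the thread): with a uniform prior, after n white swans and no black ones, the probability that the next swan is white is (n+1)/(n+2). Inductive confidence climbs with observations but never deductively reaches certainty, so a black swan always remains possible.

```python
# Rule of succession: P(next swan is white | n white seen, 0 black) = (n+1)/(n+2).
# Inductive confidence grows with n but never reaches deductive certainty.

def p_next_white(n_white_seen):
    return (n_white_seen + 1) / (n_white_seen + 2)

for n in [0, 10, 100, 10_000]:
    print(f"after {n:>6} white swans: "
          f"P(next is white) = {p_next_white(n):.6f}, "
          f"P(next is black) = {1 - p_next_white(n):.6f}")
# Even after 10,000 white swans, P(black) is small but strictly positive.
```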
It has less to do with rationality and more to do with “stuff that I believe now”. If you believe one thing, it means you will disbelieve something else.
So seeing many white swans makes you less prepared for black swans than someone who has seen 0 swans?
Yes, but it is even more complex than that, as it is pointing to the communication of private experiences. Let’s say that you see the swans as one category, but someone tells you that there are some swans, which he calls ‘black’, that are different. You look closely but you only see white swans. Then he tells you that this is because your perception is not trained correctly. You ask for evidence and he says that he doesn’t have any, but he can tell you the steps for seeing the difference.
The steps involve a series of exercises that you have to do every day for a year. You then have to look at the swans at a certain time of day and from a certain angle, so that the light is just right. You look at the person and wonder if they are crazy. They, on the other hand, are in a position where they do not know how to convince you. You rationally decide that the evidence is just not enough to justify the time investment.
After a number of years that person comes up with a way to prove that the result is genuine and also demonstrates how the difference has consequences in the way the swans are organized. You suddenly have enough rational evidence to try! You do and you get the expected results.
The knowledge was available before the evidence crossed the threshold of rational justification.
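A small sketch of that threshold effect (my framing, with invented numbers): treat attempting the year of exercises as a bet whose expected value is p × value-if-real − cost. Before any corroboration, the prior p is too low and the expected value is negative; the later demonstration raises p past break-even, even though the underlying fact was true the whole time.

```python
# Expected value of attempting the year-long training:
#   EV = P(claim is real) * value_of_the_knowledge - cost_of_the_year
# Numbers are invented; only the sign flip matters.

def expected_value(p_claim_real, value_if_real, cost_of_trying):
    return p_claim_real * value_if_real - cost_of_trying

VALUE, COST = 100.0, 10.0

# Before any corroboration: a crank-sounding claim, tiny prior.
print(expected_value(0.01, VALUE, COST))   # -9.0 -> rationally decline

# After the public demonstration: prior jumps, same facts of the world.
print(expected_value(0.80, VALUE, COST))   # 70.0 -> rationally try it
```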
I guess with swans you can just say “go look at swans in Australia”, which gives a recipe for a public experience that would reproduce the category boundary.
It is the case with seagulls that they are mostly white, but males and females have different ultraviolet patterns. Here someone who sees ultraviolet can easily tell the classes apart, but for someone who sees only 3 colors it will be nearly impossible. Then suppose that through some kind of (convoluted) training you could make your eye see ultraviolet (people with and without all the natural optics respond slightly differently to ultraviolet light sources, so there is a theoretical chance that some kind of extreme alteration of the eyes could raise this to clearly recognisable levels).
Now, ultraviolet cameras can be produced, and those pretty much produce public experiences (i.e. the output can be read by a 3-color person too). I am wondering whether the difference between “private sensors” and constructed instruments is merely that with constructed instruments we have a theory of how they work, while with “black box sensors” we might only know how to (re)produce them but not how they actually work. However, it would seem that sentences like “this and this kind of machine will classify X into two distinct groups Y and Z” would be interesting challenges to your theory of experiment setting and would warrant research. That is, any theory that doesn’t believe “in the training” would have to make a different claim about what the classification would be: that all X would be marked Y, that the groups would not be distinct, or that the classifier would inconsistently label the same X as Y one time and Z the next. But I guess those are only of indirect interest if the direct interest is whether groups Y and Z can be established at all.
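Those competing predictions are testable even against a black box. Here is a minimal sketch (the UV readings and the black_box_classify/measure helpers are invented stand-ins for the gull example): feed the classifier repeated measurements and check (a) whether it produces two groups at all and (b) whether it labels the same kind of specimen consistently across repeats.

```python
import random

random.seed(0)

# Stand-in black box: we don't need to know *how* it works, only that
# we can feed it a reading and get a label back. Here it thresholds a
# simulated UV reading; in the story it could be an opaque trained eye.
def black_box_classify(uv_reading):
    return "Y" if uv_reading > 0.5 else "Z"

# Simulated population: two underlying kinds with different UV patterns.
def measure(specimen_kind):
    mean = 0.8 if specimen_kind == "male" else 0.2
    return random.gauss(mean, 0.05)  # noisy repeated measurement

specimens = [("male" if i % 2 else "female") for i in range(20)]

# Test 1: does the classifier produce two distinct groups at all?
labels = [black_box_classify(measure(kind)) for kind in specimens]
print("groups found:", sorted(set(labels)))  # ['Y', 'Z'], not one label

# Test 2: does it label the same specimen consistently across repeats?
consistent = all(
    len({black_box_classify(measure(kind)) for _ in range(10)}) == 1
    for kind in specimens
)
print("consistent across repeats:", consistent)
# A theory that doesn't believe "in the training" predicts one group,
# non-distinct groups, or inconsistent labels; these checks catch each.
```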
Hehe, I didn’t mean it that literally, just trying to get the idea across :)
Nevertheless, your analysis is correct for the case where alternative ways of confirmation are available. There is of course the possibility that, at the current stage of technological development, the knowledge is only accessible through experience, as in my lucid dreaming example in the original post.