I think it needs clarification. It’s clearly vague enough that it’s not a valid reason by itself. However, it is reasonable to think that part of the “bad vibe” is of the same kind as the usual reasons why mixing in politics is bad, while part of it could be relevant.
For example, the worry could be that constantly mentioning a specific point works through “mere exposure”, where just being exposed to a viewpoint increases one’s belief in it without any actual argumentation for it. Zack_M_Davis could then argue that the posting doesn’t get more exposure than it would have gotten by legitimate means.
But we can’t go that far, because there is no clear picture of what the worry is, and unpacking the whole context would probably derail into the political point or otherwise be out of scope for epistemology.
For example, if some crazy scientist, like a Nazi scientist, were burning people (I am assuming that burning people is ethically very bad) to see what happens, I would probably want to make sure that the results he produces contain actual reusable information. Yet I would probably vote against burning people. If I confined myself to the epistemological sphere, I might know to advise that larger sample sizes lead to more reliable results. However, being acutely aware that the trivial way to increase the sample size would lead to significant activity I oppose (i.e., my advice burns more people), I would probably think a little harder about whether there is a lives-spent-efficient way to get reliability. Sure, refusing any cooperation ensures that I don’t cause any burned people. But it is likely that, left to their own devices, they would end up burning more people than if they were supplied with basic statistics and told how to get maximum data from each trial. On one hand, value is fragile, and small epistemology improvements might correspond to big dips in average well-being. On the other hand, taking the ethical dimension into account will seemingly “corrupt” the cold-hearted data processing: from a viewpoint indifferent to lives spent, those nudges are needless inefficiencies, “errors”. Now, I don’t know whether the worry in this case is that big, but I would in general be interested in when small linkages are likely to have big impacts. I guess from a pure epistemological viewpoint it would be “value chaoticness”, where small differences in formulation have big or unpredictable implications for values.
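The statistical claim above can be made concrete: the standard error of a sample mean shrinks as 1/√n, so halving the error costs roughly four times the sample size, which is exactly why the “trivial” way to buy reliability is so expensive in trials. A minimal sketch (the data here are purely illustrative):

```python
import math
import random

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(var / n)

random.seed(0)
# Stand-in measurements; any noisy quantity behaves the same way.
population = [random.gauss(0, 1) for _ in range(100_000)]

# Each quadrupling of the sample size roughly halves the standard error,
# so each extra unit of reliability costs steeply more trials.
for n in (25, 100, 400):
    print(n, round(standard_error(population[:n]), 3))
```

The point of the sketch is the diminishing return: the epistemically ideal “just collect more data” advice has a cost curve, which is where the lives-spent-efficient trade-off enters.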