I read the post Flinching away from truth is often about protecting the epistemology. It reminded me of familiar psychological biases such as catastrophic thinking, in which individual events are seen as having farther-reaching, more harmful consequences than they in fact usually do. These two models (the bucket error and catastrophic thinking) strike me as qualitatively different approaches to the same phenomenon. The kid in the story is engaging in catastrophic thinking when they equate the writing mistake with not being allowed to be, or even try to be, an aspiring writer. Catastrophic thinking could be “deconstructed”, or modeled, as essentially a bucket error. I believe that if we dug deep enough into the psychological literature on cognitive biases such as catastrophic thinking, we would eventually come across a model similar to the bucket-error model (whether or not it is as graphic or uses the exact same wording).
My main question is: in a general sense, how many different models, or other pieces of information, about the same thing are beneficial? (For simplicity, I’ll talk only about models from here on.)
Points that come to mind:
It seems like a waste of effort to develop new models if the work has already been done.
If people adopt different models that are, in essence, approaches to the same thing, communication between them can prove more difficult than it should be, slowing down overall knowledge formation (which is probably in nobody’s interest). For example, people might not even notice they’re talking about the same thing to begin with.
Another obstacle to synthesizing models is that discovering an already existing model takes expertise; an existing model is less likely to be found if its existence doesn’t even strike one as a possibility.
Are there contexts where introducing a new model is acceptable even if a similar model already exists? For example, would it be healthy for a new discipline to “try out its wings” more freely, without the baggage of having to fit wholly within other disciplines’ pre-existing knowledge?
A problematic situation that comes to mind: if it’s assumed that new disciplines should have the freedom to “try out their wings”, and in doing so they don’t give credit to pre-existing models, this can be frustrating for the people who developed those models. In many cases, such reinventions of the wheel can’t be filed under plagiarism, given the level of expertise it would take to know the wheel already exists. What, then, would be the ethical position to take in this situation?
What I’m asking could perhaps be simplified to two questions (acknowledging that the answers may vary by context): 1. how much searching of already existing work should be required before one introduces a new model, and 2. what should be done when two “different but essentially the same” models have both been introduced?