I suggest we move the discussion to a top-level discussion thread. The comment tree here is huge and hard to navigate.
If shiminux could write an actual post on his beliefs, that might help a great deal, actually.
I think I got a cumulative total of some 100 downvotes on this thread, so somehow I don’t believe that a top-level post would be welcome. However, if TheOtherDave were to write one as a description of an interesting ontology he does not subscribe to, this would probably go over much better. I doubt he would be interested, though.
As it happens, I agree with your position. I was actually thinking of making a post that points to all the important comments here without taking a position, while asking that the discussion continue there. However, making an argumentative post is also possible, although I might not be willing to expend the effort.
Cool.
If you are motivated at some point to articulate an anti-realist account of how non-accidental correlations between inputs come to arise (in whatever format you see fit), I’d appreciate that.
As I understand it, the word “how” is used to demand a model for an event. Since I already have models for the correlations of my inputs, I don’t feel the need for further explanation. More concretely, should you ask “How does closing your eyes lead to a blackout of your vision?” I would answer “After I close my eyes, my eyelids block all of the light from getting into my eye,” and I consider this answer satisfying. Just because I don’t believe in an ontologically fundamental reality doesn’t mean I don’t believe in eyes, eyelids, and light.
OK. So, say I have two models, M1 and M2.
In M1, vision depends on light, which is blocked by eyelids. Therefore in M1, we predict that closing my eyes leads to a blackout of vision. In M2, vision depends on something else, which is not blocked by eyelids. Therefore in M2, we predict that closing my eyes does not lead to a blackout of vision.
At some later time, an event occurs in M1: specifically, I close my eyelids. At the same time, I have a blackout of vision. This increases my confidence in the predictive power of M1.
So far, so good.
At the same time, an identical event-pair occurs in M2: I close my eyes and my vision blacks out. This decreases my confidence in the predictive power of M2.
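The confidence shifts described above are just Bayesian updating over the two models. Here is a minimal sketch of that arithmetic; the specific probabilities are my own invented numbers for illustration, not anything from the discussion:

```python
# Bayesian update over two models, M1 and M2, given one observation.
# M1: vision depends on light blocked by eyelids -> predicts blackout.
# M2: vision depends on something eyelids don't block -> predicts no blackout.
# Observation: eyes closed AND vision blacked out.

def update(priors, likelihoods):
    """Return posterior P(model | observation) via Bayes' rule."""
    joint = {m: priors[m] * likelihoods[m] for m in priors}
    total = sum(joint.values())
    return {m: p / total for m, p in joint.items()}

# Start with equal confidence in both models (assumed for the example).
priors = {"M1": 0.5, "M2": 0.5}

# P(observed blackout | model) -- made-up numbers reflecting that M1
# predicts the blackout and M2 predicts its absence.
likelihoods = {"M1": 0.95, "M2": 0.05}

posterior = update(priors, likelihoods)
print(posterior)  # {'M1': 0.95, 'M2': 0.05}
```

Confidence in M1 rises and confidence in M2 falls, exactly the two updates described above; the identical event-pair is simply evidence that one model predicted and the other did not.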
If I’ve understood you correctly, both the realist and the instrumentalist account of all of the above is “there are two models, M1 and M2, the same events occur in both, and as a consequence of those events we decide M1 is more accurate than M2.”
The realist account goes on to say “the reason the same events occur in both models is because they are both fed by the same set of externally realized events, which exist outside of either model.” The instrumentalist account, IIUC, says “the reason the same events occur in both models is not worth discussing; they just do.”
Is that right?
That’s still possible, for convenience purposes, even if shiminux is unwilling to describe their beliefs (your beliefs too, apparently; I think a lot of people will have some questions to ask you now) in a top-level post.
Ooh, excellent point. I’d do it myself, but unfortunately my reason for suggesting it is that I want to understand your position better—my puny argument would be torn to shreds, I have too many holes in my understanding :(