Some guilt also falls on those who are not eager enough to verify those opinions, or the money they circulate.
The man at the top (at the beginning) is NOT guilty of everything.
To my way of thinking, it’s quite possible for me to be fully responsible for a chain of events (for example, if it would not have occurred but for my action, I was aware of the likelihood of its occurring given my action, and no external forces constrained my choice so as to preclude acting differently) and for other people upstream and downstream of me also to be fully responsible for that chain of events. This is no more contradictory than my belief that object A is to the left of object B from one perspective and simultaneously to the right of object B from another. Responsibility is not some mysterious fluid out there in the world that gets portioned out to individuals; it’s an attribute that we assign to entities in a mental and/or social model.
You seem to be claiming that mental models wherein total responsibility for an event is conserved across the entire known causal chain are superior to mental models where it isn’t, but I don’t quite see why I ought to believe that.
My instinct tells me that dividing 1 responsibility per outcome among the responsible actors is doomed to reduce to “The full responsibility is equally divided across the entire state of the Universe leading up to this point, since any small difference could have led to a different outcome.” This would make it awfully similar to the argument that no human can be responsible for any crime in a deterministic universe, since they did not have control over their actions.
To me, it feels anti-Bayesian, but I lack the expertise to verify this.
I don’t endorse the model of “1 responsibility per outcome” that can be divided.
Neither do I endorse the idea that responsibility is incompatible with a deterministic universe.
Also, I have no idea what you mean by “anti-Bayesian” here.
It took me a while, but his post made much more sense to me once I realized he was agreeing with you.
Oh!
Huh.
Yeah, I see what you mean.
Heh, sorry, kind of skipped the preamble there.
Yes, the post was in agreement with you, and attempting to visualize / illustrate / imagine a potential way the model could be shown to be flawed.
As for feeling “anti-Bayesian”: the idea that a set amount of responsibility exists to be distributed over actors for any event seems completely uncorrelated with reality and independent of any evidence. It feels like an arbitrary system of categorization, like using “golborf” as a new term for “LessWrong users that own a house, don’t brush their teeth daily, drink milk daily, enjoy classical music and don’t work in IT-related fields”.
It’s that little feeling somewhere that says, “This thing doesn’t belong here in my model”, that there are freeloading nodes that need to be purged.