I can understand thinking of yourself as having evil intentions, but I don’t understand believing you’re a partly-demonic entity.
I think the way that the global market and culture can respond to ideas is strange and surprising, with people you don’t know launching major undertakings based on your ideas, with lots of copying and imitation, and with whole organizations or people changing their lives around something you did without them ever knowing you. Like the way that Elon Musk met a girlfriend of his via a Roko’s Basilisk meme, or the time someone on reddit I don’t know believed that an action I’d taken was literally “the AGI” acting in their life (which was weird for me). I think that one can make straightforward mistakes in earnestly reasoning about strange things (as is argued in this Astral Codex Ten post, which IIRC claims that conspiracy theories often have surprisingly good arguments for them that a typical person would find persuasive on their own merits). So I’m not saying that really trying to act on a global scale on a difficult problem couldn’t cause you to have supernatural beliefs.
But you said it’s what would happen to a ‘typical-ish person’. If you believe a ‘typical-ish person’ trying to have an epistemology will reliably fail in ways that lead to them believing in conspiracies, then I guess yes, they may also come to have supernatural beliefs if they try to take action that has massive consequences in the world. But I think a person with just a little more perspective can be self-aware about conspiracy theories, be similarly self-aware about whatever other hypotheses they form, and try to stick to fairly grounded ones. It turns out that when you poke civilization the right way, it sometimes does really outsized and overpowered things.
I imagine it was a trip for Doug Engelbart to watch everyone in the world get a personal computer, with a computer mouse and a graphical user interface that he had invented. But I think it would have been a mistake for him to think anything supernatural was going on, even if he were trying to personally take responsibility for directing the world as best he could, and I expect most people would be able to see that (from the outside).
If you think you’re responsible for everything, that means you’re responsible for everything bad that happens. That’s a lot of very bad stuff, some of which is motivated by bad intentions. An entity who’s responsible for that much bad stuff couldn’t be like a typical person, who is responsible for a modest amount of bad stuff. It’s hard to conceptualize just how much bad stuff this hypothetical person is responsible for without supernatural metaphors; it’s far beyond what a mere genocidal dictator like Hitler or Stalin is responsible for (at least, if you aren’t attributing heroic responsibility to them). At that point, “well, I’m responsible for more bad stuff than I previously thought Hitler was responsible for” doesn’t come close to grasping the sheer magnitude, and supernatural metaphors like God or Satan come closer. The conclusion is insane and supernatural because the premise, that you are personally responsible for everything that happens, is insane and supernatural.
I’m not really sure how typical this particular response would be. But I think it’s incredibly rare to actually take heroic responsibility literally and seriously. So even if I only rarely see evidence of people thinking they’re demonic (which, while rare in absolute terms, is still surprisingly common), that doesn’t say much about the conditional likelihood of that response given taking heroic responsibility seriously.
I have a version of heroic responsibility in my head that I don’t think causes one to have false beliefs about supernatural phenomena, so I’m interested in engaging on whether the version in my head makes sense, though I don’t mean to invalidate your strongly negative personal experiences with the idea.
I think there’s a difference between causing something and taking responsibility for it. There’s a notion of “I didn’t cause this mess but I am going to clean it up.” In my team a problem often arises that we didn’t cause and weren’t expecting. A few months ago there were heavy rains in Berkeley and someone had to step up and make sure the rains didn’t cause serious water damage to our property. Further beyond the organization’s remit, one time Scott Aaronson’s computational complexity wiki was set to go down, and a team member said they’d step forward to fix it and take responsibility for keeping it up in the future. These were situations where the person who took them on didn’t cause them and hadn’t said ahead of time that they were responsible for this class of things, but took on more responsibility because they could and because it was good.
When Harry is speaking to McGonagall in that quote, I believe he’s saying “No, I’m actually taking responsibility for what happened to my friend. I’m asking myself what it would’ve looked like for me to actually take responsibility for it earlier, rather than the default state of nature where we’re all just bumbling around. Where the standard is ‘this terrible thing doesn’t happen’ as opposed to ‘well I’m deontologically in the clear and nobody blames me but the thing still happens’.”
I don’t think this gives Harry false magical beliefs that he personally caused a horrendous thing to happen to his friend (though I think that magical beliefs of that sort do have a higher prior in his universe).
I think you can “take responsibility” for civilization not going extinct in this manner, without believing you personally caused the extinction. (It will suck a bit for you, because it’s very hard and you will probably fail in your responsibilities.) I think there are reasons to give up responsibility if you’ve done a poor job, but I think failure is not deontologically bad, especially in a world where few others are going to take responsibility for it.
If I try to imagine what happened with jessicata, what I get is this: taking responsibility means that you’re trying to apply your agency to everything; you’re clamping the variable of “do I consider this event as being within the domain of things I try to optimize” to “yes”. Even if you didn’t think about X before X had already happened, it doesn’t matter; you clamped the variable to yes. If you consider X as being within the domain of things you try to optimize, then it starts to make sense to ask whether you caused X. If you add in this “no excuses” thing, you’re saying: even if supposedly there was no way you could have possibly stopped X, it’s still your responsibility. This is just another instance of the variable being clamped; just because you supposedly couldn’t do anything doesn’t make you not consider X as something that you’re applying your agency to. (This can be extremely helpful, which is why heroic responsibility has good features: it makes you broaden your search, go meta, look harder, think outside the box, etc., without excuses like “oh, but it’s impossible, there’s nothing I can do”; and it makes you look back at what, in retrospect, you could have done, so that you can pre-retrospect in the future.)
If you’re applying your agency to X “as though you could affect it”, then you’re basically thinking of X as being determined in part by your actions. Yes, other stuff makes X happen, but one of the necessary conditions for X to happen is that you don’t personally prevent it. So every X is partly causally/agentially dependent on you, and so is partly your fault. You could have done more sooner.
A few months ago there were heavy rains in Berkeley and someone had to step up and make sure the rains didn’t cause serious water damage to our property. Further beyond the organization’s remit, one time Scott Aaronson’s computational complexity wiki was set to go down, and a team member said they’d step forward to fix it and take responsibility for keeping it up in the future.
This sounds like a positive form of ‘take responsibility’ I can agree with.
However, I’m not sure about this whole discussion in regards to ‘the world’, ‘civilization’, etc.
What does ‘take responsibility’ mean for an individual across the span of the entire Earth?
For a very specific sub-sub-sub area, such as imparting some useful knowledge to a fraction of online fan-fiction readers of a specific fandom, it’s certainly possible to make a tangible, measurable difference, even without some special super-genius.
But beyond that I think it gets exponentially more difficult.
Even a modestly larger goal of imparting some useful knowledge to a majority of online fan-fiction readers would practically be a life’s effort, assuming the individual already has moderately above average talents in writing and so on.
There’s nothing special about taking responsibility for something big or small. It’s the same meaning.
Within teams I’ve worked in it has meant:
You can be confident that someone is personally optimizing to achieve the goal.
Both the shame of failing and the glory of succeeding will primarily accrue to them.
There is a single point of contact for checking in about any aspect of the problem.
For instance, if you have an issue with how a problem is being solved, there is a single person you can go to and complain.
Or if you want to make sure that something you’re doing does not obstruct this other problem from being solved, you can go to them and ask their opinion.
And more things.
I think this applies straightforwardly beyond single organizations.
Various public utilities like water and electricity have government departments that are attempting to actually take responsibility for the problem of everyone having reliable and cheap access to these products. These are the people responsible when the national grid goes out in the UK, unlike in countries with no such government department.
NASA was broadly working on space rockets, but now Elon Musk has stepped forward to make sure our civilization actually becomes multi-planetary in this century. If I were considering some course of action (e.g. taxing imports from India) but wanted to know if it could somehow prevent us from becoming multi-planetary, he is basically the top person on my list to go and ask whether it would prevent him from succeeding. (Other people and organizations are also trying to take responsibility for this problem and get nonzero credit allocation. In general it’s great if there’s a problem domain where multiple people can attempt to take responsibility for the problem being solved.)
I think there are quite a lot of people trying to take responsibility for improving the public discourse, or preventing it from deteriorating in certain ways, e.g. defending freedom of speech against particular attack vectors. I think Sam Harris thinks of part of his career as defending the freedom to openly criticize religions like Islam and Christianity, and if I were concerned that such freedoms would be lost, he’d be one of the first people I’d want to read or reach out to, to ask how to help and what the attack vectors are.
You can apply this to particular extinction threats (asteroids, pandemics, AGI, etc.) or to the overall class of such threats. (For instance, I’ve historically thought of MIRI as focused on AI and FHI as interested in the whole class.)
Extinction-level threats seem like a perfectly natural kind of problem someone could try to take responsibility for: thinking about how the entire civilization would respond to a particular attack vector, asking what they could do to prevent extinction (or similar outcomes) in that situation, and then implementing such an improvement.