Overall I agree with this. I give most of my money to global health organizations, but I do give some to AGI safety as well, because I think it makes sense under a variety of worldviews. I gave some of my thoughts on the subject in this comment on the Effective Altruism Forum. To summarize: if there is a continuation of consciousness after death, then AGI killing lots of people is not as bad as it would otherwise be, and there may be unknown aspects of the relationship between consciousness and the physical universe that affect the odds.
In the line that ends with “even if God would not allow complete extinction.”, my impulse is to add “(or other forms of permanent doom)” before the period, but I suspect this comes from my tendency to include excessive details/notes/etc., and it is probably best not to actually add it to that sentence.
(For example, if there were no more adult humans, only billions of babies grown in artificial wombs (staggered in time), kept in a state of chemically induced euphoria until the age of 1, and then killed, that technically wouldn’t be human extinction, but that scenario would still count as doom.)
Regarding the part about “it is secular scientific-materialists who are doing the research which is a threat to my values”: I think it is good that it discusses this! (and I hadn’t thought about including it) But I’m personally somewhat skeptical that CEV really works as a solution to this problem, at least in the simpler ways it is usually described. I imagine there being a lot of path-dependence in how a culture’s values would “progress” over time, and I see little reason why a sequence of changes of the form “opinion/values changing in response to an argument that seems to make sense” would be unlikely to produce values that the initial values would deem horrifying (or that would seem horrifying to people in an alternate possible future that just happened to take a different branch in how their values evolved).
[EDIT: at this point, I start going off on a tangent which is a fair bit less relevant to the question of improving Stampy’s response, so, you might want to skip reading it, idk]
My preferred solution is closer to: “we avoid applying large amounts of optimization pressure to most topics, instead applying it only to topics where there is near-unanimous agreement on what kinds of outcomes are better (such as ‘humanity doesn’t get wiped out by a big space rock’ or ‘it is better for people to not have terrible diseases’), while avoiding these optimizations having much effect on other areas where there is much disagreement as to what-is-good.”
Though, it does seem plausible to me, as a somewhat scary idea, that the thing I just described is perhaps not exactly coherent?
(That being said, even though I have my doubts about CEV, at least in the simpler forms in which it is described, I do think it would of course be better than doom. Also, it is quite possible that I’m just misunderstanding the idea of CEV in a way that causes my concerns, and maybe it was always meant to exclude the kinds of things I describe being concerned about?)
@drocta @Cookiecarver We started writing up an answer to this question for Stampy. If you have any suggestions to make it better I would really appreciate it. Are there important factors we are leaving out? Something that sounds off? We would be happy for any feedback you have either here or on the document itself https://docs.google.com/document/d/1tbubYvI0CJ1M8ude-tEouI4mzEI5NOVrGvFlMboRUaw/edit#