Sure, this is an argument ‘for AGI’, but rarely do people (on this forum at least) reject the deployment of AGI because they feel discomfort at not fully comprehending the trajectory of their decisions. I’m sure this is something most of us ponder and would acknowledge is not optimal, but if you asked the average LW user to list their reasons for opposing the deployment of AGI, I think this one would be quite low on the list.
Reasons higher on my list, for example, would be “literally everyone might die.” In light of that, dismissing loss of control as a worry addresses what is, by comparison, quite a minuscule concern. People generally fear loss of control because losing control of something more intelligent than you, with instrumental subgoals that, if pursued, would probably result in a bad outcome for you, is itself a bad outcome. But none of this changes the fact that “we shouldn’t fear not being in control, for the above reasons” does not constitute sufficient reason to deploy AGI.
Also, although some of the analogies drawn here do have merit, I can’t help but gesture toward the giant mass of tentacles and eyes you are applying them to. To make this more visceral: picture a literal Shoggoth descending from a plane of Eldritch horror and claiming decision-making supremacy and human-aligned goals. Do you accept its rule because of its superior decision making and its claimed human-aligned goals, or do you seek an alternative arrangement?
I agree completely. I am not trying to feed accelerationism or downplay risks; I am trying to make a few important arguments from the perspective of a third-party observer. I wanted to introduce the ‘divine move paradox’ alongside the argument that our minds carry evolutionarily ingrained flaws. I am trying to frame the situation in a slightly different light, far enough outside the general flow to be interesting, but not so far that it fails to tie in. I am certainly not trying to say we should just turn over control to the first thing that manipulates us effectively.
I think my original title was poorly chosen, given that this post is meant to bring forward ideas. I have edited it to remove ‘The Case for AGI’.