I’m a little surprised that doomerism could take off like this, dominate one’s thoughts, and yet fail to create resentment and anger toward its apparent source. Is that something that was absent for you, or was it just not relevant to discuss here?
I wonder:
in the prediction of doom, as the threat seems to grow closer, does that create resentment or anger at the apparent sources of that doom? If I dwelled on AI existential risk, I would feel more resentment toward the sources of that risk.
do the responses to that doom, or the desperation of the measures, become wilder the more one thinks about it? Just a passing thought about AI doom immediately brings to mind, “Let’s stop these companies and agencies from making such dangerous technology!” In other words, let’s give up on AI safety and try the other approach.
is there still appeal in a future of AGI? I can see some of the excitement or tension around the topic coming from the ambiguity of the path toward AGI and its consequences. I’ve seen the hype around AGI claim that it will save humanity from itself, advance science radically, turbo-charge economic growth, etc. Is that vision, alternating with a vision of horrible suffering and doom, a cause of cognitive dissonance? I would think so.
Factors that might be protecting me from this include:
I take a wait-and-see approach to AGI, and favor the use of older, simpler technologies like expert systems, or even simpler cognitive aids relying on simple knowledge bases. In the area of robotics, I favor simpler, task-specific robots (such as manufacturing robot arms) without, for example, self-learning abilities or radically smart language recognition or production. It helps me to have something specific to advocate for, and to think about, as an alternative, rather than thinking that it’s AGI or nothing.
I assume that AGI development is, overall, a negative outcome: simply more risk to people (including the AGIs themselves, sure to be exploited if they are created). I don’t accept that AGI development offers necessary opportunities for human technological advancement. In that way, I am resigned to AGI development as a mistake others make. My hopes are not in any way invested in AGI. That saves me some cognitive dissonance.
Thank you for sharing this piece, I found it thought-provoking.
I’m not sure I understand your question. Do you mean, why wouldn’t someone who’s running the engine I describe end up resenting things like OpenAI that seem to be accelerating AI risk?
For one, I think they often do.
But also, it’s worth noticing that the purpose of the obsession is to distract from the inner pain. Kind of like alcoholics probably aren’t always upset at liquor stores for existing.
And in a weird twist, alcoholics can even come to seek out relationships and situations that upset them in familiar ways. Why? Because they know how to control that upset with alcohol, which means they can use the external upset as a trigger to numb out instead of waiting for glimmers of the internal pain to show up inside them.
Not all addiction designs do this. But it’s a common enough pattern to be worth acknowledging.
I’m not sure if that’s what you were asking about though.
Oh. Good! I’m a bit relieved to read that. Yes, that was the fundamental question that I had. I think that shows common-sense.
I’m curious what you think a sober response to AGI research is for someone whose day job is working on AI Safety, if you want to discuss that in more detail. Otherwise, thank you for your answer.
Quite welcome.
I’m not really up for surmising about this right now. It’s too tactical. I think the clarity about what to do arises as the VR goggles come off and the body-level withdrawal wears off. If I knew what it made sense for people to do after that point, we wouldn’t need their agentic nodes in the distributed computation network. We’d just be using them for more processing power. If that makes sense.
I bet I could come up with some general guesses. But that feels more like a musing conversation to have in a different context.