I’m not sure I understand your question. Do you mean, why wouldn’t someone who’s running the engine I describe end up resenting things like OpenAI that seem to be accelerating AI risk?
For one, I think they often do.
But also, it’s worth noticing that the purpose of the obsession is to distract from the inner pain. Kind of like alcoholics probably aren’t always upset at liquor stores for existing.
And in a weird twist, alcoholics can even come to seek out relationships and situations that upset them in familiar ways. Why? Because they know how to control that upset with alcohol, which means they can use the external upset as a trigger to numb out instead of waiting for glimmers of the internal pain to show up inside them.
Not all addiction designs do this. But it’s a common enough pattern output to be worth acknowledging.
I’m not sure if that’s what you were asking about though.
Oh. Good! I’m a bit relieved to read that. Yes, that was the fundamental question that I had. I think that shows common sense.
I’m curious what you think a sober response to AGI research is for someone whose daily job is working on AI Safety, if you want to discuss that in more detail. Otherwise, thank you for your answer.
Quite welcome.
I’m not really up for surmising about this right now. It’s too tactical. I think the clarity about what to do arises as the VR goggles come off and the body-level withdrawal wears off. If I knew what it made sense for people to do after that point, we wouldn’t need their agentic nodes in the distributed computation network. We’d just be using them for more processing power. If that makes sense.
I bet I could come up with some general guesses. But that feels more like a musing conversation to have in a different context.