I’m curious what you think a sober response to AGI research is for someone whose day job is working on AI Safety, if you want to discuss that in more detail. Otherwise, thank you for your answer.
Quite welcome.
I’m not really up for surmising about this right now. It’s too tactical. I think the clarity about what to do arises as the VR goggles come off and the body-level withdrawal wears off. If I knew what it made sense for people to do after that point, we wouldn’t need their agentic nodes in the distributed computation network. We’d just be using them for more processing power. If that makes sense.
I bet I could come up with some general guesses. But that feels more like a musing conversation to have in a different context.