I think your diagnosis of the problem is right on the money, and I’m glad you wrote it.
As for your advice on what a person should do about this, it has a strong flavor of "quit doing what you're doing and go in the opposite direction." I think this is going to be good for some people but not others. Sometimes it's best to start where you are. Like, one can keep thinking about AI risk while also trying to become more aware of the distortions that are being introduced by these personal and collective fear patterns.
That’s the individual level though, and I don’t want that to deflect from the fact that there is this huge problem at the collective level. (I think rationalist discourse has a libertarian-derived tendency to focus on the former and ignore the latter.)
I also think the fact that AI safety thinking is so driven by these fear + distraction patterns is what's behind the general flail-y nature of so much AI safety work. There's a lot of, "I have to do something! This is something! Therefore, I will do this!"
I agree… and also, I want to be careful of stereotypes here.
Like, I totally saw a lot of that flailing in what folks were doing when I was immersed in this world years ago.
But I also saw a lot of faux calmness and reasonableness. That’s another face of this engine.
And I saw some glimmers of what I consider to be clear lucidity.
And I saw a bunch that I wasn't lucid enough at the time to pay proper attention to, so I don't have a clear opinion about it now. I just lack data because I wasn't paying attention to the people standing in front of me. :-o
But with that caveat: yes, I agree.
I mostly just agree.
I hesitate to ever give a rationalist the advice to keep thinking about something that's causing them to disembody while they work on embodiment. Even if there's a good way to do that, my impression is that most who would be inclined to try can't. They'll overthink it. It's like suggesting that an alcoholic not quit cold turkey but instead decide for themselves how much to wean back.
But I do think there's a balance point that, if it could be enacted, would actually be healthier for quite a few people.
I’m just not holding most folks’ hands here! So the “cold turkey” thing strikes me as better general advice for those going at it on their own with minimal support.