So the idea is that if you get as many people in AI business/research as possible to read the sequences, that will change their thinking in a way that makes them work on AI more safely, and that will avoid doom?
I’m just trying to understand how exactly the mechanism that will lead to the desired change is supposed to work.
If that is the case, I would say the critique made by OP is really on point. I don’t believe the current approach is convincing many people to read the sequences, and I also think reading the sequences won’t necessarily make people change their actions when business/economic/social incentives push the other way. The latter is unavoidably a regulatory problem, and the former a communications-strategy problem.
Or are you telling me to read the sequences? I intend to sometime; I just have a bunch of stuff to read already, and I’m not exactly good at reading a lot consistently. I don’t deny that having good material on the subject is essential, either.
More that you get as many people as possible, in general, to read the sequences, which will change their thinking so they make fewer mistakes, which in turn will make more people aware both of the real risks underlying superintelligence and of the plausibility and utility of AI. I wasn’t around then, so this is just my interpretation of what I read after the fact, but I get the impression that people were a lot less doomish then. There was a hope that alignment was totally solvable.
The focus didn’t seem to be on getting people into alignment so much as on it generally being better for people to think better. AI isn’t pushed as something everyone should work on (rather, it’s what EY knows), but it is presented as something worth investigating. There are various places where it’s said that everyone could use more rationality, that it’s an instrumental goal like earning more money. There’s an idea of creating Rationality Dojos, places to learn rationality the way people learn martial arts. I believe that’s the origin of CFAR.
It’s not that the one and only goal of the rationalist community was to stop an unfriendly AGI; it’s just that that’s the obvious result of it. It’s a matter of taking the idea seriously, then shutting up and multiplying: assuming AI risk is a real issue, it’s pretty obvious that it’s the most pressing problem facing humanity, which means that if you can actually help, you should step up.
Business/economic/social incentives can work, no doubt about that. The issue is that they only work as long as they’re applied. Actually caring about an issue (as in really caring, at the oppressed-Christian level, not the performative cultural-Christian level) is a lot more lasting, in that if the incentives disappear, people will keep doing what you want. Convincing is a lot harder, though, which I’m guessing is your point? I agree that convincing is less effective numerically speaking, but it seems a lot better (in a moral sense), which also seems important. Though this is admittedly more of an aesthetics thing...
I most certainly recommend reading the sequences, but I by no means meant to imply that you must. Just that stopping an unfriendly AGI (or rather the desirability of creating a friendly AI) permeates the sequences. I don’t recall if it’s stated explicitly, but it’s obvious that they’re pushing you in that direction. I believe Scott Alexander described the sequences as totally mind-blowing the first time he read them, but totally obvious on rereading; I don’t know which your reaction would be. You can try the highlights rather than the whole thing, which should be a lot quicker.