You claim that the point of the rationalist community was to stop an unfriendly AGI. One thing that confuses me is exactly how it intends to do so, because that certainly wasn’t my impression of it. I can see the current strategy making sense if the goal is to develop some sort of Canon for AI Ethics that researchers and professionals in the field get exposed to, thus influencing their views and decreasing the probability of catastrophe. But is it really so?
If the goal is to do it by shifting public opinion on this particular issue, by making a majority of people rationalists, or by driving political change and regulation, it isn’t immediately obvious to me how. And I would bet against it, because institutions that have long pursued those strategies with success, from marketing firms to political parties to lobbying firms to Scientology, seem to operate very differently, as this post also implies.
If the goal is to do so by converting most people to rationalism (in a strict sense), I’d say I very much disagree with that being likely or maybe even a desirable effort. I’d love to discuss this subject here in more detail and have my ideas go through the grinder, but I’ve found this place to be very hard to penetrate, so I’m rarely here.
The answer is to read the sequences (I’m not being facetious). They were written with the explicit goal of producing people with EY’s rationality skills so that they could go on to work on Friendly AI (as it was called then). They provide a basis for people to realize why most approaches will, by default, lead to doom.
At the same time, it seems like a generally good thing for people to be as rational as possible, in order to avoid the myriad cognitive biases and problems that plague humanity’s thinking, and therefore its actions. My impression is that the hope was to make the world more similar to Dath Ilan.
So the idea is that if you get as many people in AI business/research as possible to read the sequences, that will change their ideas in a way that makes them work on AI more safely, and that will avoid doom?
I’m just trying to understand how exactly the mechanism that will lead to the desired change is supposed to work.
If that is the case, I would say the critique made by OP is really on point. I don’t believe the current approach is convincing many people to read the sequences, and I also think reading the sequences won’t necessarily make people change their actions when business/economic/social incentives push the other way. The latter is unavoidably a regulatory problem, and the former a communications-strategy problem.
Or are you telling me to read the sequences? I intend to at some point; I just have a bunch of stuff to read already, and I’m not exactly good at reading a lot consistently. I don’t deny that having good material on the subject is essential, though.
More that you get as many people in general to read the sequences, which will change their thinking so they make fewer mistakes, which in turn will make more people aware both of the real risks posed by superintelligence and of the plausibility and utility of AI. I wasn’t around then, so this is just my interpretation of what I read after the fact, but I get the impression that people were a lot less doomish then. There was a hope that alignment was totally solvable.
The focus didn’t seem to be on getting people into alignment so much as on it generally being better for people to think well. AI isn’t pushed as something everyone should work on (rather, it’s presented as what EY knows) but as something worth investigating. In various places it’s said that everyone could use more rationality, that it’s an instrumental goal like earning more money. There’s the idea of creating Rationality Dojos, places to learn rationality the way people learn martial arts. I believe that’s the origin of CFAR.
It’s not that the one and only goal of the rationalist community was to stop an unfriendly AGI. It’s just that stopping one is the obvious result of it. It’s a matter of taking the idea seriously, then shutting up and multiplying: assuming AI risk is a real issue, it’s pretty obvious that it’s the most pressing problem facing humanity, which means that if you can actually help, you should step up.
Business/economic/social incentives can work, no doubt about that. The issue is that they only work as long as they’re applied. Actually caring about an issue (really caring, at the level of an oppressed Christian, not a performative cultural Christian) is a lot more lasting, in that if the incentives disappear, people will keep doing what you want. Convincing is a lot harder, though, which I’m guessing is your point? I agree that convincing is less effective numerically speaking, but it seems a lot better (in a moral sense), which also seems important. Though this is admittedly more of an aesthetics thing...
I most certainly recommend reading the sequences, but I by no means meant to imply that you must. Just that stopping an unfriendly AGI (or rather, the desirability of creating a friendly AI) permeates the sequences. I don’t recall whether it’s stated explicitly, but it’s obvious that they’re pushing you in that direction. I believe Scott Alexander described the sequences as totally mind-blowing the first time he read them, but totally obvious on rereading. I don’t know which your reaction would be. You can try the highlights rather than the whole thing, which should be a lot quicker.