Well, they are the decision-relevant question. At some point timelines get short enough that it’s pointless to save for retirement. At some point timelines get short enough that it may be morally irresponsible to have children. At some point timelines get short enough that it becomes worthwhile to engage in risky activities like skydiving, motorcycling, or taking drugs (because the less time you have left, the fewer expected minutes of life you lose by risking it). Etc.
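To put rough numbers on that expected-value point (purely illustrative figures; the per-activity death probability and the remaining-lifespan values below are assumptions for the sake of the example, not anyone’s actual estimates):

\[
\mathbb{E}[\text{life lost}] \approx p_{\text{death}} \times T_{\text{remaining}}
\]

With \(p_{\text{death}} = 10^{-4}\) for a single risky activity, \(T_{\text{remaining}} = 50\) years gives an expected loss of about 1.8 days, while \(T_{\text{remaining}} = 5\) years gives roughly 4.4 hours, a tenfold reduction that tracks the ratio of remaining lifespans.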
This helped me understand why people interpret the Rationalists as just another apocalypse cult. The decision-relevant question is not, apparently, what’s going to happen as increasing amounts of cognitive labor are delegated to machines and what sort of strategies are available to make this transition go well and what role one might personally play in instantiating these strategies, but instead, simply how soon the inevitable apocalypse is going to happen and accordingly what degree of time preference we should have.
I’m confused by this response. I didn’t mention this to imply that I’m not interested in the model-building and strategic discussions—they seem clearly good and important—but I do think there’s a tendency for people to not connect their timelines with their actual daily lives, which are in fact nontrivially affected by their timelines, and I track this level of connection to see who’s putting their money where their mouth is in this particular way. E.g. I’ve been asking people for the last few years whether they’re saving for retirement for this reason.
Calling them the decision-relevant question implies that you think this sort of thing is more important than the other questions. This is very surprising—as is the broader emphasis on timelines over other strategic considerations that might help us favor some interventions over others—if you take the AI safety narrative literally. It’s a lot less surprising if you think explicit narratives are often cover stories people don’t mean literally, and notice that one of the main ways factions within a society become strategically distinct is by coordinating around distinct time preferences.
In particular, inducing high time preference in people already disposed to trusting you as a leader seems like a pretty generally applicable strategy for getting them to defer to your authority. “Emergency powers,” wartime authority and rallying behind the flag, doomsday cults, etc.
Yeah, “the” was much too strong and that is definitely not a thing I think. I don’t appreciate the indirect accusation that I’m trying to get people to defer to my authority by inducing high time preference in them.
For what it’s worth, I read Benquo as saying not “Qiaochu is trying to do this” but something more like “People who see rationalism as a cult are likely to think the cult leaders are trying to do this”. Though I can see arguments for reading it the other way.
I anticipated a third thing, which is that “at least some people who are talking about short timelines are at least somewhat unconsciously motivated to do so for this reason, and Qiaochu is embedded in a social web that may be playing a role in shaping his intuitions.”
This is what I meant.