Potentially interesting and important topics, but I can’t quite tell what those topics are—it’s not exactly concrete enough to be advice (or its more useful sibling, personal retrospective), not really objective and specific enough to be data, and not deep enough to be about agent and utility modeling. Writing is too far toward the persuasive style to fit well in LW.
Starting to talk about values and state of the world in terms of goals, then shifting to personal comfort (retirement) is confusing. I think you’re over-abstracting “resources”.
You can’t know all the answers to these questions, but we do know this much: acquire resources.
This assumes that you believe that individual personal control of resources is the best way to reach your goals. You don’t explore other options like “increase resources to other aligned agents” or “destroy resources being used against you”. You do talk about alliances and friendships with other humans, but only in the context of increasing your personal comfort or position, not in terms of actually furthering goals.
A lot more money! We all need more money!
Money is not resources—it’s a somewhat abstract representation of motivation for other humans to provide resources in the future. I fully agree that I can use more, and that most people can, but only if not everyone has more. In order for my money to be useful, there needs to be someone who wants some of it more than I do, and will exchange actual goods or services for the money.
Potentially interesting and important topics, but I can’t quite tell what those topics are—it’s not exactly concrete enough to be advice (or its more useful sibling, personal retrospective), not really objective and specific enough to be data, and not deep enough to be about agent and utility modeling.
This is only the introductory post of a new sequence. It will become more concrete and actionable later in the sequence, promise. But maybe not all that concrete by the next post, because inferential gaps: knowledge acquired in the wrong order can be dangerous. I don’t want to explain the practical details of how to trade before explaining how to do it safely.
Writing is too far toward the persuasive style to fit well in LW.
This is something I worried about while writing this post. It is the hook post of the sequence, meant to persuade you that reading and acting on the rest is worth the bother. The rest of the sequence should feel more informative, and less persuasive, than this post.
That said, persuasion is not itself an evil. EY certainly had some in his sequences. Having been on the interwebs for a while, I very much understand the desire to resist politics as a failure mode. It can get ugly. Nevertheless, I want to see more instrumental rationality on LessWrong. Among other things, this means coordinating cooperation between people. I don’t know how to do that without using a little persuasion. May I gently suggest that we could use a little more cooperation on instrumental goals on LessWrong?
Starting to talk about values and state of the world in terms of goals, then shifting to personal comfort (retirement) is confusing.
Please don’t take the rest of this the wrong way. I welcome constructive criticism, so think of this as me accepting your words and correcting myself in the comments (per the norm of not editing the substance of a post too much after publishing it), rather than as defending my honor at the expense of rationality.
I do not mean to emphasize “comfort” (although why not, on utilitarian grounds?). The main benefit I see in “retirement” is freeing up your valuable resource of free time to do more important things with it. I feel that 40-hour work weeks are not a good deal, considering the alternatives available.
That said, I can’t hope to predict every way my words might be misinterpreted before it happens. I did notice (too late for this post) that it’s possible to share drafts with individual LessWrong users before publishing. But even had I known, I wouldn’t have known whom to ask. Since you seem willing to criticize now, would you also be willing to review drafts of future posts in this sequence before I publish them? That would let me make corrections in advance without violating the post-publishing edit norms.
I think you’re over-abstracting “resources”. [...] This assumes that you believe that individual personal control of resources is the best way to reach your goals. You don’t explore other options like “increase resources to other aligned agents” or “destroy resources being used against you”. You do talk about alliances and friendships with other humans, but only in the context of increasing your personal comfort or position, not in terms of actually furthering goals.
Oh, but I did mention that I’m doing this to “enrich the rationalists”, and I had a section on EA. If I can make a lot of rationalists individually richer by sharing my knowledge of trading, that is increasing the resources of other aligned agents. If they also donate to EA causes we can agree on, then we are again increasing the resources of other aligned agents. But this sequence is about means, not ends; this first post only points out that you have ends, and that they are important enough to take even more seriously than you currently do.
Destroying resources being used against us sounds interesting, but isn’t really the topic of this sequence. Did you have something in particular in mind?
Money is not resources—it’s a somewhat abstract representation of motivation for other humans to provide resources in the future. I fully agree that I can use more, and that most people can, but only if not everyone has more. In order for my money to be useful, there needs to be someone who wants some of it more than I do, and will exchange actual goods or services for the money.
The amount of money is not fixed for all time, and the economy is not zero-sum, although one of the two categories of trading I will be teaching is (the other is not). And by “we” I mean us in the rationalist movement. Money by itself won’t help those who can’t use it wisely, and I think we can find better uses for it than the people we’re taking it from can. Money is to some degree a positional good, but if you resist buying bragging rights, then given current levels of social inequality you’ll need a lot more money before position starts to matter.
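To make the zero-sum vs. positive-sum distinction concrete, here’s a toy sketch (my own construction for this comment, not anything from the sequence; every number in it is made up):

```python
# Toy model: zero-sum vs. positive-sum trades. Illustrative only;
# all prices and valuations below are hypothetical.

# Zero-sum case: two parties take opposite sides of a contract at the
# same price, so one side's gain is exactly the other side's loss.
entry_price = 100.0
exit_price = 93.0
long_pnl = exit_price - entry_price    # -7.0
short_pnl = entry_price - exit_price   # +7.0
assert long_pnl + short_pnl == 0.0     # money just changes hands

# Positive-sum case: an ordinary exchange where each side values what
# it receives more than what it gives up, so total surplus increases.
price = 50.0
buyer_valuation = 80.0   # buyer would have paid up to 80
seller_cost = 30.0       # seller is whole at anything above 30
buyer_surplus = buyer_valuation - price   # 30.0
seller_surplus = price - seller_cost      # 20.0
assert buyer_surplus > 0 and seller_surplus > 0  # both sides gain
```

The first trade only moves money between the two accounts; the second creates surplus, because each side walks away better off by its own valuation.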
I don’t mean to say that money is the only resource that matters. Knowledge, credit, friends, and especially time were other resources I mentioned. In our current society you really can’t avoid needing money, and if you have an excess, you can trade it fairly directly for other resources.
Using a style that’s seen as too focused on persuasion will lead a lot of rationalists to take your posts less seriously.