Scattered thoughts on how the rationalist movement has helped me:
On the topic of rationalist self-improvement, I would like to raise the point that simply feeling as though there’s a community of people who get me and that I can access when I want to has been hugely beneficial to my sense of happiness and belonging in the world.
That generates a lot of hedons for me, which then on occasion allow me to “afford” doing other things I wouldn’t otherwise, like spending a little more time studying mathematics or running through Anki flashcards. There’s a part of me that feels like I’m not just building up this knowledge for myself, but for the future possible good of “my people”. I might tie together stuff in a way that other people find interesting, or insightful, or at least enjoy reading about, and that’s honestly fricking awesome and blows standard delayed-gratification “self improvement” tactics outta the water, 10/10 would recommend.
Also there’s the whole thing that Ozy who is rat-almost-maybe-adjacent wrote the greatest summary of the greatest dating advice book I ever read, and I literally read that effortpost every day for like 8 months while I was learning how to be a half-decent romantic option, and holy SHIT is my life better for that. But again—nothing specific to the rationalist techniques themselves there; the value of the community was pointing me to someone who thinks and writes in a way my brain sees and says “mmm yes tasty good word soup i liek thanke” and then that person happened to write a post that played a big role in helping me with a problem that was causing me a ton of grief.
TLDR rationalists > rationalism
I have spent many years unintentionally dumbing myself down by not exercising my brain sufficiently. This place is somewhere I can come and flex a bit of mental muscle and get a bit of a dopamine reward for grasping a new concept or reading about how someone else worked their way through a problem, and I am really glad it exists. The HPMOR series was especially useful for becoming more rational, and since reading it my peers have noticed a change in the way I discuss difficult topics. I really enjoy recognising when the tools I’ve learnt here help me in my day-to-day stuff. In saying all that, I feel I’m a rare ‘right of centre’ member here, but because you are all rational it’s not such a big deal. Rational people are so much nicer to talk to, eh!
Yeah, similar here. The existence of people with values similar to mine is emotionally comforting, and they also give good advice.
This certainly seems important (I do think this is a key value the community provides). But it is importantly different from “the rationality content of the community is directly helpful for people-in-general.” If it were just “people who get you”, this wouldn’t obviously be more or differently important than other random subcultures.
Agreed on the difference. Different subcultures, I think, all push their own narratives about how they are significantly different from other subcultures; they are in competition with other subcultures for brain-space. Given that observation, my prior that rationalist content is importantly different from other subcultures’ content in that regard is low.
I suppose my real point in writing this is to advise against a sort of subcultural Fear Of Being Ordinary—rationalism doesn’t have to be qualitatively different from other subcultures to be valuable. For people under its umbrella, it can be very valuable, for reasons that have almost nothing to do with the quirks of the subculture itself.
Nod. I do agree with that.
Which post from Ozy do you mean?
https://thingofthings.wordpress.com/2018/05/25/models-a-summary/
AI development is a tragedy of the commons
Per Wikipedia:
In economic science, the tragedy of the commons is a situation in which individual users, who have open access to a resource unhampered by shared social structures or formal rules that govern access and use, act independently according to their own self-interest and, contrary to the common good of all users, cause depletion of the resource through their uncoordinated action.
The usual example of a tragedy of the commons (TotC) is a fishing pond: Everyone wants to fish as much as possible, but fish are not infinite, and if you fish them faster than they can reproduce, you end up with fewer and fewer fish per catch.
AI development seems to have a similar dynamic: Everyone has an incentive to build more and more powerful AIs, because there is a lot of money to be made in doing so. But as more and more powerful AIs get built, the likelihood of an unstoppable AGI being made increases.
There are some differences, but I think this is the underlying dynamic driving AI development today. The biggest point of difference is that, whereas one person’s overfishing eventually causes a noticeable negative effect on other fishers, and at the least does not improve their own catches, one firm building a more powerful AI probably does improve the economic situation of the other people who leverage it, up until a critical point.
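To make that contrast concrete, here is a minimal toy sketch (all numbers, thresholds, and function names are invented purely for illustration, not a model of anything real): in the classic fishery, adding boats eventually lowers everyone’s per-boat catch, while in the stylised AI case the spillover value to other actors keeps rising right up until a critical capability threshold is crossed.

```python
# Toy contrast (invented numbers): a classic commons, where extra exploitation
# eventually hurts every user, versus the AI dynamic described above, where
# capability gains help other actors until a critical threshold is crossed.

def fishery_catch_per_boat(total_boats: int) -> float:
    """Classic commons: beyond a sustainable number of boats, the stock shrinks
    and every boat's catch falls (monotonically worse for the others)."""
    sustainable = 10
    stock_factor = max(0.0, 1.0 - 0.05 * max(0, total_boats - sustainable))
    return 100.0 * stock_factor / max(total_boats, 1)

def value_to_others(ai_capability: float, critical_point: float = 10.0) -> float:
    """Stylised AI dynamic: more capable systems raise everyone else's
    productivity until a critical capability level, past which the expected
    value collapses (the unstoppable-AGI scenario)."""
    if ai_capability < critical_point:
        return 5.0 * ai_capability   # spillover benefits keep growing
    return float("-inf")             # past the threshold, catastrophe

if __name__ == "__main__":
    for boats in (5, 10, 20, 40):
        print(f"{boats:2d} boats -> catch per boat: {fishery_catch_per_boat(boats):5.1f}")
    for cap in (2.0, 8.0, 9.9, 10.0):
        print(f"capability {cap:4.1f} -> value to others: {value_to_others(cap)}")
```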
Are there other tragedies of the commons that exhibit such non-monotonic behavior?
With a little stretch, EVERY coordination problem is a tragedy of the commons. It’s only a matter of identifying the resource that is limited but has uncontrolled consumption.
In this case, it IS a stretch to think of an “evil-AGI-free world” as a resource that’s being consumed, and it doesn’t really lead to solutions—many TotC problems can be addressed by defining property rights and figuring out who has the authority/ability to exclude uses in order to protect the long-term value of the resource.
Why is it a stretch?
It’s hard to quantify the resource or define how it reduces with use or how it’s replenished. This makes it an imperfect match for the TotC analogy.
Would AGI still be an x-risk under communism?
1-bit verdict
Yes.
2-bit verdict
Absolutely, yes.
Explanation
An artificial general intelligence (AGI) is a computer program that can perform at least as well as an average human being across a wide variety of tasks. The concept is closely linked to that of a general superintelligence, which can perform better than even the best human being across a wide variety of tasks.
There are reasons to believe that most, perhaps almost all, general superintelligences would end up causing human extinction. AI safety is a cross-disciplinary field spanning mathematics, economics, computer science, and philosophy which tackles the problem of how to stop such superintelligences.
AI alignment is a subfield of AI safety which studies theoretical conditions under which superintelligences aligned with human values can emerge. Another branch, which might be called AI deterrence, aims instead to make the production of unaligned superintelligences less likely in the first place.
One of the primary reasons why someone might want to create a superintelligence, even while understanding the risks involved, is the vast economic value such a program could generate. It makes sense, then, from a deterrence lens, to ask how this profit motive might be curtailed before catastrophe. Why not communism?
Unfortunately, this is almost certainly a bad move. Communism at almost every scale has, to date, never been able to escape the rampant black markets that appear due to the distortion of price signals. There is no reason to suspect such black markets wouldn’t have just as strong a profit motive to create stronger and stronger AGIs. Indeed, because black markets are already illegal, this may worsen the problem: well-funded teams producing AGI outside the eyes of the broader public are likely to generate less pushback, and to be better equipped to avoid deterrence-oriented legislation, than a clear-market team such as OpenAI is.
1. It seems like: ‘Weaker Econ system’ → less human-made x-risk with high development cost. (Natural pandemics can occur, so whether they would be difficult to make isn’t clear.) That’s not to say that overall x-risk is lower—if a meteor hits and wipes out Earth’s entire population, then not being on other worlds is also an issue.
2. There is no reason to suspect such black markets wouldn’t have just as strong a profit motive to create stronger and stronger AGIs.
This seems surprising—it takes a while for a black market to develop to the level of ‘we’re working on AI’.
3. I’d have guessed you’d mention ‘communism’ creating AGI. (These markets keep popping up! What should we do about them? We could allocate stuff using an AI...)
Indeed, because black markets are already illegal, this may worsen the problem: well-funded teams producing AGI outside the eyes of the broader public are likely to generate less pushback, and to be better equipped to avoid deterrence-oriented legislation, than a clear-market team such as OpenAI is.
There’s deterrence-oriented legislation?
1a → Broadly agree. “Weaker” is an interesting word to pick here; I’m not sure whether an anarcho-primitivist society would be considered weaker or stronger than a communist one systemically. Maybe it depends on timescale. Of course, if this were the only sizable lever we had to move x-risk up and down, we’d be in a tough position—but I don’t think anyone takes that view seriously.
1b → Logically true, but I do see strong reason to think short-term x-risk is mostly anthropogenic. That’s why we’re all here.
2 → I do agree it would probably take a while.
3a → Depends on how coarse- or fine-grained the distribution of resources is; a simple linear-optimizer program would probably do the same job better for most coarser distribution schemes (a toy sketch of what I mean follows below).
3b → Kind of. I’m looking into them as a curiosity.
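To illustrate what a “simple linear optimizer” for a coarse allocation scheme might look like, here is a minimal hypothetical sketch using scipy.optimize.linprog; the producers, regions, costs, capacities, and quotas are all made up.

```python
# Hypothetical sketch: allocate shipments from three producers to two regions,
# meeting each region's quota at minimum total cost, as a linear program.
from scipy.optimize import linprog

# Decision variables: x[i] = units shipped on route i (3 producers x 2 regions,
# routes ordered producer-major: p0->A, p0->B, p1->A, p1->B, p2->A, p2->B).
cost = [4, 6, 5, 3, 7, 2]  # cost per unit on each route (made up)

# Each producer can ship at most its capacity (inequality constraints).
A_ub = [
    [1, 1, 0, 0, 0, 0],  # producer 0
    [0, 0, 1, 1, 0, 0],  # producer 1
    [0, 0, 0, 0, 1, 1],  # producer 2
]
b_ub = [50, 60, 40]

# Each region's quota must be met exactly (equality constraints).
A_eq = [
    [1, 0, 1, 0, 1, 0],  # region A
    [0, 1, 0, 1, 0, 1],  # region B
]
b_eq = [70, 55]

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("shipments per route:", result.x)
print("total cost:", result.fun)
```

The point is only that, for coarse enough quotas, the allocation problem reduces to a small linear program; nothing hinges on the specific numbers.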
This could mean a few different things. What did you mean by it? (Specifically “That’s why we’re all here.”.)
Metcalfe’s (revised!) law states that the value of a communications network grows at about n log n, where n is the number of users.
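For a sense of scale, here is a quick numeric comparison (illustrative only) of the revised n log n form against the original n² version of the law:

```python
import math

# Illustrative comparison: original Metcalfe's law scales network value as n^2,
# the revised form as n*log(n).
for n in (2, 5, 10, 100, 1000):
    print(f"n={n:5d}   n^2={n**2:9d}   n*log(n)={n * math.log(n):10.1f}")
```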
I frequently give my friends the advice that they should aim to become pretty good at 2 synergistic disciplines (CS and EE for me, for example), but I have wondered in the past why I don’t give them the advice to become okay at 4 or 5 synergistic disciplines instead.
It just struck me these ideas might be connected in some way, but I am having trouble figuring out exactly how.