I feel that human intelligence is not the gold standard of general intelligence; rather, I’ve begun thinking of it as the *minimum viable general intelligence*.

On evolutionary timescales, virtually no time has elapsed since hominids began trading, using complex symbolic thinking, making art, hunting large animals, etc., and here we are, a blip later, in high technology. The moment we reached minimum viable general intelligence, we started accelerating to dominate our environment on a global scale, despite increases in intelligence that are actually relatively meagre within that time: evolution acts over much longer timescales and can’t keep pace with our environment, which we’re modifying at an ever-increasing rate.

Moravec’s paradox suggests we are in fact highly adapted to the task of interacting with the physical world (as basically all animals are) and have some half-baked logical-thinking systems tacked onto this base.
Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.
Cf this Bostrom quote.
Re this:
On evolutionary timescales, virtually no time has elapsed since hominids began trading, using complex symbolic thinking, making art, hunting large animals, etc., and here we are, a blip later, in high technology.
A bit nit-picky, but a recent paper studying West Eurasia found significant evolution over the last 14,000 years.
I agree that “general” isn’t such a good word for humans. But unless civilization was initiated right after the minimum viable threshold was crossed, it seems somewhat unlikely to me that humans were very representative of the minimum viable threshold.
If any evolutionary process other than civilization precursors formed the feedback loop that caused human intelligence, then civilization would have hit full swing sooner if that feedback loop had continued pushing human intelligence further. Whether Earth took a century or a millennium between the harnessing of electricity and the first computer was heavily affected by economics and genetic diversity (e.g. Babbage, Lovelace, Turing), but as far as I know, a “minimum viable general intelligence” could plausibly have taken millions or even billions of years, even under ideal cultural conditions, to cross that particular gap.
Has anyone here had therapy to help handle thoughts of AI doom? How did it go? What challenges did you face explaining it or being taken seriously, and what kind of therapy worked, if any?
I went to a therapist for two sessions and received nothing but blank looks when I tried to explain what I was trying to process. I think it was very unfamiliar ground for them and they didn’t know what to do with me. I’d like to try again, but if anyone here has guidance on what worked for them, I’d be interested.
I’ve also started basic meditation, which continues to be a little helpful.
Try @Kaj_Sotala or another therapist / coach from the community. It’s easier when you don’t have to doompill your psychological support person.

Maybe try the Psychiat-list from ACX: https://psychiatlist.astralcodexten.com/

Thank you, I didn’t know this existed.
Consider that there are people with high P(doom) who don’t have any depression or anxiety. Emotions are not as much caused by our beliefs as we tend to assume. A therapist might be able to teach more productive thought patterns and behaviors, but they are unlikely to speak with competence on the object-level issue of AI doom.
Independently, I recommend trying to get a prescription for SSRIs. Most probably won’t help, but some might, and they tend not to have strong side effects in my experience, so trying them doesn’t hurt.
The only problem is that trying different SSRIs can take a very long time: usually you take one for several weeks, nothing happens, the doctor says “up the dosage”, weeks pass, still no effect, and the doctor might increase the dosage again. Only then may they switch you to a different SSRI, and the whole process begins anew. So persistence is required.
I think you might have been a little too quick to recommend SSRIs here. Whestler didn’t give any indication whether they’re having a particularly bad time vs. being particularly willing to go to a therapist for problems. Lots of subcultures in the US and Europe (e.g. progressive leftism, hippies) go hard on encouraging men, and people in general, to rely on therapists to fix problems. These cultures don’t account for the fact that therapists are mainly accustomed to doomy people worrying about climate change (there are millions of them), where the standard treatment is to persuade them to stop worrying about it (because environmentalism is intensely polarized); that approach does not work at all for AI doom, because AI harvests available matter.
Furthermore, this is a market: less competent therapists have more open slots. So if Whestler tried once and it went pretty badly (especially while expecting the therapist to be competent rather than merely to look competent), that doesn’t tell us much either. We do know, however, that Whestler tried only one therapist and then came here. That tells us a lot: before coming here, he didn’t try multiple therapists to get a better sense of how competence varies between them (though creepy vibes from one therapist might discourage someone from trying more).
We can also see that Whestler was not competent enough to seriously consider tinkering with the therapy session’s underlying dynamics (including, but not limited to, reading good books about therapy, and thus knowing ahead of time that if you aren’t careful, respectful, and deferential in the correct ways, some proportion of health experts will vividly feel that you’re challenging their authority and strategically conceal those feelings). So, along with expecting to be taken seriously by the therapist in the first place, these are our upper bounds of competence; but they were competent enough to notice that the standard interface wasn’t working and to ask elsewhere for information, which is our lower bound. Although SSRIs are great for anxiety, I don’t think this comment gave us enough information to beeline towards them, as the outcome might have been more about the knowledge they were endowed with than about their traits (especially because we don’t know what medication Whestler was already put on).
I’ll clear this up first:
particularly bad time vs. being particularly willing to go to a therapist
I’m having a particularly bad time. I chose to try therapy because it was the standard advice for depression and anxiety. I find it difficult talking to people I don’t know about my emotions and internal life, so I was expecting that to be difficult, but others had reported good experiences with therapy so I thought I shouldn’t dismiss it. Booking therapy is a big outlay of energy for me, so if I’m making an obvious mistake or there’s some basic thing that worked for someone else I thought I’d check before trying again.
less competent therapists have more open slots
This seems very plausibly what happened. I filtered out a lot of therapists whom I would have chosen over this one because they had no availability. The person I saw was very unresponsive, and I had a really hard time getting anything approaching a conversation going, much more difficult even than with a complete stranger at a party. Perhaps it was just their style of therapy? It’s hard to tell. As I say, it was only a (painful and energy-expensive) first attempt, and I want to give it another shot, but any input about types of therapy that others have found useful, or approaches they took to the process of getting therapy, would be helpful to me.
I hadn’t considered that I might need to read some self-help books and research how best to communicate with therapists ahead of time. Was I naive to think that a therapist would be above average at putting a new client at ease or steering a conversation?
Suggest the following for independent reading: https://linktr.ee/modelsofmind, especially the first one and the multiagent models of mind sequence.
Emotions are not as much caused by our beliefs as we tend to assume
I agree to an extent. I think the emotions may not be directly caused by my beliefs, but more driven by the things I spend a lot of time thinking about. There have been times when I intentionally avoided AI risk as a topic of thought, and experienced more positive emotions. During that time my beliefs hadn’t changed, but what I was actively thinking about had. This essay may also be a factor. I also read this essay recently, which is perhaps talking along similar lines to what you’re saying, and offered an interesting framing.
I’m not sure how easy it would be for me to get prescribed SSRIs, but it’s something I’d consider.
https://samuelshadrach.com/raw/text_english_html/unimportant/therapy_effectiveness_personal.html
Thanks for posting. I’ve had some of the same thoughts, especially about honesty and the therapist’s ability to support you in doing something whose significance they either don’t understand or may actively morally oppose. It’s a very difficult thing to require a person to try to do.
Good to know it helped.
I’m not sure if this is the right place to post, but where can I find details on the Petrov day event/website feature?
I don’t want to sign up to participate if (for example) I am not going to be available during the time of the event, but I get selected to play a role.
Maybe the lack of information is intentional?
I have no idea what the event will be, but Petrov Day itself is the 26th of September, and given that LW users are in many timezones my expectation is that there will be no specific time you need to be available on that day.
I feel like this should be a LW question post, and maybe a LW admin should be tagged?

I’m also wondering the same thing.