My best guess is that this “missing mood” effect is a defensive reaction to a lack of plausible actionable steps: upon first being convinced, people get upset/worried/sad, but they fail to find anything useful to do about the problem, so they move on and build up some psychological defenses against it.
Maybe? But if they are doing AI research, it shouldn’t be too hard for them to either (a) stop doing AI research, and thereby stop contributing to the problem, or (b) pivot their AI research more towards safety rather than capabilities, or at least (c) help raise awareness about the problem so that it becomes common knowledge, and everyone can stop altogether and/or suitable regulation can be designed.
Edit: Oops, my comment shouldn’t be a direct reply here (but it fits into this general comment section, which is why I’m not deleting it). I didn’t read the parent comment that Daniel was replying to above, and assumed he was replying in a totally different context (Musk not necessarily acting rationally on his non-missing mood, as opposed to Daniel talking about AI researchers and their missing mood).
--
Yeah. I watched a Q&A on YouTube after a talk by Sam Altman, roughly a year or two ago, where Altman implied that Musk had wanted some of OpenAI’s top AI scientists because Tesla needed them. It’s possible that the reason he left OpenAI was simply related to that, not to anything about strategically thinking about AI futures, missing moods, etc.
More generally, I feel like a lot of people think that if you run a successful company, you must be brilliant and dedicated in every possible way. No, that’s not how it works. You can be a genius at founding and running companies and making lots of money without necessarily being good at careful reasoning about paths to impact other than “making money.” Probably these skills even come apart at the tails.
Agreed that it shouldn’t be hard to do that, but I expect that people will often continue to do what they find intrinsically motivating, or what they’re good at, even if it’s not overall a good idea. If this article can be believed, a senior researcher said that they work on capabilities because “the prospect of discovery is too sweet”.
An interesting missing mood I’ve observed in discussions of AI safety: When a new idea for achieving safe AI is proposed, you might expect that people concerned with AI risk would show a glimmer of eager curiosity. Perhaps the AI safety problem is actually solvable!
But I’ve pretty much never observed this. A more common reaction seems to be a sort of uneasy defensiveness, sometimes in combination with changing the subject.
Another response I occasionally see is someone mentioning a potential problem in a tone that sounds almost like a rebuke of the person who shared the new idea.
I eventually came to the conclusion that there is some level on which many people in the AI safety community actually don’t want to see the problem of AI safety solved, because too much of their self-concept is wrapped up in AI safety being a super difficult problem. I highly doubt this occurs on a conscious level; it’s probably due to the same sort of subconscious psychological defenses you describe, e.g. embarrassment at not having seen the solution oneself.