Maybe? But if they are doing AI research, it shouldn’t be too hard for them to either (a) stop doing AI research, and thereby stop contributing to the problem, or (b) pivot their AI research more towards safety rather than capabilities, or at least (c) help raise awareness about the problem so that it becomes common knowledge, so that everyone can stop altogether and/or suitable regulation can be designed.
Edit: Oops, my comment shouldn’t be a direct reply here (though it fits into this general comment section, which is why I’m not deleting it). I didn’t read the parent comment that Daniel was replying to above, and assumed he was replying in a totally different context (Musk not necessarily acting rationally on his non-missing mood, as opposed to Daniel talking about AI researchers and their missing mood).
--
Yeah. I watched a Q&A on YouTube after a talk by Sam Altman, roughly a year or two ago, where Altman alluded to Musk having wanted some of OpenAI’s top AI scientists because Tesla needed them. It’s possible that the reason he left OpenAI was simply related to that, not to anything about strategically thinking about AI futures, missing moods, etc.
More generally, I feel like a lot of people think that if you run a successful company, you must be brilliant and dedicated in every possible way. No, that’s not how it works. You can be a genius at founding and running companies and making lots of money without necessarily being good at careful reasoning about paths to impact other than “making money.” Probably these skills even come apart at the tails.
Agreed that it shouldn’t be hard to do that, but I expect that people will often continue to do what they find intrinsically motivating, or what they’re good at, even if it’s not overall a good idea. If this article can be believed, a senior researcher said that they work on capabilities because “the prospect of discovery is too sweet”.