Thanks for the feedback, but I don’t think it’s about “cognitive rewiring.” It’s more about precision of language and comprehension. You said “AI optimists think AI will go well and be helpful,” but doesn’t everyone believe that is a possibility? The bigger question is what probability you assign to the “go well and be helpful” outcome. Is there anything we can do to increase that probability? What about specific policies? You say you’re an “AI optimist,” but I still don’t know what that entails with respect to specific policies. Does it mean you support open source AI? Do you oppose all AI regulations? What about a pause in AI development for safety? The terms “AI optimist” and “AI pessimist” don’t tell me much on their own.
One inspiration for my post is the now-infamous exchange between Yann LeCun and Yoshua Bengio.
As I’m sure you saw, Yann LeCun posted this on his Facebook page (and reposted it on X):
“The heretofore silent majority of AI scientists and engineers who
- do not believe in AI extinction scenarios or
- believe we have agency in making AI powerful, reliable, and safe and
- think the best way to do so is through open source AI platforms,
NEED TO SPEAK UP !”
https://www.facebook.com/yann.lecun/posts/pfbid02We6SXvcqYkk34BETyTQwS1CFLYT7JmJ1gHg4YiFBYaW9Fppa3yMAgzfaov7zvgzWl
Yoshua Bengio replied as follows:
Let me consider your three points.
(1) It is not about ‘believing’ in specific scenarios. It is about prudence. Neither you nor anyone has given me any rational and credible argument to suggest that we would be safe with future unaligned powerful AIs and right now we do not know how to design such AIs. Furthermore, there are people like Rich Sutton who seem to want us humans to welcome our future overlords and may *give* the gift of self-preservation to future AI systems, so even if we did find a way to make safe AIs, we would still have a socio-political problem to avoid grave misuse, excessive power concentration and the emergence of entities smarter than us and with their own interests.
(2) Indeed we do have agency, but right now we invest 50 to 100x more on AI capabilities than in AI safety and governance. If we want to have a chance to solve this problem, we need major investments both from industry and governments/academia. Denying the risks is not going to help achieve that. Please realize what you are doing.
(3) Open-source is great in general and I am and have been for all my adult life a big supporter, but you have to consider other values when taking a decision. Future AI systems will definitely be more powerful and thus more dangerous in the wrong hands. Open-sourcing them would be like giving dangerous weapons to everyone. Your argument of allowing everyone to manipulate powerful AIs is like the libertarian argument that everyone should be allowed to own a machine-gun or whatever weapon they want. From memory, you disagreed with such policies. And things get worse as the power of the tool (hence of the weapons derived from it) increases. Do governments allow anyone to build nuclear bombs, manipulate dangerous pathogens, or drive passenger jets? No. These are heavily regulated by governments.
--
[I added spacing to Bengio’s post for readability.]
Media articles about this exchange, along with commenters, have described LeCun as an “AI optimist” and Bengio as an “AI pessimist.”
As in our own exchange, I think these terms, and even the underlying “good vs. bad” dichotomy, radically oversimplify the situation. Meanwhile, if members of the general public were asked what they think the “AI optimist” (supposedly LeCun) or the “AI pessimist” (supposedly Bengio) believes here, I doubt many would come back with an accurate answer. Thus, the terms are ineffective.
Obviously you can describe yourself with any term you like, but when it comes to others, a term like “AI strategist” for Bengio (not to mention Eliezer) seems more likely to call to mind something closer to what they actually believe.
And isn’t the accurate conveyance of meaning the primary goal of communication?