Thanks for the feedback. I just need a little clarification though.
You say “The less-incorrect explanation is that observation in the double slit experiment fundamentally entangles the observing system with the observed particle because information is exchanged.”
So in the analogy, the observing system would be the iPhone? And Hugo/the universe wouldn’t need to be observing the observer, and differentiating between when it’s being observed and not being observed, in order to cause the information to become entangled in the first place? Is that right?
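In case it makes my question more concrete, here is the toy picture I currently have in mind (my own sketch and my own labels, so please correct me if it misstates your explanation). The particle starts in a superposition over the two slits, the detector (the iPhone in the analogy) starts in a “ready” state, and the interaction correlates the two:

|\psi_{\rm before}\rangle = \frac{1}{\sqrt{2}}\,(|L\rangle + |R\rangle)\otimes|d_0\rangle

|\psi_{\rm after}\rangle = \frac{1}{\sqrt{2}}\,(|L\rangle|d_L\rangle + |R\rangle|d_R\rangle)

P(x) \propto |\psi_L(x)|^2 + |\psi_R(x)|^2 + 2\,{\rm Re}\big[\psi_L^*(x)\,\psi_R(x)\,\langle d_L|d_R\rangle\big]

On this reading, once the detector’s records |d_L⟩ and |d_R⟩ are distinguishable, ⟨d_L|d_R⟩ ≈ 0, the cross term vanishes, and the interference disappears, without anyone or anything needing to watch the detector itself. Is that the sense in which you mean the entanglement does the work, rather than any “observation of the observer”?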
I’ll check out the article. Thanks!
Excellent point, thanks!
Another helpful resource to digest. Many thanks!
This is very helpful feedback to think about. It appears the paper you referenced will also be extremely helpful, although it will take me some time to digest it on account of its length (74 pages w/o the bibliography).
Thanks so much. I appreciate it!
I find this analysis to be extremely useful. Obviously anything can be refined and expanded, but this is such a good foundation. Thank you.
Thank you for your thoughtful and useful comment.
Regarding “AI optimists,” I had not yet seen the paper currently on arXiv, but “AI risk skeptics” is indeed far more precise than “AI optimists.” 100 percent agreed.
Regarding alternatives to “AI pessimists” or “doomers,” Nevin Freeman’s term “AI prepper” is definitely an improvement. I have a slight preference for “strategist,” as I used above, over “prepper,” but I’m probably biased out of habit. “Risk mitigation advocate” or “risk mitigator” would also work, but they are more unwieldy than a single term.
The “Taxonomy on AI-Risk Counterarguments” post is incredible in its analysis, precision and usefulness. I think that simply having some terminology is extremely useful, not just for dialog, but for thought as well.
As we know, historically repressive regimes like the Soviet Union and North Korea have eliminated terms from the lexicon, and to real effect. (It’s hard for people to think of concepts for which they have no words.)
I think that discussing language, sharpening the precision of our language, and developing new terminology has the opposite effect, in that people can build new ideas when they work with more precise and more efficient building materials. Words definitely matter.
Thanks again.
Thanks for the feedback, but I don’t think it’s about “cognitive rewiring.” It’s more about precision of language and comprehension. You said “AI optimists think AI will go well and be helpful,” but doesn’t everyone believe that is a possibility? The bigger question is what probability you assign to the “go well and be helpful” outcome. Is there anything we can do to increase that probability? What about specific policies? You say you’re an “AI optimist,” but I still don’t know what that entails in terms of specific policies. Does that mean you support open-source AI? Do you oppose all AI regulation? What about a pause in AI development for safety? The terms “AI optimist” and “AI pessimist” don’t tell me much on their own.
One inspiration for my post is the now-infamous exchange between Yann LeCun and Yoshua Bengio.
As I’m sure you saw, Yann LeCun posted this on his Facebook page (& reposted on X):
“The heretofore silent majority of AI scientists and engineers who
- do not believe in AI extinction scenarios or
- believe we have agency in making AI powerful, reliable, and safe and
- think the best way to do so is through open source AI platforms,
NEED TO SPEAK UP !”
https://www.facebook.com/yann.lecun/posts/pfbid02We6SXvcqYkk34BETyTQwS1CFLYT7JmJ1gHg4YiFBYaW9Fppa3yMAgzfaov7zvgzWl
Yoshua Bengio replied as follows:
Let me consider your three points.
(1) It is not about ‘believing’ in specific scenarios. It is about prudence. Neither you nor anyone has given me any rational and credible argument to suggest that we would be safe with future unaligned powerful AIs and right now we do not know how to design such AIs. Furthermore, there are people like Rich Sutton who seem to want us humans to welcome our future overlords and may *give* the gift of self-preservation to future AI systems, so even if we did find a way to make safe AIs, we would still have a socio-political problem to avoid grave misuse, excessive power concentration and the emergence of entities smarter than us and with their own interests.
(2) Indeed we do have agency, but right now we invest 50 to 100x more on AI capabilities than in AI safety and governance. If we want to have a chance to solve this problem, we need major investments both from industry and governments/academia. Denying the risks is not going to help achieve that. Please realize what you are doing.
(3) Open-source is great in general and I am and have been for all my adult life a big supporter, but you have to consider other values when taking a decision. Future AI systems will definitely be more powerful and thus more dangerous in the wrong hands. Open-sourcing them would be like giving dangerous weapons to everyone. Your argument of allowing everyone to manipulate powerful AIs is like the libertarian argument that everyone should be allowed to own a machine-gun or whatever weapon they want. From memory, you disagreed with such policies. And things get worse as the power of the tool (hence of the weapons derived from it) increases. Do governments allow anyone to build nuclear bombs, manipulate dangerous pathogens, or drive passenger jets? No. These are heavily regulated by governments.
--
[I added spacing to Bengio’s post for readability.]
Media articles about this exchange, along with commenters, have described LeCun as an “AI optimist” and Bengio as an “AI pessimist.”
Just as in our own exchange, I think these terms, and even the “good vs. bad” dichotomy, radically oversimplify the situation. Meanwhile, if members of the general public were asked what they think the “AI optimist” (supposedly LeCun) or the “AI pessimist” (supposedly Bengio) believes here, I’m not sure anyone would come back with an accurate answer. Thus, the terms are ineffective.
Obviously you can describe yourself with any term you like, but with respect to others, the term “AI strategist” for Bengio—not to mention Eliezer—seems more likely to call to mind something closer to what they actually believe.
And isn’t conveyance of accurate meaning the primary goal of communication?