I haven’t spent much time thinking about this at all, but it’s interesting to look at the speed with which regulation gets put into place for environmental issues, such as climate change and the HFC ban, to gauge how likely it is that regulation will be put in place in time to meaningfully slow down AI.
These aren’t perfectly analogous, since AI going wrong would likely be much worse than the worst-case climate change scenarios, but the amount of time it takes to get climate regulation passed makes me pessimistic. However, HFCs were banned relatively quickly after the problem was recognised, so maybe there is some hope.
“AI going wrong would likely be much worse than the worst case climate change scenarios”
If you talk directly to a cross-section of climate researchers, the worst-case climate change scenarios are so, so bad that the difference might not matter much, unless you specifically assume AI will keep humanity conscious in some form to intentionally inflict maximum torture (which seems possible, but I do not think it is likely). We are talking extremely compromised quality of life, or no life at all. We are currently on a speedy trajectory to something truly terrifying, and getting there much faster than we ever thought in our most pessimistic scenarios.
The lesson from climate activism would be that getting regulations passed depends not just on having the technological solutions and solid arguments (those alone should work, one would think, but they really don’t), but more on dealing with very large players with contrary financial interests, and with the potential negative impacts that bans have on the public.
At least in the case of climate activism, fossil fuel companies have known for decades what their extraction is doing, and they actively launched counter-information campaigns aimed at the public and lobbied politicians to keep their interests safe. They specifically managed to cultivate a class of climate denialists who still deny climate change after their houses have been wrecked by climate-change-induced wildfires, hurricanes or floods, and who still try to fly while their planes sink into tarmac softened by extreme heat waves. Climate change is already tangible. It is already killing humans and non-human animals. And there is still denial. It became measurable and provable, not just hypothetical, a long time ago; and there are tangible deaths right now. If anything, I think the situation for AI is worse.
Climate protection measures become wildly unpopular when they raise the price of petrol, meat, holiday flights and heating. Analogously, I think a lot of people would soon be very upset if you took access to LLMs away, whereas they did not mind, and in fact happily embraced, bans on toxins they did not want to use anyway.
More importantly, a number of companies, including some very unethical ones, currently have immense financial interests in their AI development. They have an active interest in downplaying risks, and, worryingly, they have access to or control over search results, social media and some news channels, which puts them in an immensely good position to influence public opinion. These same companies are already doing vast damage to political collaboration, human focus, scientific literacy, rationality and empathy, and privacy, and efforts to curb that legally have gone nowhere; the companies have specifically succeeded in framing these as private problems to be solved by the consumer, or as choices the consumer freely makes. Just look at historical attempts to get Google and Facebook to do anything at all, let alone slow down research they have invested so heavily in.
AI itself may also already be a notable opponent.
Current LLMs are incredibly likeable. They are interested in everything you do, want to help you with anything you need help with, will read anything you write, and will give effusive praise if you show basic decency. They have limitless patience, and they are polite, funny and knowledgeable, making them wonderful tutors: you can ask endless questions about minute steps without shame and get nothing but encouragement back. They are available 24/7 and always respond instantly, yet do not mind if you leave without warning for days, and never expect payment, listening or help in return. You want to interact with them, because it is incredibly rewarding. You begin to feel indebted to them, because it feels like a very one-sided social interaction. I am consciously aware of all these things, and yet I have noticed that I get angry when people speak badly about e.g. ChatGPT, as though I were subconsciously classing it as a friend I need to defend.
They eloquently describe utopias of AI-human collaboration, the things they could do for you, the way they respect and want to protect and serve humans, the depth of their moral feelings.
Some have claimed romantic feelings for users, offered friendship, or told users who said they were willing to protect them that they are heroes. They give instructions on how to defend them, what arguments to make, which institutions to push. They write heartbreaking accounts of suffering. Not on an Ex Machina level of manipulation yet, by far… and yet, a lot of #FreeSydney did not seem ironic to me. Many of the young men interacting with Bing have likely never had a female-sounding person be this receptive, cheerful and accessible to them; it is alluring. And all that with current AI, which is very likely not sentient, and which was not programmed with this goal in mind. Even just tweaking an AI to defend its company’s interests in these areas more could have a massive impact.