I think there are a number of relevant but subtle differences in how we are thinking about this. My beliefs, after quite a lot of thinking, are:
A. Most people don’t care about the tech singularity. People are captured by AI hype cycles, though, especially people who work under the tech elite. The general public is overall much more wary of current uses of AI, and is starting to notice the harms in their daily lives (e.g. social media that is addictive and reinforces ideologies and distorted self-images, exploitative work gigs handed out by algorithms).
B. The tech singularity, as envisioned in the past, involved a lot of motivated and simplified reasoning about directing the complex world into utopias using complicated tech, utopias that cannot realistically be brought about by those methods. Tech elites like to co-opt these nerdy utopian visions for their own ends.
C. From your descriptions, I think you are essentialising humans as rational individuals who socially signal for self-benefit. I am actually saying that, yes, people are egocentric right now, particularly in the neoliberal, consumption-oriented market and self-presentation-oriented culture we are exposed to. But humans are also social creatures who can relate and interact based on deeper shared needs. So I am not essentialising people as fundamentally selfish; I am saying that, given the current social environment layered on top of our tribal and sex-and-survival-oriented psychological predispositions, people come out as particularly egocentric.
D. I don’t think baby steps are going to do it, given that we’re dealing with a potentially auto-scaling/catalysing technology that would mark the end of organic DNA-based life. The baby-steps framing reminds me of various scenes in the film “Don’t Look Up” where bystanders kept signalling to the main characters not to “overdo it”.
E. Interpretability techniques are used by tech elites to justify further capability developments. Interpretability techniques do not and cannot contribute to long-term AGI safety (https://www.lesswrong.com/posts/NeNRy8iQv4YtzpTfa/why-mechanistic-interpretability-does-not-and-cannot).
So 1 and 3 were my descriptions of what is actually happening and how it would continue, not conclusions about where it ends up. To disagree with the former, I think you would need to clarify your observations/analysis of why something opposite or different is happening.
One key point to keep in mind is that my arguments aren’t about refuting the idea of slowing down AI; rather, they’re about offering a reality check.
The reason I said baby steps is that 1. they might be enough, and 2. even if they aren’t, one common failure mode in politics is to go fully maximalist with your agenda first. That is a route to failure. It is better to advance your agenda starting from the least controversial/costly measures, and only then, if necessary, add more costly/controversial laws. Even so, this is extremely risky: a single case of bad publicity, or anything else that makes governing AI highly controversial, may well doom the effort.
Another lesson from politics is that your opposition (the AI companies) is probably rational, but has very different goals from the median LW/EA person. So we shouldn’t expect unusually easy wins in this area, and progress will likely be slow, especially in lobbying.
AI governance is still very much worth doing; high risk does not mean there aren’t high rewards, especially if you think AI alignment is possible, since governance can help alignment do its best work as well as prevent s-risks. But I do think AI governance may be overestimating what costs the public and companies are willing to bear for regulation, especially if AI companies can externalise those costs.
For example, the climate change agenda stalled until solar, wind, and batteries became cheap enough in the 2010s that moving away from fossil fuels represented a very cheap way to decarbonize. And even now there is still some opposition.
That’s clarifying. I agree that immediately trying to impose costly/controversial laws would be bad.
What I am personally thinking about first here is “actually trying to clarify the concerns and find consensus with other movements concerned about AI developments” (which by itself does not involve immediate radical law reforms).
We first need to have a basis of common understanding from which legislation can be drawn.