Open tolerance of the people involved with the status quo, and fear of alienating or making enemies of powerful groups, are a core part of current EA culture! Steve’s top comment on this post is an example of enforcing/reiterating this norm.
It’s an unwritten rule that seems very strongly enforced yet never really explicitly acknowledged, much less discussed. People were shadow blacklisted by CEA from the Covid documentary they funded for being too disrespectful in their speech re: how governments have handled Covid. That fits what I’d consider a taboo: something any socially savvy person would pick up on and internalize if they were around it.
Maybe this norm of open tolerance is downstream of the implications of truly considering some people to be your adversaries (which you might, if you thought delaying AI development by even an hour was a considerable moral victory, as the OP seems to). Doing so does expose you to danger. I would point out that lc’s post analogizes their relationship with AI researchers to Israel’s relationship with Iran, and when I think of Israel’s resistance to Iran, nonviolence is not the first thing that comes to mind.
People were shadow blacklisted by CEA from the Covid documentary they funded for being too disrespectful in their speech re: how governments have handled Covid.
I agree. I also think this is a topic that needs to be seriously considered and discussed, because not doing so may leave behind a hidden hindrance to accurate collective assessment of, and planning for, AI risks. Contrary to our conceits and aspirations, our judgements aren’t at all immune to the sway of biases, flawed assumptions, and human emotions. I’m not sure how to put this, but people on this forum don’t come off as very worldly, if that makes sense. A lot of people are in technical professions where understanding of political realities seems to be lacking. The US and China stand to be the two major drivers of AI development over the coming decades. Increasingly they don’t see eye to eye, and an arms-race dynamic might develop. So I feel there’s been a lot of focus on the technical/theoretical side of things, but not enough concern over the practical side of development, the geopolitical implications, and all that might entail.