I think I’m going to stake out a general disagreement position with this post, mainly because: 1) I mostly disagree with it (I am not simply being a devil’s advocate) and 2) I haven’t seen any rebuttals to it yet. Sorry if this response is too long, and I hope my tone does not sound confrontational.
When I first read Eliezer’s post, it made a lot of sense to me and seemed to match points he has emphasized many times in the past. I would summarize those points as: there have been many situations throughout history where policy makers, academics, or other authorities made fairly strong statements about the future and the actions we should collectively take without using any reliable models. This has had pretty disastrous consequences.
Regarding the example of Eliezer supposedly asking an unfair question, the context I took from his post was that this occurred during a very important summit on AI safety and policy among academics and other luminaries. This was supposed to be a conference where these influential people were actually trying to decide on specific courses of action, not merely a media-oriented press extravaganza or some kind of informal social gathering between important people in a private setting. I don’t remember if he actually states which conference it was, but I’m guessing it was probably the Asilomar conference that occurred earlier this year.
If it were an informal social gathering, I would agree that it would be somewhat unfair to ask random people tough questions like this and expect queued-up answers, but as it stands, I’m fairly certain this was an important meeting that could influence the course of events for many years to come. AI safety is only just starting to be accepted in the mainstream, so whatever occurs at these events has to nudge it in the right direction.
So we essentially have a few important reasons why it’s okay to be blunt here and ask tough questions of a panel of AI experts. Eliezer stood up and asked a question he probably expected not to receive a great answer to right away, and he did it in front of a bunch of luminaries who may have been embarrassed by this. So Eliezer broke a social norm, because this could have been interpreted as disrespectful to these people, and he probably lowered his own status in the process. This is a risky move.
But in doing this, he forced them to display a weakness in their understanding of this specific subject. He mentioned that most of them accepted with high confidence that AGI was “really far away” (which I suppose means long enough from now that we don’t have to worry that much). So they must believe they have some model, but under more scrutiny it appears that they really don’t.
You say it’s unfair of Eliezer to expect them to have a good model, and to have a good answer queued up, but I also think it’s unfair to claim AGI is very far away without having any models to back that up. What they say on that stage probably matters a lot.
It’s technically true that the question he asked was unfair, because I am pretty sure he expected not to receive a good answer, and that was why he asked it. So perhaps it was not asked purely in the spirit of intellectual discourse; it had rhetorical motivations as well. We can call that unfair if we must.
But I am also fairly certain that it was an important move to make from a consequentialist standpoint. It might have been disrespectful to the panelists, but it could have made them stop to think about it, or perhaps made others see that our understanding isn’t quite good enough to make claims about what definitely should or should not be done about AI safety.
I think he was totally considering social cues and incentive gradients when he did this, and it was precisely because of them that he did it. Influential people in the spotlight and under public scrutiny are under more pressure from these things. Therefore, in order to give a “nudge”, if you’re someone who also happens to have a bit of influence, you might have to call them out in public a bit. It has a negative cost to you, but in the long run it might pay off.
I think it’s still a reasonable question whether this actually will pay off (will they consider their models of future AI development more carefully?), but I think his reasoning for doing this was pretty solid. I don’t get the impression that he’s demanding that everyone have a solid model that makes predictions with hard numbers they can query on demand, nor that he’s suggesting we impose negative social consequences on everyone who doesn’t have one.
Yes, you can always take into account everyone’s circumstances and incentives, but if those are generally pointing in a wrong enough direction for people who have real influence, I think it’s okay to do something about it.