You didn’t refute his argument at all; you just said that other movements do the same thing. Isn’t the entire point of rationality that we’re meant to be truth-focused, and winning-focused, in ways that don’t manipulate others? Are we not meant to hold ourselves to the standard of “Aim to explain, not persuade”? Just because others in the reference class of “movements” do something doesn’t mean it’s immediately something we should replicate! Is that not the obvious, immediate response? Your comment proves too much; it could be used to argue for literally any popular behavior of movements, including canceling/exiling dissidents.
Do I think that this specific contest is non-trivially harmful at the margin? Probably not. I am, however, worried about the general attitude behind some of this type of recruitment, and the justifications used to defend it. I become really fucking worried when someone raises an entirely valid objection, and is met with “It’s only natural; most other movements do this”.
To the extent that rationality has a purpose, I would argue that it is to do what it takes to achieve our goals. If that includes creating “propaganda”, so be it. And the rules explicitly ask for submissions not to be deceptive, so if we use them to convince people, it will be a pure epistemic gain.
Edit: If you are going to downvote this, at least argue why. I think that if this works like they expect, it truly is a net positive.
If you are going to downvote this, at least argue why.
Fair. Should’ve started with that.
To the extent that rationality has a purpose, I would argue that it is to do what it takes to achieve our goals.
I think there’s a difference between “rationality is systematized winning” and “rationality is doing whatever it takes to achieve our goals”. That difference requires more time to explain than I have right now.
If that includes creating “propaganda”, so be it.
I think that if this works like they expect, it truly is a net positive.
I think that the whole AI alignment problem requires extraordinary measures, and I’m not sure what specifically those would entail; I’m not saying we shouldn’t run the contest. I doubt you and I have a substantial disagreement as to the severity of the problem or the effectiveness of the contest. My above comment was more “argument from ‘everyone does this’ doesn’t work”, not “this contest is bad and you are bad”.
Also, I wouldn’t call this contest propaganda. At the same time, if this contest were “convince EAs and LW users to have shorter timelines and higher chances of doom”, it would be received differently. There is a difference: convincing someone to adopt a shorter timeline isn’t the same as explaining the AI alignment problem in the first place. But I worry that we could take that too far. I think that (most of) the responses John’s comment got were good, and they reassure me that the OPs are actually aware of, and worried about, John’s concerns. I see no reason why this particular contest will be harmful, but I can imagine a future where pivoting mainly to strategies like this has some harmful second-order effects (which would need their own post to explain).