To the extent that rationality has a purpose, I would argue that it is to do what it takes to achieve our goals; if that includes creating “propaganda”, so be it. And the rules explicitly ask that submissions not be deceptive, so if we use them to convince people it will be a pure epistemic gain.
Edit: If you are going to downvote this, at least argue why. I think that if this works as they expect, it truly is a net positive.
If you are going to downvote this, at least argue why.
Fair. Should’ve started with that.
To the extent that rationality has a purpose, I would argue that it is to do what it takes to achieve our goals,
I think there’s a difference between “rationality is systematized winning” and “rationality is doing whatever it takes to achieve our goals”. That difference requires more time to explain than I have right now.
if that includes creating “propaganda”, so be it.
I think that if this works as they expect, it truly is a net positive.
I think that the whole AI alignment thing requires extraordinary measures, and I’m not sure what specifically those would be; I’m not saying we shouldn’t run the contest. I doubt you and I have a substantial disagreement about the severity of the problem or the effectiveness of the contest. My above comment was more “argument from ‘everyone does this’ doesn’t work”, not “this contest is bad and you are bad”.
Also, I wouldn’t call this contest propaganda. At the same time, if this contest were “convince EAs and LW users to have shorter timelines and higher chances of doom”, people would react to it differently. There is a difference: convincing someone to adopt a shorter timeline isn’t the same as explaining the whole AI alignment thing in the first place. But I worry that we could take that too far. I think that (most of) the responses John’s comment got were good, and they reassure me that the OPs are actually aware of, and worried about, John’s concerns. I see no reason why this particular contest will be harmful, but I can imagine a future where pivoting mainly to strategies like this has some harmful second-order effects (which would need their own post to explain).