Wikipedia is a trusted brand for introducing new topics and has great placement with search engines. There are three potential headaches though.
(1) The Neutral Point of View (NPOV) rules mean, in theory, that one side of an argument can’t dictate how a topic is treated, so even without any concerted effort, weasel words and forced “balance” may creep in here and there. I’d put this at a 93% chance of happening. The impact on bias should be low, providing the odd headache but potentially even improving the article. I’d also estimate about a 30% chance of parts of the article becoming unreadable to a newcomer, and a 15% chance of the lead becoming unreadable.
(2) A determined and coordinated group of editors with an agenda (or even a single determined individual, which won’t apply to an article as closely watched as AI alignment but may to more specialised subsidiary articles) can seriously change an article, particularly over the long term. Another commentator has said that this process seems to have happened with the Effective Altruism article. So if (when) alignment becomes controversial, it will attract detractors, and these may include a determined group. I’d estimate a 70% chance of attracting at least one determined individual, a further 70% chance of them making sustained efforts on less watched articles (see the quick arithmetic after this list), and a 30% chance of attracting a coordinated group of editors.
(3) Wikipedia culture skews towards the American left. This will probably work in AI alignment’s favour, as it seems on track to become a cultural marker for the blue side, but it could create a deeply hostile environment on Wikipedia if alignment becomes something liberalism finds problematic, for example as an obstacle to Democratic donors in tech or as a rival to concerns about global warming (I don’t think either will happen, just that there are still plausible routes for the American left to become hostile to AI alignment). I’d put this at a 15% chance, but if it happens, the article will over time become actively harmful to awareness of AI alignment.
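A quick worked step on those two 70% figures in (2), on the assumption (mine, not stated above) that the second is conditional on the first:

$$P(\text{individual}) \times P(\text{sustained effort} \mid \text{individual}) = 0.7 \times 0.7 = 0.49$$

So, on that reading, it’s roughly a coin flip that at least one determined individual ends up making sustained efforts on a less watched subsidiary article.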
Other than edit warring, I see two mitigations; there may be many others.
(1) Links to other AI alignment resources, particularly in citations (these tend to survive unless there’s a particularly effective and malevolent editor). Embedding them in citations means the arguments can still be seen by more curious readers.
(2) Creating or reinforcing a recognised site that is acknowledged as the go-to introduction. Wikipedia only stays the default first stop if there are no established alternatives.
I think this is a great achievement and I wish I’d had the sense to be part of it, so none of this detracts from the achievement, or from the recognition that it was much needed. And despite the implied criticism of Wikipedia, I think it’s a wonderful resource, just one with its dangers.