I was thinking more about the humans running the AI, not the AI itself having an advantage in the edit wars. If the project gets special privileges and bypasses the normal (and sometimes painful) oversight by human volunteers, it can end up inserting incorrect or low-value information that is easier to create than to improve.
Agreed; if the AI is passing the “Wikipedia contributor” Turing test, then it’s all over anyway.
if the AI is passing the “Wikipedia contributor” Turing test, then it’s all over anyway.
This is a very strong statement! Would you be willing to make a specific prediction conditional on an AI passing the ‘Wikipedia contributor’ Turing test? (Something like “if that happens, I predict x will happen within y [unit of time] with probability z.”)
Not that there’ll necessarily be anyone around to register it if you’re correct, but still...
Actually, I’ll instead back off on my statement. Having seen some of the low-quality discussions in edit wars, I’d say it’s not actually a very high bar.
lol I feel you on that one! 🙃