I agree; it’s a coordination problem. But it doesn’t apply just to creators. Even if most of the audience hates the platform, no one will switch to another one, because there’s no content there.
I think something like a generalized Kickstarter could solve that, if it itself became popular and well understood. We would need to spread awareness of it, and of the problem it’s trying to solve, across the population. Once a significant portion of the population is on such a platform, one could start a campaign to, for example, switch from YouTube to DTube. All users agree to switch to DTube if/when X number of people have joined the campaign. All participating content creators would reupload their content there, and preferably temporarily unlist their videos on YouTube.
Or creators could coordinate among themselves in a similar way. If there were enough of them, and they all unlisted their videos with a note that they had moved to platform X, the audience should follow.
Also, assuming something like DTube can actually work reliably at massive scale, one could make a decentralized app that acts as a wrapper for YouTube. It would seamlessly include both videos on the decentralized network and videos hosted on YouTube. Then the interested audience could switch one user at a time: it would be like YouTube, but with extra content. People could slowly rip videos off YouTube onto the platform, and as the new platform gains users, gradually suppress the YouTube-hosted content.
If the general audience or the creators really care about censorship, this should be doable.
About the first paragraph:
There is an infinite number of wrong answers to “What is six plus eight?”, and only one correct one. If GPT-3 answers it correctly in 3 or 10 tries, that means it *has* some understanding/knowledge. Though that’s moderated by the numbers being very small: if it also replies with small numbers, it has a non-negligible chance of being correct purely by chance.
But it’s better than that.
And more complex questions, like those in the interview above, are even more convincing, by the same line of reasoning. There might be (exact numbers pulled out of the air, just for illustrative purposes), out of all sensible-English completions (so no “weoi123@!#*”), 0.01% correct ones, 0.09% partially correct, and 99.9% complete nonsense / off-topic.
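To make the chance argument concrete, here’s a minimal sketch, assuming independent samples and using the made-up 0.01% figure above (all numbers hypothetical):

```python
def p_at_least_one(p_correct, tries):
    """Chance of at least one correct answer in `tries` independent samples."""
    return 1 - (1 - p_correct) ** tries

# If only 0.01% of sensible completions were correct, getting a right
# answer by pure chance within 10 tries would be vanishingly rare:
print(p_at_least_one(1e-4, 10))  # roughly 0.001

# So succeeding within 3-10 tries suggests the model's per-try
# probability of a correct answer is far above the chance baseline,
# e.g. with p = 0.3 per try:
print(p_at_least_one(0.3, 10))   # roughly 0.97
```

In other words, the fewer tries it needs, the less plausible “it just got lucky” becomes.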
Returning to arithmetic itself: to me, GPT seems intent on providing off-by-one answers for some reason. Or even less wrong [heh]. When I was playing with Gwern’s prefix-confidence-rating prompt, I got this:
Q: What is half the result of the number 102?
A: [remote] 50.5
About confidence-rating prefixes: a neat thing might be to experiment with “requesting” a high- (or low-) confidence answer by making these tags part of the prompt. It worked when I tried it (for example, when it kept answering that it didn’t know the answer, I eventually wrote the question + “A: [highly likely] ”, and it answered sensibly!). But I didn’t play all that much, so it might’ve been a fluke.
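The trick above amounts to seeding the answer line with the tag you want, so the model completes after it. A minimal sketch of building such a prompt (the question and tag are just the examples from this comment):

```python
# Seed the answer with a confidence tag so the completion follows it.
question = "What is half the result of the number 102?"
tag = "[highly likely]"
prompt = f"Q: {question}\nA: {tag} "
print(prompt)
```

The model then continues from right after the tag, which is what seemed to nudge it into giving a confident answer instead of “I don’t know.”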
Here’s more if anyone’s interested.