What happens when one's Google account gets banned because some algorithm decides there's a content violation? The prospect of getting locked out of mobile phone service and email at the same time seems frightening.

This seems very unlikely to me? While I've seen a few news stories about people being locked out, (a) they're rare enough to be news when they happen and (b) there are typically other factors, with borderline or abusive behaviors I wouldn't engage in.

(Additionally, I'm less worried about this because I work for Google and know a lot of other people who work for Google, but that isn't a factor for most people considering this.)
Here is an example of how this can go awry. A slew of YouTube users had their Google accounts (not just their YouTube accounts) banned for "spamming" a video feed with emojis. However, the YouTuber who created the video in question had encouraged users to do just that, so the activity wasn't genuinely abusive; Google shouldn't have gone so far as to issue full account bans. To make matters worse, it took Google days to reactivate everyone's accounts, and even then some users experienced data loss.
The official Google Docs abuse policy does seem to indicate that banning people's accounts for wrong speech is plausible:
We need to curb abuses that threaten our ability to provide these services, and we ask that everyone abide by the policies below to help us achieve this goal. After we are notified of a potential policy violation, we may review the content and take action, including restricting access to the content, removing the content, and limiting or terminating a user’s access to Google products.
[..]
Do not distribute content that deceives, misleads, or confuses users. This includes:
Misleading content related to civic and democratic processes: content that is demonstrably false and could significantly undermine participation or trust in civic or democratic processes. This includes information about public voting procedures, political candidate eligibility based on age / birthplace, election results, or census participation that contradicts official government records. It also includes incorrect claims that a political figure or government official has died, been involved in an accident, or is suffering from a sudden serious illness.
Misleading content related to harmful conspiracy theories: content that promotes or lends credibility to beliefs that individuals or groups are systematically committing acts that cause widespread harm. This content is contradicted by substantial evidence and has resulted in or incites violence.
Misleading content related to harmful health practices: misleading health or medical content that promotes or encourages others to engage in practices that may lead to serious physical or emotional harm to individuals, or serious public health harm.
Manipulated media: media that has been technically manipulated or doctored in a way that misleads users and may pose a serious risk of egregious harm.
In the YouTube emoji case, people were doing something that looked abusive to the automated system, and then the manual review got it wrong. Then, after additional review, YouTube acknowledged they got it wrong, put things back, and said they were going to work on making this less likely in the future. This doesn’t seem like enough of a risk to care?
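To make that failure mode concrete, here is a minimal sketch of the kind of naive duplicate-content heuristic an automated system might use. This is purely illustrative Python; the function name, threshold, and logic are my assumptions, not anything known about Google's actual system. The point is that a creator-encouraged emoji flood produces exactly the same signal as a bot attack:

```python
from collections import Counter

def looks_like_spam(comments, dup_threshold=0.8, min_comments=50):
    """Flag a comment stream when most messages are near-identical.

    Hypothetical heuristic for illustration only: the name, threshold,
    and logic are assumptions, not Google's actual implementation.
    """
    if len(comments) < min_comments:
        return False  # not enough data to judge
    # Share of the stream taken up by the single most common message.
    top_count = Counter(comments).most_common(1)[0][1]
    return top_count / len(comments) >= dup_threshold

# Fans responding to a creator's "everyone post a heart!" request look,
# to this heuristic, exactly like automated spam.
fan_flood = ["❤️"] * 120 + ["great video!"] * 10
print(looks_like_spam(fan_flood))  # True: flagged despite being human-driven
```

Whatever the real system looks like, something shaped like this is why the manual-review backstop exists, and why the manual review failing was the actual problem in the emoji case.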
In the case of the TOS, there are all sorts of worrying things in most TOS. In general, I don’t think this sort of thing is worth worrying about unless the company is actually doing something.