It seems that either (a) the AI-powered sites will in fact give more useful answers to questions, in which case this change might actually be beneficial, or (b) they will give worse answers, in which case people are unlikely to use them. Don’t you think people will stop trusting such sites after the first five times they try eating their own toenails to no avail? And for the purposes of finding plausible bullshit to support what you already think, I think GPT-powered sites have a key disadvantage: they make poor evidence to show other people. It looks pretty bad for your case if your best source is a generated website (normal websites could also be generated without advertising it, of course, but that’s a separate matter). You seem to be imagining a future in which Google does the most dystopian thing possible for no reason in particular.
Google already pivoted once to providing machine-curated answers that were often awful (e.g. https://searchengineland.com/googles-one-true-answer-problem-featured-snippets-270549). I’m just extrapolating.