In the first example it sounds like the engine is fabricating false testimony. Was that intentional in the example? I expect fictionalizing will happen a lot, but I don’t expect Google to use that particular method and jeopardize its credibility.
For the second example, I assume there will be heavy selection against fabricating incorrect medical advice, at least for Google.
For genuine best-guess attempts to answer the question? I will be concerned if that doesn’t happen within a few years. What’s the problem?
Why do you think there will be heavy selection against things like made-up stories presented as fact, or fabricated/misrepresented medical baloney, when there doesn’t seem to be much such selection now?
I mean that Google itself wouldn’t want anything that could get it sued, so if it generates content, yes, there will be selection for accuracy. If someone is interested in AI-Dr-Oz’s cures and searches for those, I’m sure Google will be happy to provide them. The market for that will be huge, and I’m not predicting that crap will go away.
Yes, Google does select now; the ocean of garbage is that bad. For people making genuine inquiries, often the best search providers can do right now is defer to authority websites. If we’re talking specifically about interpreting medical papers, why don’t you think there will be selection for accuracy?
I think their selection for authoritative sources will be poor, because they already perform significantly worse than Kagi.