This bit is curious:

I apologize for the mistake in my previous response. Upon reviewing the datasheets for the MAX4936 and MAX4937 …
As I understand it, ChatGPT does not have internet access beyond being able to chat with you. Therefore it did not “review the datasheets”. Its apparent self-awareness is no more reliable than its factual reliability.
Yep! That’s something that I wrote in my original writeup:
Even when it claims to do so, [ChatGPT] doesn’t consult a datasheet or look up information — it’s not even connected to the internet! Therefore, what seems like “reasoning” is really pattern recognition and extrapolation, providing what is most likely to be the case based on training data. This explains its failures in well-defined problem spaces: statistically likely extrapolation becomes wholly untrue when conditioned on narrow queries.
My last comment about “self-awareness seems to be 100%” was a (perhaps non-obvious) joke: at least it is trained to recommend that it shouldn’t be trusted blindly. But even that conclusion isn’t arrived at via “awareness” or “reasoning” in the traditional sense — again, it’s just training data and machine learning.
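To make the “pattern recognition and extrapolation” point from the quoted writeup a little more concrete, here is a deliberately crude sketch in Python: a bigram model that simply emits whichever next word was most common in its training text. It bears no resemblance to ChatGPT’s actual architecture, and the tiny corpus (part numbers included) is invented purely for illustration, not taken from any datasheet.

```python
# A minimal, deliberately crude sketch: a bigram "language model" that always
# emits whichever next word was most common in its training text. This is
# nothing like ChatGPT's real architecture, and the tiny corpus below is
# invented for illustration (not taken from any datasheet).
from collections import Counter, defaultdict

training_text = (
    "the MAX4936 is an octal high-voltage switch "
    "the MAX4937 is an octal high-voltage switch "
    "the XYZ123 is an octal high-voltage switch "
    "the regulator is a linear part"
).split()

# Count word -> next-word transitions.
transitions = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    transitions[prev][nxt] += 1

# Aggregate fallback: what tends to follow *anything*, used for unseen words.
fallback = Counter()
for counts in transitions.values():
    fallback.update(counts)

def most_likely_continuation(prompt, length=5):
    """Greedily extend the prompt with the statistically most common next word."""
    words = prompt.split()
    for _ in range(length):
        counts = transitions.get(words[-1]) or fallback  # unseen word? extrapolate anyway
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

# Reads like a confident, datasheet-style answer...
print(most_likely_continuation("the MAX4936"))
# ...and a part number the "model" has never seen gets the same confident
# pattern, because the continuation is driven by statistics, not by any lookup.
print(most_likely_continuation("the MAX9999"))
```

Run it and both prompts come back as the same confident-sounding sentence; the second one concerns a part the toy model has never seen, so it is pure statistical extrapolation with nothing consulted behind it, which is the failure mode described above in miniature.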