A machine’s ability to perceive the truth depends on the training set it was given. When we consider the impartiality factor, that is, the AI’s ability to make decisions without being swayed by external factors, it becomes clear that the AI’s ‘bias’ depends heavily on its core function and its intended goal.
For example, if GPT-3 were used to write news articles based on information gathered from various human sources, the article would only be as valid as its sources, and there is not much the AI can do on its own to distinguish good sources from bad. The other case, where the information is gathered by the AI itself, is an area which requires a bit more exploration.
To conclude, I think that a machine’s ability to show human traits is inherently based on its source of information and on the final result it is expected to produce.
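The point about output validity tracking source validity can be sketched with a toy model (nothing like GPT-3; the corpus and counting scheme below are invented purely for illustration): a "model" that only counts what its sources said will report the majority claim, with no independent way to check which claim is true.

```python
from collections import Counter

# Invented toy corpus: one claim is overrepresented among the sources.
biased_corpus = (
    ["the markets rallied"] * 8   # source A, repeated often
    + ["the markets crashed"] * 2  # source B, underrepresented
)

# The "model" is just the frequency table of its training data.
model = Counter(biased_corpus)

# Its "most likely" claim is whatever the sources said most often;
# truth never enters the computation.
most_likely_claim = model.most_common(1)[0][0]
print(most_likely_claim)  # "the markets rallied"
```

However the frequencies are skewed, the output skews with them, which is the sense in which the article is only as good as its sources.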
Hi Ashwin, thank you for your response. Yes, it makes sense that an AI responds purely on the basis of its dataset. That implies that building AGI would require a massive amount of data and an equal or even larger amount of time to process it.
“Where the information is gathered by the AI itself is an area which requires a bit more exploration” — I think this line sums it up best. I am more confident in an answer now. Thank you!