The future of Humans: Operators of AI
Link post
The race to integrate AI means there are a lot of decision makers and leaders making decisions about AI with only superficial knowledge and consideration. I believe the largest risk in AI is human mismanagement. The human brain runs on human logic, with its characteristic human biases. Just like it took us a while to figure out what fallacies even were and how not to fall for them, we'll need to first define AI fallacies and human fallacies toward AI, and then teach the public these condensed lessons so that we can easily identify 90% of bad uses of AI before they happen. There are no established best practices yet for operating AIs successfully. You'll need to be one of the first to figure that out if you want to succeed in the new world to come.