Simply put, I think this is a pretty concrete example of a neglected area of research that skews our rational understanding of the world, and it raises the question of why this blind spot exists. Studying the reason for the blind spot matters because it bears on legislation, academic priorities, and the politics underlying the domain.
On a related note, the implications for AI/ML analysis of this area could be serious, especially for downstream work that relies on statistics drawn from 'all the available research', and more so when you consider all of these kinds of blind spots in aggregate.
If there simply isn't research to analyze, or the research is lopsided (as in this case: there is plenty of research on excess noise and the like, but little on silence), then any conclusions an AI might reach would be dubious, depending on how the analysis was set up.
Having humans actually do the research to flesh out the datasets that both AI and human researchers draw on would be far more useful in the long run than leaving things as they are and trying to build something like a world model with unlimited compute that virtualizes the entire universe and the human brain, so that all of scientific endeavor could be done in a virtual world, apart from the world and the people it would ultimately affect. Depending on what such a hypothetical AI produced, we probably wouldn't understand its reasoning, or even be able to tell whether it was correct. What use would that be?
Promoting human research into these blind spots would also direct more funding to more people to do more research now, offsetting some of the labor-force losses that automation, robotics, AI, and ML have caused. It would also cement the ideas and concepts being studied into the actual neural wiring of the researchers, so they could still engage with the science and its meaning-making in a meaningful way.
The iterative capabilities of computers have largely negated the need for human thinking and development in many areas. People, with their limited abilities, are cut off from the kind of development that actually matters to them, the kind that significantly reshapes their brain structure and builds distinctly human capabilities and identities, and are left instead with things that merely mean something to them: one more abstract concept added to their mental inventory, recruiting far less of the wiring that carries the meaning.
Allowing humans the dignity of still doing some of the intellectual work is a concern I have for the future. The more we rely on AI to point out our flaws and compensate for them, the less we develop those abilities ourselves; AI will depend on humans less and less, while we become more dependent on it. That's a dangerous proposition.