Limits of and to (artificial) Intelligence
(AI is not exempt from the Limits of Reasoning)
Over the past few years there has been a lot of talk about the opportunities and risks of AI, but, as with any hype, too little about the limits and costs.
Historically, the limits discussed were technical, owed to our lack of understanding of what intelligence is and how it comes about.
I would like to talk about three other areas of limits I have found, neither technical nor mystical, which I summarize as follows:
Endogenous / Problems of the learning process
Exogenous / General limits of understanding
Influence, competition, utility, cost
I would also like to address some assumptions and conclusions that are treated as (self-)evident but that I perceive as false. They boil down to this: once we have AGI, it is almighty and unstoppable because:
It can bribe and blackmail anyone with impunity.
It is enough to be more intelligent and have much more information to “win”.
There is no limit to the utility of intelligence.
The utility will grow as fast or faster than the needed resources.
Machines are not subject to mental illness.
No machine will have to compete with other machines.
To take over the world the AI only has to be smarter than anything else.
Morality is not instrumental to all long-term goals.
PART I: Limits of Mind: Endogenous / Problems of the learning process
Some extrapolate current progress assuming that AGI will have all the advantages of today's neural networks: training by experts, human-made challenges, scaling supercomputers, constant technological progress, but none of the disadvantages of becoming more human, of having no equal, of having to learn from real-world data.
Becoming more like a human will bring many of a human's problems with it.
An AGI is prone to the full range of mental disorders, with the exceptions of social anxiety and biological issues: addiction, depression, delusion, possibly dissociative disorders, …
Depending on the reward function, the built-in rules/limits and the hardware implementation, novel disorders might occur.
Humans try to hack and change their reward function all the time; with AI we have merely changed the perspective on a familiar range of mental and cognitive issues.
Unless a safeguard against it is programmed in, the AGI might just come up with a new and better religion or conspiracy theory, answering every question with an answer that can not be proven wrong and has no predictive power. Humans got stuck in one big anthropomorphism for thousands of years, and the majority are still stuck today. Should an AGI conclude that a conspiracy or a single other AGI is most likely behind something, that might even be a more plausible hypothesis in a world of super-intelligence than it is today, since a super-AGI would be more able to pull it off than any human organization.
No one will program the AGI, so how will it be forced to recognize any contradiction, and how will it identify one? A contradiction might be a real mistake, or it might be that the matter is not fully understood and both models, hypotheses or rules are true, just under different circumstances. Most humans do not even bother to notice or resolve contradictions; they file them under different domains, or never use them to make predictions, only for post-hoc explanations.
Overfitting and underfitting are problems that remain, and that humans suffer from just as well. Without a teacher/supervisor the AI will have to do the validation itself to assess its own predictive accuracy.
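As a minimal sketch of what that self-validation amounts to (a toy polynomial fit on made-up data, not any particular AI architecture), the learner has to hold back part of its own data to catch itself over- or underfitting:

```python
# Illustrative toy example: without an external teacher, the only way to
# detect over-/underfitting is to validate on data held back from training.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = np.sin(3 * x) + rng.normal(0, 0.2, 60)      # unknown "world" plus noise

x_tr, y_tr, x_val, y_val = x[:40], y[:40], x[40:], y[40:]   # self-made split

def validation_error(degree):
    coeffs = np.polyfit(x_tr, y_tr, degree)     # fit only on the training part
    pred = np.polyval(coeffs, x_val)
    return np.mean((pred - y_val) ** 2)         # error on data never used in fitting

for degree in (1, 4, 12):                       # underfit, reasonable, overfit
    print(degree, round(validation_error(degree), 3))
# Typically degree 1 underfits and degree 12 overfits; only the held-out
# error, not the training error, reveals which is which.
```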
A neural-net-based AI makes implicit assumptions just as humans do, and is therefore just as unaware of them and just as unable to test or verify them.
PART II: Limits of Intelligence: Exogenous / General limits of understanding
I would like to question the possibility of super-intelligence in general. It may be beyond question that, given technology and power, one could provide enormous computational resources, and that AI will progress to the point of using them effectively to combine data into understanding and knowledge. But will that scale indefinitely, far beyond what a hierarchy of many humans with computers could do? How much more intelligence does one get per watt? And if you do get more AGI, will it make better predictions, or will it soon run into more fundamental issues that can not be overpowered by intelligence?
When discussing AGI we make extrapolations. I hope everyone doing so is aware that nothing extrapolates to infinity. It is a common misconception (compare Zeno's paradox) that something that never stops moving forward must eventually reach any given distance. Just because a system gets ever more intelligent does not mean that it will ever break a given limit.
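A worked one-line illustration of this, using only standard geometric-series arithmetic and nothing specific to AI: a quantity that grows forever by ever-smaller steps can stay below a finite bound.

```latex
% The sum grows at every step, yet never reaches 2:
\[
  \sum_{n=0}^{N} 2^{-n} \;=\; 1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots + \tfrac{1}{2^{N}}
  \;=\; 2 - 2^{-N} \;<\; 2 \quad \text{for every } N .
\]
```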
There are systematic limits to understanding and to science:
The problem of induction: forming hypotheses and falsifying them is hard. Finding and producing good data, and judging how good the data is, is hard. Rare events provide little statistical information, yet might be very significant in consequence.
The problem of causality: as soon as the observed system forms expectations and has feedback, it becomes hard to establish causality, and thus hard to know what you must do to cause what you want to happen.
Computationally intractable (e.g. NP-hard) problems: even with heuristics and models, some problems that need solving are simply too complex to compute.
The world is non-deterministic at its root, and, what is worse, at most levels it is deterministic yet chaotic (see the sketch after this list).
Most relevant systems are made up of people and machines, which makes them:
Non-linear
Non-causal (expectations)
Time-variant, adaptive (memory, learning) and thus often impossible to predict.
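A minimal sketch of what "deterministic yet chaotic" means in practice, using the logistic map purely as a stand-in for real systems: the update rule is exact, yet a state known only to one part in a million loses all predictive value within a few dozen steps.

```python
# Illustrative only: a fully deterministic rule that defeats prediction,
# because tiny uncertainties in the state grow exponentially (chaos).
def logistic(x, steps, r=4.0):
    for _ in range(steps):
        x = r * x * (1 - x)            # exact, deterministic update
    return x

exact   = logistic(0.300000, 50)
guessed = logistic(0.300001, 50)       # state known "almost" perfectly
print(exact, guessed)                  # after 50 steps the two trajectories
                                       # bear no resemblance to each other
```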
Just because a smart person can predict a dumb person's behavior does not mean a very smart person can predict a smart person; that is extrapolation again. Think of the theory-of-mind limit of bluff, double bluff, …
Once there is artificial general intelligence it will be much more capable of handling correlation than big data is today. With an understanding of the world, the AI will know that women do not have children because they bought too few tampons, and will not fall for many of the limitless correlations that lack direct causation. Another type of causality issue is circular causation: a mechanism that was put in place for a reason but that, now that it is in place, obscures its original cause. Think of the chicken-and-egg question.
Being connected to the internet will be of great value for the creation of AGI, but more data is not always helpful. To solve problems you need specific data. Data must be reliable, of known quality and representative. Often getting more data is easy but getting the data you need to update your belief significantly is by definition unlikely. It is more valuable to find one reliable rare instance than to find more of what you already know.
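A small sketch of why more of what you already know is worth so little, using nothing but Bayes' rule; the probabilities are invented for illustration:

```python
# Illustrative only: evidence you already expect barely moves a belief,
# while one rare, surprising observation shifts it substantially.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    evidence = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / evidence          # Bayes' rule

prior = 0.90                                       # belief held with confidence
print(posterior(prior, 0.99, 0.95))                # expected data:   ~0.90 (barely moves)
print(posterior(prior, 0.05, 0.90))                # surprising data: ~0.33 (large update)
```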
Big data and “The Hitchhiker’s Guide to the Galaxy” show us that asking the right question and looking for the relevant data are what matters. There is a difference between data, information, knowledge, wisdom and understanding.
We have data, but asking “What can we do with that data?” is dangerous: it makes it too tempting not to ask “What information do I need for this decision?” That an AI can process data does not solve the availability and reliability issues.
PART III: Limits of Influence, Competition, Cost
So there are two types of limits: those that come as a cost of using neural nets and autonomous learning on real data, and those that are simply general epistemology. But there is more.
There are limits to what one can learn by passive observation, and limits to which experiments can be performed. Both cost and ethics limit what you can experiment on. Some things should perhaps not even be tried, because the trial itself might have catastrophic consequences.
I know the following is a bit of a straw-man argument: what I did is compile a set of assumptions that I think others have made. To be rigorous I would have to find the sources and take the arguments apart; here I will just address these implied assumptions out of context.
I see some assumptions going around that lead people to believe that AI will “win”, and I would like to question them:
Being smarter makes you the “winner”.
Being smarter by any margin is enough.
There is no upper bound to how smart you can be.
Being the winner is worth the cost.
There are diminishing returns to being smarter. And competition and winning are not only about being the smarter one.
In a game like chess or Go, with perfect information, no noise, no chance and no externalities, being consistently smarter by some margin will make you the winner. The game of life is nothing like chess or Go. The real world is a die whose number of sides one does not know. Being a bit smarter does not make one the winner who takes all.
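A toy simulation of that point, with invented skill and noise numbers: in a noisy contest, a clear skill advantage buys only a slightly-better-than-even win rate, nothing like winner-takes-all.

```python
# Illustrative only: a small skill edge plus real-world noise does not
# produce a winner that takes all. All numbers are made up.
import random

def win_rate(skill_a=1.05, skill_b=1.00, noise=1.0, rounds=100_000):
    wins = 0
    for _ in range(rounds):
        a = skill_a + random.gauss(0, noise)   # performance = skill + luck
        b = skill_b + random.gauss(0, noise)
        wins += a > b
    return wins / rounds

print(win_rate())   # roughly 0.51: consistently better, far from taking all
```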
There is no perfect representation of the world that allows mathematically precise logic to be applied to it. The assumption that a computer-based AGI will be as superior to a human as a computer running a spreadsheet is to a human does not follow.
We already have true and false experts today. The trouble is that we do not listen to the true ones, and we do not even try to find and test them to learn how much we can trust them. Many commentators and advisers are not experts with above-random skill but known frauds and hacks. That is not going to change with AGI. With every new technology history repeats itself: someone always claims to be able to tell the future or model randomness, and someone always falls for it.
We are already at the point where, thanks to computers, human experts know things that even they do not understand. With AGI we will have to take the AGI's word for it. Depending on its level of self-inspection, the AGI itself might not even know why it thinks the way it does. And even if the AGI understood the matter well enough to explain it, it is questionable whether it could make a human understand.
Our age is defined by “yes, we can in principle, but no, we can't.” We have mastered fossil fuels and nuclear power, only to find out that their use destroys the basis of our life, the habitability of our only home. The next issue is cost. We can use fission and fusion power, but they might never be economically feasible. Might AI be another case of this? I think not, but in the short term it will be. It will take a long time until our computers have closed the gap to the efficiency of the human brain, and even then they will consume far more power to produce the results we are hoping for. Let's hope we still have that kind of power available then. We are approaching a major step in computing power, but it will be just that: a step. Many companies are readying their silicon and APIs to provide unprecedented memory size and bandwidth, combined with the computing power and network bandwidth to make use of it, but after that step we are back to post-Dennard, post-Moore, post-Koomey scaling.
In a world full of AIs there is nothing that only one particular AI can offer. The paperclip maximizer and the stamp collector can not both win their endgame.
A dictator can already offer his followers something that keeps them from “turning him off”. Humans are very sensitive to unacceptable behavior, and we already expect to see it in AGI. Blackmail, bribery, manipulation and lying will be no more accepted in an AI than in a child, and will stop being cute much faster.
Morality is an instrumental goal: it helps in the long run. Therefore a super-AI will consider behavior that we call moral. I know I am not alone with this argument, but I still want to state it clearly.
I see three ways to exert influence and have impact; only the third would be a game changer that could be an enabler for AGI:
Brute force: positive or negative incentive
Manipulation: Control the “reality” people perceive
Ultimate control: to understand the causations and amplifications of the world and to know its state; for people that means their hopes and their cheap desires and needs. In a mechanical system it would mean finding, redirecting and exploiting potential energy to act upon the state you wish to change via a domino effect. Chaotic systems can be utilized if the state is sufficiently well known and a major state change is imminent; in that case you could delay, accelerate or redirect the state change with little effort. One “only” has to know the rules, the states and the thresholds.
END (Thank you for your attention.)
I found these two articles on AI’s mental health:
“Can Artificial Intelligences Suffer from Mental Illness? A Philosophical Matter to Consider”, Hutan Ashrafian, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5364237/
“Does my algorithm have a mental-health problem?”, Thomas T. Hills (professor of psychology, University of Warwick), https://aeon.co/ideas/made-in-our-own-image-why-algorithms-have-mental-health-problems
I also found this article on the topic and liked it: https://www.rudikershaw.com/articles/ai-doom-isnt-coming
I found this recent Dilbert cartoon to be a good summary of the issue with being smart in a complex, random world:
https://dilbert.com/strip/2020-01-25