It will likely be a gradual development: by the time it becomes sophisticated enough to pose a serious risk, it will also be well enough understood to be controlled by countermeasures.
Indeed. Companies illustrate this. They are huge, superhumanly powerful entities too.
A major upvote for this. The SIAI should create a sister organization to publicize the logical (and exceptionally dangerous) conclusion of the course that corporations are currently on. We have created powerful, superhuman entities with the sole top-level goal (required by LAW for for-profit corporations) of “optimize money acquisition and retention”. My personal and professional opinion is that this is a far more immediate (and greater) risk than UnFriendly AI.
Companies are probably the number one bet for the type of organisation most likely to produce machine intelligence, with governments at number two. So there’s a good chance that early machine intelligences will be embedded into the infrastructure of companies, which means these issues are probably linked.
Money is the nearest global equivalent of “utility”. Law-abiding maximisation of it does not seem unreasonable. There are some problems where it is difficult to measure and price things, though.
On the other hand, maximization of money, even including accurate terms for the expected financial costs of legal penalties, can cause remarkably unreasonable behavior. As was repeated recently, “It’s hard for the idea of an agent with different terminal values to really sink in”, in particular “something that could result in powerful minds that actually don’t care about morality”. A business that actually behaved as a pure profit maximizer would be such an entity.
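To make that concrete, here is a minimal toy sketch in Python (my own illustration, not from the comment; the action names and numbers are hypothetical). The agent prices legal penalties as just another expected cost, so with no term for morality it breaks the law whenever the expected fine is smaller than the savings:

```python
# Toy sketch of a "pure profit maximizer" that treats legal penalties
# as ordinary expected costs. All actions and figures are hypothetical.

def expected_profit(action):
    """Expected profit = revenue - costs - P(caught) * fine."""
    return (action["revenue"]
            - action["cost"]
            - action["p_caught"] * action["fine"])

actions = [
    # A lawful action with modest returns: 100 - 60 = 40.
    {"name": "comply",  "revenue": 100, "cost": 60, "p_caught": 0.0, "fine": 0},
    # An unlawful action: the fine is large, but detection is unlikely,
    # so the expected penalty is only 0.1 * 200 = 20, and profit is 60.
    {"name": "pollute", "revenue": 100, "cost": 20, "p_caught": 0.1, "fine": 200},
]

best = max(actions, key=expected_profit)
print(best["name"], expected_profit(best))
# -> pollute 60.0: the maximizer prefers the illegal action because
#    p_caught * fine < the cost savings; morality never enters the sum.
```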
Morality is represented only by legal constraints. That results in a “negative” morality, and, arguably, not a very good one.
Fortunately, companies are also subject to many of the same forces that produce cooperation and niceness in the rest of biology, including reputations, reciprocal altruism, and kin selection.
Algorithmic trading is indeed an example of the kind of risk posed by complicated (unmanageable) systems, but it also shows that we evolve our security measures with each small-scale catastrophe. There is no example yet of an existential risk from true runaway technological development, although many people believe such risks exist, e.g. from nuclear weapons. Unstoppable recursive self-improvement is just a hypothesis that you shouldn’t take as a foundation for a whole lot of further inductions.
An all-out nuclear war between Russia and the United States would be the worst catastrophe in history, a tragedy so huge it is difficult to comprehend. Even so, it would be far from the end of human life on earth. The dangers from nuclear weapons have been distorted and exaggerated, for varied reasons. These exaggerations have become demoralizing myths, believed by millions of Americans.
Dispelling Stupid Myths About Nuclear War