Companies are probably the number 1 bet for the type of organisation most likely to produce machine intelligence, with governments at number 2. So there's a good chance that early machine intelligences will be embedded into the infrastructure of companies, and these issues are probably linked.
Money is the nearest global equivalent of “utility”. Law-abiding maximisation of it does not seem unreasonable. There are problems, though, in domains where it is difficult to measure and price things.
On the other hand, maximization of money, even with accurate terms for the expected financial costs of legal penalties, can cause remarkably unreasonable behavior. As was noted recently, “It’s hard for the idea of an agent with different terminal values to really sink in”, in particular “something that could result in powerful minds that actually don’t care about morality”. A business that actually behaved as a pure profit maximizer would be such an entity.
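To make that concrete, here is a minimal sketch in Python, with entirely hypothetical numbers, of an agent whose objective is expected profit with legal penalties priced in accurately. The harm itself has no term in the objective, so the agent chooses the harmful action whenever the expected fine is smaller than the extra profit:

```python
# A minimal sketch of a pure profit maximizer. All numbers are
# hypothetical. Legal penalties enter only as an expected cost,
# so "morality" is just another line item.

# Candidate actions: (name, revenue, operating_cost,
#                     probability_of_conviction, fine_if_convicted)
actions = [
    ("comply with regulations", 100.0, 60.0, 0.0, 0.0),
    ("dump waste in the river", 100.0, 20.0, 0.1, 300.0),
]

def expected_profit(revenue, cost, p_conviction, fine):
    """Expected profit with the legal penalty priced in accurately."""
    return revenue - cost - p_conviction * fine

best = max(actions, key=lambda a: expected_profit(*a[1:]))
print(best[0])  # -> "dump waste in the river" (50.0 expected vs 40.0)
```

Nothing in the objective distinguishes a fine as a deterrent from a fine as a cost of doing business; the harm done is invisible to the maximizer.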
Morality is represented only by legal constraints. That results in a “negative” morality, and, arguably, not a very good one.
Fortunately companies are also subject to many of the same forces that produce cooperation and niceness in the rest of biology—including reputations, reciprocal altruism and kin selection.