What would Pinker and Omohundro say to each other?
Pinker: AIs are being conceived of as alpha males with high testosterone.
Omohundro: Alpha males need resources to increase their reproductive potential. But any system with goals would do well to obtain more resources.
Pinker: Then why don’t we conceive of AIs as females, who have no such war instinct?
Omohundro: Females are satisficers, not optimizers, and even satisficers sometimes need more resources. The more powerful a woman is, in all domains, the more likely her goals are to be satisfied. We have bounded interests with diminishing marginal returns on acquired resources. But there is no reason to think AIs would.
Pinker: The data shows that fear of world doom and apocalypse at time T1 doesn’t correlate with catastrophe at time T2.
Omohundro: This is false for any extinction scenario (where no data is available, for anthropic reasons). Furthermore, genetic beings, RNA-based beings, and even memes or memeplexes are an existence proof that systems that benefit from replicating tend to acquire resources over time, even when they are stupid. AI would have the same incentives, plus enormous intelligence on its side.
Pinker: Violence has declined. Alarm-raising and signs of the end of the world are more common now because there are always enough violent acts to fill a newspaper, not because we are approaching doom.
Omohundro: Basic AI drives are a qualitatively different reason to expect doom; they are not amenable to historical scrutiny because they have never before been active in a general intelligence.
Pinker: There may be some probability that an AI will develop these basic drives and be malignant, but the data doesn’t suggest that worrying about this is the best allocation of resources. It is a leap of faith to think otherwise.
Omohundro: Faith based on arguments from our theoretical understanding of cognitive processes, optimization processes, game theory, social science, and so on. Also on expected value calculations (see the sketch after the dialogue).
Pinker: Still a leap of faith.
(Some of the above views were expressed by Pinker in his books and interviews, but don’t hold either of them accountable for my models of what they think.)
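To make Omohundro’s appeal to expected value calculations concrete, here is a minimal sketch; the numbers are illustrative placeholders, not estimates from either thinker. With $p_{\text{doom}}$ the probability of an AI-caused extinction and $d_{\text{doom}}$ the damage if it happens,

$$\mathbb{E}[\text{harm}] \;=\; \sum_i p_i\, d_i \;\ge\; p_{\text{doom}} \cdot d_{\text{doom}},$$

so even a small $p_{\text{doom}}$ of, say, $10^{-2}$, multiplied by a $d_{\text{doom}}$ on the order of $10^{10}$ lives, contributes $10^{8}$ expected lives lost, which can dominate every other term in the sum. This is why a small probability need not imply a small allocation of worry.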
Testosterone isn’t a good explanation of why humans accidentally drive other species extinct.
Pinker has decent arguments for declining violence and for expecting people to overestimate some risks. That isn’t enough to imply we shouldn’t worry.
Pinker’s model of war implies that the expected number of deaths is unbounded, because of small chances of really bad wars. His thoughts on animal rights suggest AIs will respect other species more than humans do, but “more than humans do” isn’t enough to imply no extinctions.
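The “unbounded expected deaths” claim can be made precise with a minimal sketch, assuming (as an illustration, not as Pinker’s own figure) that war deaths $X$ follow a Pareto tail, $P(X > x) = (x/x_{\min})^{-\alpha}$ for $x \ge x_{\min}$. Since $X$ is nonnegative,

$$\mathbb{E}[X] \;=\; \int_0^{\infty} P(X > x)\,dx \;=\; x_{\min} \;+\; x_{\min}^{\alpha} \int_{x_{\min}}^{\infty} x^{-\alpha}\,dx,$$

and the integral diverges whenever $\alpha \le 1$: the rare, enormous wars contribute more to the expectation than all the small ones combined. Whether the empirical tail of war deaths is actually that heavy is a separate question; the point is only that a heavy enough tail makes the expectation unbounded even while typical wars shrink.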