Not sure if this is pure musing or a question. The rather obvious thought strikes me that this discussion could be held without any reference to AI at all. It is very clear that people with 150+ IQ are much more capable than those with 120 IQ, and those with 120 are much more capable than those with sub-100 IQ.
For the most part we live in a market society driven by mass-market demand. That demand will be dominated by people of lower average IQ, while what satisfies it is designed and produced by the “smart” tail of the curve.
This has been the case (well, perhaps not the market-society claim) for most of human existence.
That suggests we might have evolved design patterns that protect the masses of the human race from their own (perhaps misunderstood or even unknown) risky demands, which are fulfilled by the smart minority of humans.
Is that line of thinking any part of the larger picture (AI alignment, I suppose)?
I’m with you in that this seems much more general than AI, but I’m not sure what you mean by:
evolved design patterns that protect the masses of the human race from their own (perhaps misunderstood or even unknown) risky demands, which are fulfilled by the smart minority of humans
It sure seems to me like all humans, both “the masses” and “the smart minority”, are NOT protected generally, and our current (and future) existence is due more to luck and, until recently, to our extremely limited capabilities.
I have been wondering for a while whether ‘AI alignment == moral/ethical philosophy’, i.e. whether solving one is equivalent to solving the other.
Well, that comment was a while back, so I’ll place a caveat on my response: I could have been thinking of something else.
While I was in grad school, one of the papers I read (recommended by a professor I took a class with, though it wasn’t part of that class) was “The Origin of Predictable Behavior” (Ronald Heiner, AER 1983). It’s interesting because it was largely a Bayesian analysis. Short summary: humans evolve rules that protect them from big but often infrequent risks.
I think the idea here is that social norms then set our priors about certain things somewhat separately from our personal experience, and so are designed to resist individual updating of those priors, because the actual events are infrequent.
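To make that concrete, here is a minimal Beta-Bernoulli sketch (my own toy illustration, not a model from Heiner’s paper; all the numbers are made-up assumptions) of why a prior inherited from social norms barely moves under one lifetime of personal evidence when the event in question is rare:

```python
# Toy illustration (mine, not Heiner's): a Beta-Bernoulli model of beliefs
# about a rare bad event. A Beta(a, b) prior updated on `events` occurrences
# in `trials` periods becomes Beta(a + events, b + trials - events).

TRUE_RISK = 1 / 10_000   # assumed per-period probability of the rare event
LIFETIME = 2_000         # assumed periods of direct experience in one lifetime

def posterior_mean(a, b, events, trials):
    """Posterior mean of the event probability under a Beta(a, b) prior."""
    return (a + events) / (a + b + trials)

# A typical lifetime at this risk level: the event never occurs in your own data.
priors = [
    ("personal-experience-only prior, Beta(1, 1)", 1, 1),
    ("norm-derived prior, Beta(2, 20_000), ~10 pooled lifetimes", 2, 20_000),
]
for label, a, b in priors:
    before = a / (a + b)
    after = posterior_mean(a, b, events=0, trials=LIFETIME)
    print(f"{label}: {before:.5f} -> {after:.5f}")

# The flat prior collapses from 0.50000 to ~0.00050, i.e. it is dominated by
# one lucky lifetime; the norm prior moves only from ~0.00010 to ~0.00009.
```

The point of the toy numbers: a norm-derived prior already encodes far more pooled experience than any single lifetime supplies, so one person’s uneventful data hardly shifts it, which is one functional reading of norms being “designed to resist individual updates”.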
Interesting! Thanks for replying.