I’m with you in that this seems much more general than AI, but I’m not sure what you mean by:
evolutionary design patterns that have been emerging to protect the masses of the human race from their own (and perhaps misunderstood or even unknown) risky demands that are delivered by the smart minority of humans
It sure seems to me like all humans, both “the masses” and “the smart minority”, are NOT protected generally, and our current (and future) existence is more due to luck and, until recently, extremely limited capabilities.
I have been wondering for a while whether ‘AI alignment == moral/ethical philosophy’, i.e. whether solving either one is equivalent to solving the other.
Well, that comment was a while back, so I’ll place a caveat on my response: I could have been thinking of something else.
While I was in grad school, one of the papers I read (recommended by a professor I took a class with, though not as part of that class) was “The Origin of Predictable Behavior” (Ronald Heiner, AER, 1983). It’s interesting because it was largely a Bayesian analysis. Short summary: humans evolve rules that protect them from big, but often infrequent, risks.
I think the idea here is that social norms then set our priors about certain things somewhat independently of our personal experience, and so they are designed to resist individual updating of those priors, because the actual events are infrequent.
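To make that "norms resist individual updating" point concrete, here is a minimal toy sketch of my own (not a model from Heiner's paper): a Beta-Bernoulli update on the probability of a rare disaster, where the norm-level prior is encoded with many pseudo-observations and a purely personal prior with few. All of the specific numbers are arbitrary assumptions for illustration.

```python
# Toy illustration: strong norm-based priors resist updating from sparse personal experience.
# Model: p = probability that a risky action ends in disaster, with Beta-Bernoulli updating.

def posterior_mean(alpha: float, beta: float, disasters: int, safe: int) -> float:
    """Posterior mean of p under a Beta(alpha, beta) prior after the observed outcomes."""
    return (alpha + disasters) / (alpha + beta + disasters + safe)

# "Norm" prior: society's pooled experience says disaster happens ~5% of the time,
# encoded with many pseudo-observations (a strong prior).
norm_prior = (5.0, 95.0)      # Beta(5, 95), mean 0.05

# "Personal" prior: same mean, but backed by very little evidence (a weak prior).
weak_prior = (0.5, 9.5)       # Beta(0.5, 9.5), mean 0.05

# Individual experience: 30 tries of the risky action, disaster never observed.
disasters, safe = 0, 30

print(f"norm-level prior -> {posterior_mean(*norm_prior, disasters, safe):.4f}")  # ~0.0385
print(f"personal prior   -> {posterior_mean(*weak_prior, disasters, safe):.4f}")  # 0.0125

# The strong prior barely moves, while the weak one falls by more than half:
# that is the sense in which norm-set priors "resist" individual updating
# when the relevant events are rare.
```

On this reading, the norm works like a prior with a large effective sample size, so a handful of personal "nothing bad happened" observations can't learn the risk away.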
Interesting! Thanks for replying.