I would disagree. The overwhelming majority of the average human’s life is spent peacefully. It is actually fairly remarkable how rarely we have significant conflict, especially considering the relatively overcrowded places in which humans live. Not to mention that only a small proportion of the human population engages with other humans destructively (not by proxy).
The overwhelming majority of the average human’s life is also spent in conditions where they are on relatively even footing with everyone else. But once you start looking at what happens when people end up in situations where they are clearly more powerful than others? And can treat those others however they like, without fear of retribution? Ugly.
I disagree. While there are some spectacular examples of what you describe, and they are indeed ugly, by and large there is a wide distribution of hierarchical disparity even in daily life, and it is more often than not mutually beneficial.
As an emperor, I optimize my empire by ensuring that my subjects are philosophically and physically satisfied, do I not? I think there is plenty of evidence to support this philosophy as the most sustainable (and positive) of hierarchical models; after all, some of the most successful businesses are laterally organized.
A certain philosophy being the most sustainable and positive isn’t automatically the same as being the one people tend to adopt. Plus, the answer to your question depends on what you’re trying to optimize.
Also, it sounds like you’re still talking about a situation where people don’t actually have ultimate power. If we’re discussing a potential hard takeoff scenario, then considerations such as “which models have been the most successful for businesses before” don’t really apply. Any entity genuinely undergoing a hard takeoff is one that isn’t afterwards bound by what’s successful for humans, any more than we are bound by the practices that work best for ants.
A certain philosophy being the most sustainable and positive isn’t automatically the same as being the one people tend to adopt
I think there is more than ample evidence to suggest that such philosophies are significantly less likely to be adopted. However, wouldn’t a group of people who know that and can correct for it be the best test case for implementing an optimized strategy?
Also, it sounds like you’re still talking about a situation where people don’t actually have ultimate power.
My view is that it is unnecessary to hold ultimate power over an FAI. I certainly wouldn’t bind it to what has worked for humans thus far. Don’t fear the AI; find a way to assimilate.