I think the problem is in the definition of 'optimum'. To call a state an optimum, you must presuppose the laws of physics in order to rule out any better states that are physically impossible. Once we recognize this, it seems that any society must either achieve an optimum or suffer an existential disaster (not necessarily extinction). Value is fragile, but minds are powerful, and if they ever get on the right track they will never get off it, barring problems that are impossible to foresee.
The only cases that remain to be considered are extinction and non-extinction existential risk. I'm pretty sure that my value system is indifferent between the existence and nonexistence of a region with no conscious life, but there is no reason for other value systems to share that property. I am unsure how the average value system would judge its surroundings, partly because I am unsure what to average over. Even a group that manages to optimize its surroundings may describe its universe's existence as bad, whether because of variables it cannot optimize or for other reasons, such as a general dislike of anything existing.
If an existential risk does not fully wipe out its species, there is a chance that an optimization process will survive, but with values different from those of its parent species. On average, the parent species would probably regard this as better than extinction, because the optimization process would share some of its values while being indifferent to the rest. As weak evidence that this applies to our species, there are many fictional dystopias that, while much worse than our current world, seem preferable to extinction.