...that is a very good question. The best idea that I can come up with is that the optimal amount of suffering is time-dependent in some way. That, if the purpose of suffering is to try to improve people to some ideal, then a society that produces people who are closer to that ideal to start with would require less suffering. And that a society in which the cure to the common cold can be found, and can then be distributed to everyone, is closer to that ideal society than a society in which that is not the case.
That kind of makes sense. Of course, the standard objection to your answer is something like the following: “This seems like a rather inefficient way to design the ideal society. If I were building intelligent agents from scratch, and I wanted them to conform to some ideal, then I’d just build them to do that from the start, instead of messing around with tsunamis and common colds.”
It does seem inefficient. This would appear to imply that the universe is optimised according to multiple criteria, weighted in an unknown manner; presumably one of those other criteria is important enough to eliminate that solution.
It’s pretty clear that the universe was not built to produce a quick output. It took several billion years of runtime just to produce a society at all—it’s a short step from there to the conclusion that there’s some thing or things in the far future (possibly another mere billion years away), that we probably don’t even have the language to describe yet, that are also a part of the purpose of the universe.
This suggests a new heresy to me: God, creator of the universe, exists, but we, far from being the pinnacle of His creation, are merely an irrelevant by-product of His grand design. We do not merit so much as an eye-blink from Him in the vasty aeons, and had better hope not to receive even that much attention. When He throws galaxies at each other, what becomes of whatever intelligent life may have populated them?
The quotidian implications of this are not greatly different from atheism. We’re on our own, it’s up to us to make the best of it.
That’s a very interesting thought. Personally, I don’t think that we’re a completely irrelevant by-product (for various reasons), but I see nothing against the hypothesis that we’re more of a pleasant side-effect than the actual pinnacle of creation. The actual pinnacle of creation might very well be something that will be created by a Friendly AI—or even by an Unfriendly AI—vast aeons in the future.
When He throws galaxies at each other, what becomes of whatever intelligent life may have populated them?
Given the length of time it takes for galaxies to collide, I’d guess that any intelligent life there probably develops a technological civilisation, recognises the danger, and still has a few million years to take steps to protect itself. Evacuation is probably a feasible strategy, though probably not the best one, in that sort of timeframe.
I agree that this is a reasonable conclusion to draw once you assume the existence of a certain kind of deity.