If I build an AGI and get to choose its utility function, I could choose to copy my own (or what mine would be under reflection, a personal CEV), and that’s far from the worst outcome. But as a group we have solutions that we prefer a lot more and that don’t prioritize me overly much, such as CEV. The CEV of an effectively unbounded group of potential agents (yes, laws-of-physics bounded, but I assume we both agree that’s not a bound small enough to matter here) is effectively unbounded even if each individual agent’s function is tightly bounded.
This should (I think) make intuitive sense: a group of people who individually want basic human-level stuff discover that the best way to get it is to coordinate to build a great nation/legacy/civilization, and in doing so they break the bound on what they care about. The jumps we’re considering don’t seem different in kind to that.
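To spell out the boundedness point, here is a minimal sketch (my own toy framing, not anything from the formal CEV write-ups): suppose each agent $i$ has a utility function bounded by some constant $B$, so $|u_i(x)| \le B$ for every outcome $x$, and take the group-level function to be a simple sum over the first $N$ agents,

$$U_N(x) = \sum_{i=1}^{N} u_i(x), \qquad |U_N(x)| \le N B.$$

For any fixed $N$ this is bounded, but the bound $NB$ grows without limit as the pool of potential agents grows, so the aggregate is effectively unbounded even though every individual term is tightly bounded. (Real aggregation in CEV would be more subtle than a sum, but the point about bounds carries over.)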
People who don’t exist until after an AGI is created don’t have much influence over how that AGI is designed, and I don’t see any need to make concessions to them (except for the fact that we care about their preferences being satisfied, of course, but that will be reflected in our utility functions).
If you already care about a legacy as a human, and you do something to make an advanced computer system aligned with you, then the advanced computer system should also care about that legacy. I don’t see anything being lost.
I find attempts to reify the legacy as anything more than a shared agreement between near peers deeply disturbing, mainly because our understanding of nature, humans, and reality consists of tentative and leaky abstractions, and will remain that way for the foreseeable future. Any reification of civilisation will need a method of revising it in some way.
So much will be lost if humans are no longer capable of being active participants in the revision of the greater system. Conversations like this would become pointless. I have more to say on this subject; I’ll try to write something up in a bit.