AGI to me is synonymous with a universal learning machine, and in particular with a ULM that learns at human capability. Philosophy is highly unlikely to require any specialized structures—because humans do philosophy with the same general cortical circuitry that’s used for everything else.
I also have some hope that philosophy ability essentially comes “for free” with general intelligence, but I’m not sure I want to bet the future of the universe on it. Also, an AGI may be capable of learning to do philosophy, but not motivated to do it, or not motivated to follow the implications of its own philosophical reasoning. A lot of humans, for example, don’t seem to have much interest in philosophy, and instead care about things like maximizing wealth and status.
This is a potential problem, but the solution comes naturally if you do the unthinkable (for LWers) and think of AGIs as persons/citizens.
Do you have detailed ideas of how that would work? For example, if in 2030 we can make a copy of an AGI for $1000 (the cost of a GPU), and that cost keeps decreasing, do we give each of them an equal vote? How do we enforce AGI rights and responsibilities if eventually anyone could buy a GPU card, download some open source software, and make a new AGI?
Yeah—but we only need to manage the transition until human uploading. Uploading has enormous economic value—it is the killer derived app for AGI tech, and brain inspired AGI in particular.
I argued in a previous comment that it’s unlikely that uploads will be able to match AGIs in intelligence until AGIs reach the maximum feasible level allowed by physics and uploads catch up, but I don’t think you responded to that argument. If I’m correct in this, it doesn’t seem like the development of uploading tech will make any difference. Why do you think it’s a crucial threshold?
How long would it take the courts to decide whether, and what types of, software could be considered citizens? The difference between 1 year and, say, 10 could be quite significant.
Even 10 years seems too optimistic to me. I think a better bet, if we want to take this approach, would be to convince governments to pass laws ahead of time, or to prepare them to pass the necessary laws quickly once we get AGIs. But again, what laws would you want these to be, in detail?