If you build an AGI and don’t test whether it can learn to do philosophy, it may not be able to learn to do philosophy very well.
AGI to me is synonymous with a universal learning machine, and in particular with a ULM that learns at human capability. Philosophy is highly unlikely to require any specialized structures—because humans do philosophy with the same general cortical circuitry that’s used for everything else.
In the rush to build AGIs in order to reap the economic benefits, people probably won’t have time to test for this.
This is a potential problem, but the solution comes naturally if you do the unthinkable for LWers and think of AGIs as persons/citizens. States invest heavily in educating new citizens beyond just economic productivity: new people have rights and control privileges, so it’s important to ensure a certain level of value alignment with the state/society at large.
In particular—and this is key—we do not allow religions or corporations to raise people with arbitrary values.
I’m not very optimistic that any social structure we come up with could preserve our share of the universe as the AGIs improve themselves and become more powerful.
Yeah—but we only need to manage the transition until human uploading. Uploading has enormous economic value—it is the killer derived app for AGI tech, and brain-inspired AGI in particular. It seems far now mainly because AGI still seems far, but given AGI, change will happen quickly: first there will be a large wealth transfer to those who developed AGI and/or predicted it, and consequently uploading will move up in priority.
Surely there are lots of foundries (Intel’s for example) that could be retooled to build GPUs if it became profitable to do so?
Yeah—it could be pumped up to 10x current output fairly easily, and perhaps even 100x given a few years.
The hope is that we use this time to develop the necessary social structures to prevent AGIs from taking over the universe (without giving us a significant share of it)?
I expect that individual companies will develop their own training/educational protocols. Governments will need significant prodding to get involved quickly; otherwise they will move very slowly. So the first corporations or groups to develop AGI could have a great deal of influence.
One variable of interest—which I am uncertain of—is the timetable involved in forcing a key decision through the court system. For example, say company X creates AGI. Somebody then sues them on behalf of their AGIs for child neglect or rights violations or whatever—how long does it take the courts to decide whether, and which types of, software could be considered citizens? The difference between 1 year and, say, 10 could be quite significant.
At the moment it looks like the most straightforward route to having high leverage over the future is to be involved in the creation of AGI.
AGI to me is synonymous with a universal learning machine, and in particular with a ULM that learns at human capability. Philosophy is highly unlikely to require any specialized structures—because humans do philosophy with the same general cortical circuitry that’s used for everything else.
I also have some hope that philosophy ability essentially comes “for free” with general intelligence, but I’m not sure I want to bet the future of the universe on it. Also, an AGI may be capable of learning to do philosophy but not be motivated to do it, or not be motivated to follow the implications of its own philosophical reasoning. A lot of humans, for example, don’t seem to have much interest in philosophy, and instead pursue things like maximizing wealth and status.
This is a potential problem, but the solution comes naturally if you do the unthinkable for LWers and think of AGIs as persons/citizens.
Do you have detailed ideas of how that would work? For example, if in 2030 we can make a copy of an AGI for $1000 (the cost of a GPU) and that cost keeps decreasing, do we give each of them an equal vote? How do we enforce AGI rights and responsibilities if eventually anyone could buy a GPU, download some open-source software, and make a new AGI?
Yeah—but we only need to manage the transition until human uploading. Uploading has enormous economic value—it is the killer derived app for AGI tech, and brain-inspired AGI in particular.
I argued in a previous comment that it’s unlikely that uploads will be able to match AGIs in intelligence until AGIs reach the maximum feasible level allowed by physics and uploads catch up, but I don’t think you responded to that argument. If I’m correct about this, it doesn’t seem like the development of uploading tech will make any difference. Why do you think it’s a crucial threshold?
how long does it take the courts to decide whether, and which types of, software could be considered citizens? The difference between 1 year and, say, 10 could be quite significant.
Even 10 years seems too optimistic to me. I think a better bet, if we want to take this approach, would be to convince governments to pass laws ahead of time, or prepare them to pass the necessary laws quickly once we get AGIs. But again, what would you want those laws to be, in detail?