Just to follow up, I’m seeing nothing new in IEM (or if it’s there it’s too buried in “hear me think” to find—Eliezer really would benefit from pruning down to essentials). Most of it concerns the point where AGI approaches or exceeds human intelligence. There’s very little to support concern for the long ramp-up to that point (other than some matter of genetic programming, which I haven’t the time to address here). I could go on rather at length in rebuttal of the post-human-intelligence FOOM theory (not discounting it entirely, but putting certain qualitative bounds on it that justify the claim that FAI will be most fruitfully pursued during that transition, not before it), but for the reasons implied in the original essay and in my other comments here, it seems moot against the overriding truth that AGI is going to happen without FAI regardless—which means our best hope is to see AGI+FAI happen first. If it’s really not obvious that that has to lead with AGI, then tell me why.
Does anybody really think they are going to create an AGI that will get out of their hands before they can stop it? That they will somehow bypass ant, mouse, dog, monkey, and human and go straight to superhuman? Do you really think that you can solve FAI faster or better than someone who’s invented monkey-level AI first?
I feel most of this fear is residual leftovers from the self-modifying symbolic-program singularity FOOM theories that I hope are mostly left behind by now. But this is just the point—people who don’t understand real AGI don’t understand what the real risks are and aren’t (and certainly can’t mitigate them).
Self-modifying AI is the point behind FOOM. I’m not sure why you’re connecting self-modification/FOOM/singularity with symbolic programming (I assume you mean GOFAI), but everyone I’m aware of who thinks FOOM is plausible thinks it will be because of self-modification.
Yes, I understand that. But what premises underlie AGI matters a lot for how self-modification is going to impact it. The stronger fast-FOOM arguments spring from older conceptions of AGI. Imo, a better understanding of AGI does not support it.
Thanks much for the interesting conversation; I think I am expired.
I don’t think anyone is saying that an ‘ant-level’ AGI is a problem. The issue is with ‘relatively-near-human-level’ AGI. I also don’t think there’s much disagreement about whether a better understanding of AGI would make FAI work easier. People aren’t concerned about AI work being done today, except inasmuch as it hastens better AGI work done in the future.