So is this roughly one aspect of why MIRI's position on AI safety differs from that of similar parties: that they're generally more sympathetic to possibilities further away from (1) than their peers are? I don't really know, but that's what the pebblesorters/value-is-fragile strain of thinking seems to suggest to me.
That’s one reason. As an example, Goertzel seems to fall somewhat under (1) with his Cosmist Manifesto.
But more important, I think, are the issues of hard-takeoff timeline and AGI design. The mainstream opinion, as I understand it, is that a hard takeoff would take years at a minimum, leaving sufficient time both to recognize what is going on and to stop the experiment. Also, MIRI seems for some reason to threat-model its AGIs as perfectly rational alien utility-maximizers, whereas real AGIs are implemented with all sorts of heuristic tricks that actually do a better job of emulating the quirky way humans think. Combined with a slow takeoff, projects like OpenCog intend to teach robot children in a preschool-like environment, thereby value-loading them in the same way that we value-load our own children.
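To make that contrast concrete, here is a minimal toy sketch (my own illustration, not MIRI's actual threat model or OpenCog's actual architecture; all names and numbers are invented): an idealized agent that exhaustively maximizes expected utility over an explicit world model, versus a heuristic agent that just applies a couple of hand-tuned rules of thumb, closer to how real prototypes get cobbled together.

```python
# Toy illustration (invented example): an idealized expected-utility maximizer
# versus a quick-and-dirty heuristic agent. Neither reflects any real AGI codebase.

# A tiny decision problem: each action leads to (probability, utility) outcomes.
WORLD_MODEL = {
    "explore":    [(0.6, +2.0), (0.4, -1.0)],
    "exploit":    [(0.9, +1.0), (0.1, -0.5)],
    "do_nothing": [(1.0, 0.0)],
}

def rational_agent(model):
    """Pick the action with the highest expected utility (the alien-optimizer picture)."""
    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)
    return max(model, key=lambda action: expected_utility(model[action]))

def heuristic_agent(recent_reward, boredom):
    """Pick an action from a few hand-tuned rules of thumb (closer to real systems)."""
    if recent_reward < 0:   # got burned recently, so play it safe
        return "do_nothing"
    if boredom > 0.7:       # quirky, human-like drive to try something new
        return "explore"
    return "exploit"

if __name__ == "__main__":
    print("rational agent picks: ", rational_agent(WORLD_MODEL))
    print("heuristic agent picks:", heuristic_agent(recent_reward=0.5, boredom=0.9))
```

The point of the sketch is just that the second agent's behavior is shaped by whatever rules and training experiences it was given, which is why a preschool-style value-loading approach seems plausible to its proponents.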
"MIRI seems for some reason to threat-model its AGIs as perfectly rational alien utility-maximizers, whereas real AGIs are implemented with all sorts of heuristic tricks that actually do a better job of emulating the quirky way humans think."
This is extremely important, and I hope you will write a post about it.
Yeah, I was thinking of Goertzel as well.
So you don’t think MIRI’s work is all that useful? What probability would you assign to a hard takeoff happening at the speed they’re worried about?
Indistinguishable from zero, at least with current levels of technology. The human mind is an immensely complex machine, capable of processing information orders of magnitude faster than the largest HPC clusters. Why should we expect an early, dumb intelligence running on mediocre hardware to recursively self-improve so quickly? The burden of proof rests with MIRI, I believe. (And I’m still waiting.)