I was assuming that long-term strategic planners (as described in section 27) are available as AI services, and would be among the components of the hypothetical AGI.
That’s not consistent with my reading of section 27. My understanding is that Drexler would describe that as too dangerous.
“Suppose you asked the plan maker to create a plan to cure cancer.”
I suspect that a problem here is that “plan maker” is ambiguous as to whether it falls within Drexler’s notion of something with a bounded goal.
CAIS isn’t just a way to structure software. It also requires some not-yet-common sense about what goals to give the software.
“Cure cancer” seems too broad to qualify as a goal that Drexler would consider safe to give to software. Sections 27 and 28 suggest that Drexler wants humans to break that down into narrower subtasks. E.g. he says:
“By contrast, it is difficult to envision a development path in which AI developers would treat all aspects of biomedical research (or even cancer research) as a single task to be learned and implemented by a generic system.”
After further rereading, I now think that what Drexler imagines is a bit more complex (section 27.7): “senior human decision makers” would have access to a service with some strategic planning ability (one powerful enough to generate plans with dangerously broad goals), and access to those high-level services would likely be restricted.
I suspect Drexler is deliberately vague about the extent to which the strategic planning services will contain safeguards.
This, of course, depends on the controversial assumption that relatively responsible organizations will develop CAIS well before other entities are able to develop any form of equally powerful AI. I consider that plausible, but it seems to be one of the weakest parts of his analysis.
And presumably the publicly available AI services won’t be sufficiently general and powerful to enable random people to assemble them into an agent AGI? Combining a robocar + Google Translate + an aircraft designer + a theorem prover doesn’t sound dangerous. But I’d prefer to have something more convincing than just “I spent a few minutes looking for risks, and didn’t find any”.
Fwiw, by my understanding of CAIS, and by my definition here (“a service is an AI system that delivers bounded results for some task using bounded resources in bounded time”), a plan maker would qualify as a service. So every time I make claims about “services”, I intend those claims to apply to plan makers as well.
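For concreteness, here is a minimal sketch (my own illustration, not anything from Drexler’s report) of that definition: a service is modeled as a task plus explicit limits on output size and wall-clock time, and the runner stops as soon as either limit is reached. All names here (`Bounds`, `run_bounded`, `toy_plan_maker`) are hypothetical.

```python
import time
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Bounds:
    max_seconds: float  # bounded time: wall-clock budget
    max_results: int    # bounded results: cap on output size
    # (resource bounds, e.g. memory or compute, omitted for brevity)

class BoundsExceeded(Exception):
    """Raised when a task overruns its time budget."""

def run_bounded(task: Callable[[], Iterable[str]], bounds: Bounds) -> List[str]:
    """Collect results from `task`, stopping as soon as either bound is hit."""
    deadline = time.monotonic() + bounds.max_seconds
    results: List[str] = []
    for item in task():
        if time.monotonic() > deadline:
            raise BoundsExceeded("time budget exhausted")
        results.append(item)
        if len(results) >= bounds.max_results:
            break  # bounded results: stop rather than keep generating
    return results

# A toy "plan maker" is just another bounded task: it yields plan steps.
def toy_plan_maker() -> Iterable[str]:
    for i in range(10):
        yield f"step {i}"

print(run_bounded(toy_plan_maker, Bounds(max_seconds=1.0, max_results=3)))
# ['step 0', 'step 1', 'step 2']
```

On this reading, nothing about the interface distinguishes a plan maker from, say, a translation service; the safety-relevant differences lie in what the task does, not in how it is bounded.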
I have tried to use words the same way that Drexler does, but obviously I can’t know exactly what he meant.