slow-thinking unFriendly AGIs … not any help in developing a FAI
One suggestion is that slow-thinking unFriendly near-human AIs may indeed help develop an FAI:
(1) As a test bed: a way of learning from examples.
(2) They can help figure things out. We don’t want them to be too smart, of course, but dull nascent AGIs, if they don’t explode, might serve as research partners.
(To clarify: “unFriendly” here means “without guaranteed Friendliness,” which is close to, but not identical to, “guaranteed to kill us.”)
Ben Goertzel and Joel Pitt (2012) suggest the former for nascent AGIs; Carl Shulman’s recent article suggests the latter for infrahuman WBEs.
That’s the question: How long a run do we have?