Phil,
There are really two things I’m considering. First, whether the general idea of AI throttling is meaningful at all, and what the technical specifics could be (a crude example: give it only X compute, yielding an intelligence level Y). Second, if we could reliably build a human-level AI, it could be of great use, not in itself, but as a tool for investigation, since we could finally “look inside” at concrete realizations of mental concepts, which is not possible with our own minds. For example, if we could teach a human-level AI morality (presumably possible, since we ourselves learn it), we would have a concrete realization of that morality as computation, something that could be inspected outright and even debugged. Could this not be of great value for insights into FAI?
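Just to make the “give it only X compute” notion slightly more concrete, here is a minimal toy sketch (Unix-only), assuming the AI runs as an ordinary OS process; `run_agent` is a hypothetical stand-in for its main loop, and the specific limits are arbitrary. It only illustrates what a crude compute cap might look like, not a serious containment scheme.

```python
import resource

def run_agent():
    # Hypothetical stand-in for the AI's main loop.
    ...

def run_throttled(cpu_seconds=60, memory_bytes=2**30):
    # Hard OS-level caps on this process: total CPU time and address-space size.
    resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    resource.setrlimit(resource.RLIMIT_AS, (memory_bytes, memory_bytes))
    run_agent()

if __name__ == "__main__":
    run_throttled()
```

Whether any mapping from such resource caps to an “intelligence level Y” exists is exactly the open question.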