To clarify: it’s as smart as them in the sense that, when you take into account factors A-D (and similar factors), its intellectual output on any problem (including AGI research) would be similar.
It sounds like, with factor C, you are saying that you expect AI insights to come faster once a working, if slow, AGI implementation is available. I don’t think this is obvious. “Once you have an AI running on a computer it can implement good ideas or test ideas for goodness far faster and more empirically.” We already have computers available for testing out AI insights.
Not really? You can test out pattern-matching and similar algorithms, but you can’t test out anything more generalized, or anything that works only as part of something more generalized like problem-solving algorithms, because those would require you to have an entity within which to run them.
Hm, plausible. To restate: if a mind consists of lots of little components that are useless on their own and only work well in concert, then it’ll be hard to optimize any given component without the mind that surrounds it, because it’s hard to gather data on which component designs work better. Does that seem accurate?
Yes, though I think face and voice recognition software shows us that many of the algorithms can be useful on their own. But, e.g., an algorithm for prioritizing face vs. voice recognition when talking to humans is not.
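To make that distinction concrete, here is a minimal sketch in Python (every name in it is hypothetical and illustrative, not from any real system): each recognizer can be benchmarked on its own, but the prioritizer only gets a meaningful test signal once it is embedded in a complete agent.

```python
# Hypothetical sketch, all names illustrative: the recognizers are each
# testable in isolation, but the prioritizer between them only has a
# meaningful score inside a whole agent.

def prioritize_modality(face_confidence: float, voice_confidence: float) -> str:
    """Pick which input channel the agent should attend to next."""
    return "face" if face_confidence >= voice_confidence else "voice"

class Agent:
    """Toy agent wrapping two independently useful recognizers."""

    def __init__(self, face_recognizer, voice_recognizer):
        self.face = face_recognizer    # useful on its own (cf. face ID software)
        self.voice = voice_recognizer  # useful on its own (cf. voice ID software)

    def step(self, frame, audio) -> str:
        face_conf = self.face(frame)
        voice_conf = self.voice(audio)
        # The prioritizer's "fitness" is only defined by how the surrounding
        # agent performs on a downstream task (e.g. holding a conversation),
        # so you need the whole agent running before you can test it.
        return prioritize_modality(face_conf, voice_conf)

# Stub recognizers stand in for the real, independently testable components.
agent = Agent(face_recognizer=lambda frame: 0.9,
              voice_recognizer=lambda audio: 0.4)
print(agent.step(frame=None, audio=None))  # -> "face"
```

Evaluated standalone, `prioritize_modality` has no error signal at all; its design can only be compared against alternatives by running whole agents and measuring downstream performance.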
I also think all the AI developers in the world will have a higher pace of development once functioning AIs exist than beforehand, which might be relevant if fooming for whatever reason does not take place. Sort of like “steam engine time”: all the little valves and pressure regulators someone could invent and try out will suddenly become a thing.