An AI as smart as all the world’s AI scientists would make progress faster than they do (I’m not saying how much faster, or whether it would foom), because:
A: It would be perfectly coordinated. If every AI researcher knew what every other one was thinking about AI, ideas would be tested and examined faster. Communication would not be a problem, and whiteboards would be unnecessary.
B: I’m not sure whether you meant smartness as intelligence or as optimization power, but an AI that had the combined intelligence of all the AI researchers would have MORE optimization power, because that intelligence would not be held back by emotions, sleep, or human biases (except those accidentally built into the AI).
C: Faster iteration. All the AI scientists together can’t actually test and run a change to an AI’s code, because there is no such code and they don’t have the supercomputer. Once you have an AI running on a computer, it can implement good ideas, or test ideas for goodness, far faster and more empirically (see the toy sketch below).
D: It can do actual empiricism on an AI mind, which is more than AI researchers can do now.
See also what Randaly said. For reasons similar to those behind Robin Hanson’s Emu Hell, a digital, computer-bound mind will just be better at a lot of tasks than one running in meat.
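To make factor C a bit more concrete, here is a minimal, purely illustrative Python sketch of the kind of propose-test-accept loop an already-running system could execute on itself. The `benchmark` and `propose_change` functions are invented stand-ins for this sketch, not a claim about how a real AGI would test its own ideas.

```python
import random

# Toy illustration of "factor C": a running system can treat each idea as an
# empirical experiment instead of a whiteboard argument. The candidate here is
# just a parameter vector and the benchmark is a made-up score function; both
# are assumptions made for the sake of the sketch.

def benchmark(params):
    """Stand-in for empirically testing an idea: higher is better."""
    return -sum((p - 0.5) ** 2 for p in params)

def propose_change(params):
    """Stand-in for 'implementing a good idea': tweak one component."""
    i = random.randrange(len(params))
    new = list(params)
    new[i] += random.gauss(0, 0.1)
    return new

params = [random.random() for _ in range(8)]
score = benchmark(params)

# Because the tester *is* the running system, every proposal gets evaluated
# immediately and kept only if it actually tests better.
for step in range(10_000):
    candidate = propose_change(params)
    candidate_score = benchmark(candidate)
    if candidate_score > score:
        params, score = candidate, candidate_score

print(f"score after 10,000 empirical trials: {score:.4f}")
```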
A: It would be perfectly coordinated. If every AI researcher knew what every other one was thinking about AI, ideas would be tested and examined faster. Communication would not be a problem, and whiteboards would be unnecessary.
I don’t think that has to be true. For some AI designs it might be; for others it might be false.
B: I’m not sure whether you meant smartness as intelligence or as optimization power, but an AI that had the combined intelligence of all the AI researchers would have MORE optimization power, because that intelligence would not be held back by emotions, sleep, or human biases (except those accidentally built into the AI).
I think you underrate the usefulness of human heuristics and human emotions. Human biases happen because our heuristics have some weaknesses; that doesn’t mean, however, that our heuristics aren’t pretty good.
To clarify: it’s as smart as they are in the sense that, when you take into account factors A-D (and similar factors), its intellectual output on any problem (including AGI research) would be similar.
It sounds like with factor C you are saying that you expect AI insights to come faster once a working, if slow, AGI implementation is available. I don’t think this is obvious. “Once you have an AI running on a computer, it can implement good ideas, or test ideas for goodness, far faster and more empirically.” We already have computers available for testing out AI insights.
Not really? You can test out pattern-matching and similar algorithms, but you can’t test out anything more generalized, or anything that only works as part of something more generalized (like problem-solving algorithms), because those would require an entity within which to run them.
Hm, plausible. To restate: if a mind consists of lots of little components that are useless on their own and only work well in concert, then it’ll be hard to optimize any given component without the mind that surrounds it, because it’s hard to gather data on which component designs work better. Does that seem accurate?
Yes, though I think face- and voice-recognition software shows us that many of the algorithms can be useful on their own. But, e.g., an algorithm for prioritizing face vs. voice recognition when talking to humans is not.
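A toy sketch of that asymmetry, with made-up components: the face and voice channels produce measurable output on their own, but a prioritizer that only arbitrates between them gives no usable signal when benchmarked in isolation, and only separates good designs from bad ones once the surrounding system exists. All names here (`prioritizer_a`, `face_quality`, and so on) are invented for illustration.

```python
import random

# Face and voice channels can be scored on their own (as real recognition
# software can be); the prioritizer between them cannot.
def face_quality(scene):  return scene["light"]         # face channel likes good light
def voice_quality(scene): return 1.0 - scene["noise"]   # voice channel likes low noise

def prioritizer_a(scene): return "face"                                       # naive design
def prioritizer_b(scene): return "face" if scene["light"] > 0.5 else "voice"  # context-aware design

def score_in_isolation(prioritizer):
    # With no surrounding system, "face" vs "voice" is just a label; there is
    # nothing to measure, so every prioritizer design looks identical.
    return 0.0

def score_in_full_system(prioritizer, scenes):
    # Inside the whole pipeline, the choice feeds a real channel and earns a real score.
    total = 0.0
    for scene in scenes:
        channel = prioritizer(scene)
        total += face_quality(scene) if channel == "face" else voice_quality(scene)
    return total / len(scenes)

scenes = [{"light": random.random(), "noise": random.random()} for _ in range(1000)]

print("isolated:  ", score_in_isolation(prioritizer_a), score_in_isolation(prioritizer_b))
print("in system: ",
      round(score_in_full_system(prioritizer_a, scenes), 3),
      round(score_in_full_system(prioritizer_b, scenes), 3))
```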
I also think all the AI developers in the world will have a higher pace of development once functioning AIs exist than beforehand, which might be relevant if fooming, for whatever reason, does not take place. Sort of like Steam Engine Time: all the little valves and pressure regulators someone could invent and try out will suddenly become a thing.