Even if we only have smartest-human-level models, you can spawn 100,000 copies running at 10x speed, organize them along the lines of “one model checks whether another model’s output displays cognitive biases”, and get maybe not “design nanotech in 10 days” level, but still something smarter than any organized group of humans.
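Just to make the shape of that organization concrete, here is a toy sketch; generate() and critique() are hypothetical stubs standing in for two model instances, not any real API:

```python
# Toy sketch of the “one model checks the other” organization.
# generate() and critique() are hypothetical stubs standing in for two
# smartest-human-level model instances.

def generate(task: str, objections: list[str]) -> str:
    """Generator model (stub): drafts an answer, revising to address objections."""
    return f"draft answer to {task!r} after {len(objections)} revision(s)"

def critique(answer: str) -> str | None:
    """Checker model (stub): returns a cognitive bias it spotted, or None."""
    return None  # a real checker would look for anchoring, overconfidence, etc.

def cross_checked(task: str, max_rounds: int = 5) -> str:
    objections: list[str] = []
    answer = generate(task, objections)
    for _ in range(max_rounds):
        problem = critique(answer)
        if problem is None:            # checker found no bias: accept the answer
            return answer
        objections.append(problem)     # feed the objection back to the generator
        answer = generate(task, objections)
    return answer

print(cross_checked("plan a research program"))
```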
Hmm. I haven’t seen any research on that possibility, which is obvious enough that I’d expect to see it if it were actually promising. And naively, it’s not clear that you’d get more powerful results from using 1M times the compute this way (100,000 copies at 10x speed), compared to more direct scaling.
I’d put that in the exact same bucket as “not known if it’s even possible”.
That possibility is explored at least here: https://arxiv.org/abs/2305.17066, but that’s not the point. The point is: even in a hypothetical world where scaling laws and algorithmic progress hit a wall at smartest-human level, you can do this and reach an arbitrarily high level of intelligence. In the real world, of course, there are better ways.
How do you know that’s possible?
There is definitely enough matter on Earth to sustain an additional 100k human brains with a signal speed of 1000 m/s instead of 100 m/s? I actually can’t imagine how our understanding of physics would have to be wrong for that not to be possible.
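Rough numbers, just to show the orders of magnitude involved (the only assumption is a brain mass of roughly 1.4 kg):

```python
# Back-of-envelope: matter needed for 100,000 extra brain-sized computers,
# compared with the mass of the Earth.
brain_mass_kg = 1.4                    # assumed mass of one human brain
earth_mass_kg = 5.97e24
extra_mass_kg = 100_000 * brain_mass_kg
print(extra_mass_kg)                   # 140000.0 kg, i.e. ~1.4e5 kg
print(extra_mass_kg / earth_mass_kg)   # ~2.3e-20 of Earth’s mass
```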
I think you’re using a different sense of the word “possible”. In a simplified physics model, where mass and energy are easily transformed as needed, you can just wave your hands and say “there’s plenty of mass to use for computronium”. That’s not the same as saying “there is an achievable causal path from what we experience now to the world described”.
Did you misunderstand my question?
How does the total mass of the Earth or ‘signal speed 1000m/s instead of 100m/s’ demonstrate how you know?
The only reason it could be impossible is if the amount of compute needed to run one model as smart as the smartest human is so huge that we would literally need to disassemble the Earth to run 100,000 copies. That’s quite an unrealistic reason, because a similar amount of compute, in actual humans, fits inside an actual small cranium.
Why is the amount of matter in a human brain relevant?
How about “It’s not proven yet that vastly super-intelligent machines (i.e. >10x peak human intelligence) are even possible.” as a possible frame?
I can’t see a counterargument to it yet.