This doesn’t really seem like a meaningful question. Of course “AI” will be “scaffolded”. But what is the “AI”? It’s not a natural kind. It’s just where you draw the boundaries for convenience.
An “AI” which “reaches out to a more powerful AI” is not meaningful—one could say the same thing of your brain! Or a Mixture-of-Experts model, or speculative decoding (both already in widespread use). Some tasks are harder than others, and different amounts of computation get brought to bear by the system as a whole, and that’s just part of the learned algorithms it embodies and where the smarts come from. Or one could say it of your computer: different things take different paths through your “computer”, ping-ponging through a bunch of chips and parts of chips as appropriate.
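To make the speculative-decoding example concrete, here is a toy sketch (not any real implementation; the "models" are stand-in functions, and all names are made up for illustration). A cheap draft proposer runs on every step and an expensive target only accepts or corrects, so how much heavy computation a given token cost is an internal detail the caller never sees:

```python
import random

# Toy stand-ins: neither function is a real model; they only illustrate
# the control flow of speculative decoding.
VOCAB = list("abcdefgh ")

def draft_model(context: str) -> str:
    """Cheap proposer: guesses the next character quickly (here, at random)."""
    return random.choice(VOCAB)

def target_model(context: str) -> str:
    """Expensive model (here, a trivial deterministic rule standing in for it)."""
    return VOCAB[len(context) % len(VOCAB)]

def speculative_decode(prompt: str, n_tokens: int, k: int = 4) -> str:
    """Draft k tokens cheaply, then let the target keep the agreeing prefix.

    From the outside this is just 'the model generating text'; the split of
    work between the cheap and expensive parts is invisible to the caller.
    """
    out = prompt
    while len(out) < len(prompt) + n_tokens:
        # Cheap pass: propose k tokens with the draft model.
        proposal = out
        for _ in range(k):
            proposal += draft_model(proposal)
        # Expensive pass: accept proposals until the first disagreement,
        # then substitute the target's token and start a new round.
        for i in range(len(out), len(proposal)):
            truth = target_model(out)
            if proposal[i] == truth:
                out += proposal[i]
            else:
                out += truth
                break
    return out

print(speculative_decode("ab", 12))
```

In the real technique the target model checks the whole draft in one batched forward pass, which is where the speed-up comes from; the toy only preserves the accept-or-correct structure.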
Do you muse about living in a world where, for ‘scarcity of compute’ reasons, your computer is a ‘scaffolded computer world’ in which highly intelligent chips essentially delegate tasks to weaker chips, so long as they know that the weaker (maybe highly specialized ASIC) chip is capable of reliably doing that task...? No. You don’t care about that. That’s just details of internal architecture which you treat as a black box.
(And that argument doesn’t protect humans for the same reason it didn’t protect, say, chimpanzees or Neanderthals or horses. Comparative advantage is extremely fragile.)
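To make the chip-delegation picture above concrete, here is a minimal sketch (all names and backends are hypothetical, invented for this example) of one "computer" assembled from heterogeneous parts: each request goes to the cheapest backend known to handle that kind of task reliably and escalates to the strongest part otherwise, while the caller only ever sees a single black-box compute() function:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Backend:
    name: str
    cost: float                     # arbitrary relative cost units
    handles: set[str]               # task kinds this backend does reliably
    run: Callable[[str], str]

def make_system(backends: list[Backend]) -> Callable[[str, str], str]:
    """Assemble one 'computer' out of heterogeneous parts.

    Route each request to the cheapest backend known to handle that kind
    of task reliably; fall back to the most capable (most expensive) part
    otherwise. The caller never learns which path ran.
    """
    by_cost = sorted(backends, key=lambda b: b.cost)

    def compute(task_kind: str, payload: str) -> str:
        for b in by_cost:
            if task_kind in b.handles:
                return b.run(payload)
        return by_cost[-1].run(payload)   # escalate to the strongest part

    return compute

# Toy backends standing in for "a specialized ASIC" and "the smart chip".
asic = Backend("asic", cost=1.0, handles={"checksum"},
               run=lambda p: f"asic:{sum(map(ord, p))}")
big = Backend("big-chip", cost=100.0, handles={"checksum", "planning"},
              run=lambda p: f"big:{p.upper()}")

compute = make_system([asic, big])
print(compute("checksum", "hello"))   # handled by the cheap part
print(compute("planning", "hello"))   # handled by the expensive part
```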
Thanks for the comment; that makes sense. Drawing the boundary around AI systems that way likely leads to erroneous thinking (though it may be narrowly useful if you are careful, in my opinion).
It makes a lot of sense to imagine future AIs having learned behaviours for using their compute efficiently without relying on some outside entity.
I agree with the fragility example.