I like this story. Here’s what I think is incorrect:
I don’t think, from the perspective of humans monitoring a single ML system running a concrete, quantifiable process—industry or mining or machine design—that it will be unexplainable. Just like today: tech stacks are already enormously complex, but at each layer someone does know how they work, and we know what they do at the layers that matter. Ever more complex designs for, say, a mining robot might increasingly resemble some mix of living creatures and fractal artwork, but we’ll still have reports that measure how much performance the design delivers per unit cost.
And systems that “lie to us” are a risk, not an inevitability: careful engineering, auditing systems whose explicit goal is finding True Discrepancies, and so on, might become a thing.
Here’s the part that’s correct:
I was personally a little late to the smartphone party, so it felt like overnight everyone had QR codes plastered everywhere and was playing on their phone in bed. Most products’ adoption is a lot slower, for reasons of cost (especially up-front cost) and the speed of actually making whatever the new idea is.
Self-replicating robots that, in vast swarms, can make any product whose build process is sufficiently well defined would change all that. New cities could be built in a matter of months by enormous swarms of robots installing prefabricated components from elsewhere. Newer designs of cars, clothes, furniture—far fewer limits.
ML systems that can find a predicted-optimal design and send it for physical prototyping, so its design parameters can be checked, are another way to remove some of the bottlenecks behind a new technology. Another is that the ‘early access’ version might still have problems, but the financial model will probably be ‘rental’ rather than purchase.
That sounds worse, but the upside is that rental removes the barrier to adoption. You don’t need to come up with $XXX for the latest gadget; you just make the first payment and you have it. The manufacturer doesn’t need to force you into a contract either, because their cost to recycle the gadget if you don’t want it is low.
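The predict-then-prototype loop described above could be sketched roughly like this. Everything here is invented for illustration: `predictor` stands in for the ML design model and `prototype_test` for the physical check of the design parameters.

```python
import random

def predictor(design):
    # Surrogate ML model: predicted performance-per-cost of a candidate
    # design (a single number here). Toy objective, optimum at 0.7.
    return 1.0 - (design - 0.7) ** 2

def prototype_test(design):
    # "Physical" prototyping step: the real objective, which the predictor
    # only approximates (here, with a small systematic offset).
    return 1.0 - (design - 0.65) ** 2

def design_loop(rounds=5, candidates=50, tolerance=0.05, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(rounds):
        # The ML system proposes candidates and picks the predicted optimum.
        proposals = [rng.random() for _ in range(candidates)]
        pick = max(proposals, key=predictor)
        # Prototyping checks whether the prediction holds up within tolerance.
        measured = prototype_test(pick)
        if abs(predictor(pick) - measured) <= tolerance:
            if best is None or measured > best[1]:
                best = (pick, measured)
    return best

best_design, best_score = design_loop()
```

The point of the sketch is the bottleneck removal: the expensive human step is reduced to a pass/fail check of an already-optimized candidate, so iteration speed is set by the prototyping loop, not by design labor.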
Anyway, the combination of all these factors would create a world of, well, future shock. But it’s not “the machines” doing this to humans; it would be a horde of separate, mainly human groups doing this to each other. It’s also quite possible this kind of technology will, in some areas, negate some of the advantages of large corporations, in that many types of products will be creatable without the support of a large institution.
I don’t think, from the perspective of humans monitoring a single ML system running a concrete, quantifiable process—industry or mining or machine design—that it will be unexplainable. Just like today: tech stacks are already enormously complex, but at each layer someone does know how they work, and we know what they do at the layers that matter.
This seems like the key question.
Ever more complex designs for, say, a mining robot might increasingly resemble some mix of living creatures and fractal artwork, but we’ll still have reports that measure how much performance the design delivers per unit cost.
I think that if we relate to our machines in the same way we relate to biological systems or ecologies, but AI systems actually understand those systems very well, then that’s basically what I mean.
Having reports about outcomes is a kind of understanding, but it’s basically the kind I’m scared of (since e.g. it will be tough to learn about these kinds of systemic risks via outcome-driven reports, and attempts to push down near-misses may just transform them into full-blown catastrophes).
Modern technologies are getting increasingly complicated… but when you get down to it, a car is just a box with wheels and a combustion engine. There aren’t that many ways for an outcome-perception-driven AI to go “oops, I accidentally concealed a human-killing machine gun inside the steering wheel!”, especially if the AI has to subcontract to independent suppliers for parts.
Moreover, the constraints are tight: such a machine gun adds weight and cost with no benefit to the AI’s reward heuristic. A far more likely failure is that the design removes structure somewhere because no collision test needs that material to pass, but the missing structure then causes fatalities in crashes a conservatively designed vehicle would survive, or long-term durability problems.
Human-designed products have exactly this happen as well, however. The difference is that you could patch in another reward-heuristic component and have a new design in the prototyping phase that same week. It would let you move fast, break things, and fix them far faster than human organizations can.
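As a toy illustration of that “patch the reward heuristic” move (all the weights and design scores below are invented): the original heuristic rewards performance and low cost, so the stripped-down design wins; adding a crash-margin term after the failure shows up flips the choice.

```python
# Hypothetical multi-component reward heuristic. A "design" is just a dict of
# scored attributes; the heuristic is a weighted sum over whichever
# components the current version of the heuristic includes.

def make_reward(weights):
    def reward(design):
        return sum(weights[k] * design[k] for k in weights)
    return reward

designs = {
    "stripped":     {"performance": 0.9, "neg_cost": 0.8, "crash_margin": 0.2},
    "conservative": {"performance": 0.8, "neg_cost": 0.6, "crash_margin": 0.9},
}

# v1: no safety term, so the optimizer favors the stripped design.
reward_v1 = make_reward({"performance": 1.0, "neg_cost": 1.0})
winner_v1 = max(designs, key=lambda name: reward_v1(designs[name]))

# v2: the "patch" adds a crash-margin component, and the pick flips.
reward_v2 = make_reward({"performance": 1.0, "neg_cost": 1.0,
                         "crash_margin": 2.0})
winner_v2 = max(designs, key=lambda name: reward_v2(designs[name]))
```

The speed claim in the text is that shipping `reward_v2` is a one-line change feeding an automated pipeline, versus the months a human organization needs to re-spec, redesign, and retool.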
Yeah, that was my initial reaction as well.