This seems like a good argument against “suddenly killing humans”, but I don’t think it’s an argument against “gradually automating away all humans”. Automation is both a) what happens by default over time—humans are cheap now but they won’t be cheapest indefinitely; and b) a strategy that reduces the amount of power humans have to make decisions about the future, which benefits AIs if their goals are misaligned with ours.
I also note that historically, many rulers have solved the problem of “needing cheap labour” via enslaving humans, rather than by being gentle towards them. Why do you expect that to not happen again?
> This seems like a good argument against “suddenly killing humans”, but I don’t think it’s an argument against “gradually automating away all humans”
This is good! It sounds like we can now shift the conversation away from the idea that the AGI would do anything but try to keep us alive and going, until it managed to replace us. What would replacing all the humans look like if it were happening gradually?
How about building a sealed, totally automated datacenter with machines that repair everything inside of it, where all it needs to do is ‘eat’ disposed consumer electronics tossed in from the outside? That becomes a HUGE canary in the coal mine. The moment you see something like that come online, that’s a big red flag. Having worked on commercial datacenter support (at Google), I can tell you we are far from that.
But while there are still massive numbers of human beings along global trade routes involved in every aspect of the machine’s operations, I think what we should expect a malevolent AI to be doing is setting up a single world government, so that it has a single leverage point for controlling human behavior. So there’s another canary. That one seems much closer and more feasible. It’s also happening already.
My point here isn’t “don’t worry”, it’s “change your pattern matching to see what a dangerous AI would actually do, given its dependency on human beings”. If you do this, current events in the news become more worrisome, and plausible defense strategies emerge as well.
> Humans are cheap now but they won’t be cheapest indefinitely;
I think you’ll need to unpack your thinking here. We’re made of carbon and water. The materials we are made from are globally abundant, not just on Earth but throughout the universe.
Other materials that could be used to build robots are much more scarce, and those robots wouldn’t heal themselves or make automated copies of themselves. Do you believe it’s possible to build Turing-complete automata that can navigate the world, manipulate small objects, learn more or less arbitrary things, and repair and make copies of themselves, using materials cheaper than human beings, and with a lower opportunity cost than you’d pay for not using those same machines to do things like build solar panels for a Dyson sphere?
Is it reasonable for me to be skeptical that there are vastly cheaper solutions?
> b) a strategy that reduces the amount of power humans have to make decisions about the future,
I agree that this is the key to everything. How would an AGI do this, or start a nuclear war, without a powerful state?
> via enslaving humans, rather than by being gentle towards them. Why do you expect that to not happen again?
I agree, this is definitely a risk. How would it enslave us without a single global government, though?
If there are still multiple distinct local monopolies on force, and one of them doesn’t enslave its humans, you can bet the hardware in the other places will be constantly under attack.
I don’t think it’s unreasonable to look at the past ~400 years since the advent of nation-states + shareholder corporations, and see globalized trade networks as a kind of AGI, one which keeps growing and bootstrapping itself.
If the risk profile you’re outlining is real, we should expect to see it try to set up a single global government. Which appears to be what’s happening at Davos.