“…the super interesting question of why we don’t see massively more, among humans, of exactly all the things I would do if I was in the AI’s position.”
This keeps me up at night. It’s ridiculous just how fragile civilization is, and surprising just how little destruction of institutional value in pursuit of individual or group power actually happens. One could argue that group-cohesion technology has reached the point where some collections of humans are effectively ASIs: more powerful and less comprehensible than any single member.
My ASI nightmare is that it simply does what corporate-fascist conspiracy theorists think billionaires already do: increase that fragility in order to control more resources, to the detriment of human flourishing. That could eventually lead to outright population collapse or eradication, but it could also mean 10,000 years of dystopian serfdom, as the AI (or AIs, depending on how identity works for that kind of agent) explores and takes over the universe, using its conscious meat-robots for certain general-purpose manipulation tasks.
As self-replicating, self-repairing (up to a point), complex-action-capable physical actuators, humans are far cheaper, more capable, more flexible, and in some ways more reliable than any mechanical devices achievable with current or foreseeable manufacturing technology. Nanotech may change that, but who knows when it will become feasible.
But I also think that if your model doesn’t explain why we don’t see massively more of that sort of thing coming from humans, then your model has a giant gaping hole in the middle of it, and any conclusions you draw from it should carry the caveat that the model has a giant gaping hole in it.
(My model of the world has this giant gaping hole too. I would really love it if someone could explain what’s going on there, because as far as I can tell from my own observations, the vulnerable world hypothesis is just obviously true, and yet I observe very different things than I would expect given the evidence that convinces me the hypothesis is true.)