Oh, melting the GPUs would not actually be a pivotal act.
Well yeah, that’s my point. It seems to me that any pivotal act worthy of the name would essentially require the AI team to become an AGI-powered world government, which seems pretty darn difficult to pull off safely. The superpowered-AI-propaganda plan falls under this category. The long-lasting nanomachines idea is cute, but I bet people would just figure out ways to evade the nanomachines’ definition of ‘GPU’.
Note that these aren’t intended to be very good or realistic suggestions; they’re just meant to point to different dimensions of the possibility space.
Fair enough... but if the pivotal act plan is workable, there should be some member of that space that actually is good, i.e. seems like it has a shot of working out in reality (and which wouldn’t require a full FAI). I’ve never heard of one and am having a hard time thinking of one. Now, it could be that MIRI or others think they have a workable plan whose details they don’t want to share due to infohazard concerns. But as an outside observer, I have to assign a certain amount of probability to that being self-delusion.
Well yeah, that’s my point. It seems to me that any pivotal act worthy of the name would essentially require the AI team to become an AGI-powered world government, which seems pretty darn difficult to pull off safely. The superpowered-AI-propaganda plan falls under this category.
Yeah. I think this sort of thing is why Eliezer thinks we’re doomed – getting humanity to coordinate collectively seems doomed (e.g., see gain-of-function research), and there are no weak pivotal acts that aren’t basically impossible to execute safely.
The nanomachine GPU-melting pivotal act is meant to be a gesture at the difficulty/power level, not an actual working example. The other gestured-at example I’ve heard is “upload aligned people who think hard for 1000 subjective years and hopefully figure something out.” I’ve heard someone from MIRI argue that that one is also unworkable, but I wasn’t sure of the exact reasons.
The other gestured-at example I’ve heard is “upload aligned people who think hard for 1000 subjective years and hopefully figure something out.” I’ve heard someone from MIRI argue that that one is also unworkable, but I wasn’t sure of the exact reasons.
The standard counterargument to that one is “by the time we can do that, we’ll already have beyond-human AI capabilities (since running humans is a lower bound on what AI can do), and therefore foom”.
You could have another limited AI design a nanofactory to make ultra-fast computers to run the emulations. I think a more difficult problem is getting a limited AI to do neuroscience well. Actually I think this whole scenario is kind of silly, but given the implausible premise of a single AI lab having a massive tech lead over all others, neuroscience may be the bigger barrier.
Yeah. I think this sort of thing is why Eliezer thinks we’re doomed
Hmm, interesting...but wasn’t he more optimistic a few years ago, when his plan was still “pull off a pivotal act with a limited AI”? I thought the thing that made him update towards doom was the apparent difficulty of safely making even a limited AI, plus shorter timelines.
The other gestured-at example I’ve heard is “upload aligned people who think hard for 1000 subjective years and hopefully figure something out.”
Ah, that actually seems like it might work. I guess the problem is that an AI that can do neuroscience well enough to pull this off would have to be pretty general. Maybe a more realistic plan along the same lines would be to use ML to replicate the functional activity of various parts of the human brain and create ‘pseudo-uploads’. Or just try to create an AI with a similar architecture and a roughly similar reward function to ours, hoping that human values are more generic than they might appear.
It seems relatively plausible that you could use a limited AGI to build a nanotech system capable of uploading a diverse assortment of living tissue (non-brain, or maybe only very small brains) without damaging it, and that this system would learn how to upload tissue in a general way. Then you could use the system (not the AGI) to upload humans, after testing it on increasingly complex animals. It would be a relatively inefficient emulation, but it doesn’t seem obviously doomed to me.
By the time the hardware to do this is available, though, it would probably be too late.