Culture, thought, human DNA, human values, etc. have been stripped down to their functional carbon and hydrogen atoms, and everything now just optimizes for paperclip manufacturing or whatever. Hence D(u/r) = D(u).
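To spell the step out (a sketch, assuming D(·) is meant as a Kolmogorov-style description length, with D(u/r) the conditional description length of human values u given the representations r the supercontroller leaves behind): the claim is that r carries no information about u, so the algorithmic mutual information between them vanishes,

$$ I(u : r) \;=\; D(u) - D(u/r) \;\approx\; 0 \quad\Longleftrightarrow\quad D(u/r) \approx D(u). $$

Here I(u : r) is just shorthand for that difference, not notation taken from the original post.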
I contest this derivation. Whatever process produced humanity also made it so that humanity produced an unsafe supercontroller. This may mean that whatever the supercontroller is optimized for is itself part of the process that produced humanity, and so it does not make g(u,h) go to zero.
Of course, without a concrete model, it’s impossible to say for certain.
So, the key issue is whether or not the representations produced by the paperclip optimizer could have been produced by other processes. If there is another process that produces the paperclip-optimized representations more efficiently than going through the process of humanity, then that process dominates the calculation of D(r).
In other words, for this objection to make sense, it’s not enough for humanity to have been sufficient for the R scenario. Humanity must be necessary for producing R, or at least necessary for producing it in the most efficient possible way.
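Put in the same notation (still a sketch, assuming D behaves like Kolmogorov complexity relative to some fixed universal machine U, with ℓ(p) the length of program p; U and ℓ are standard notation, not from the original thread):

$$ D(r) \;=\; \min\{\, \ell(p) : U(p) = r \,\}, $$

so D(r) is set by the shortest program that outputs r. The point above then reads: humanity h only figures in this account if routing through h is the cheapest way to get r. If some program that never simulates h already achieves the minimum, conditioning on h buys essentially no compression,

$$ D(r/h) \;\approx\; D(r) \quad\Longrightarrow\quad D(r) - D(r/h) \;\approx\; 0, $$

and humanity drops out of the calculation of D(r).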
What are your criteria for a more concrete model than what has been provided?
So, if humanity produces an ultra-intelligence that eats humanity and produces a giant turd, then humanity was—mathematically speaking—the biological boot loader of a giant turd.
“Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” -Elon Musk
...Especially if algebra nerds ignore F. A. Hayek, S. Milgram, Clay Conrad, etc., and specialize in a very narrow domain (Bayesian logic), while trying to figure out how to create a single superintelligence.
Multiple instantiations, multiple environments, certain core readings imparting a sense of “many” human values. …That will either be enough to kill us all, or get us to the “next level,” …rinse and repeat. …This will happen anyway, whether you like it or not, given the existence of Hawkins, Kurzweil, Honda, Google, (some guy in a factory in Thailand) etc.
PS: The message keeps popping up: “You are trying to post too fast. try again in 1 minute.” …How much Karma (= how many months of making innocuous sycophantic remarks) do I need in order to post more quickly?
You have possibly interesting things to say, but they don’t come out because of the noise produced by your presupposition that everyone else here is retarded.
If you think that you have to make innocuous sycophantic remarks to gain enough karma, then for God’s sake just make innocuous sycophantic remarks efficiently, instead of crying like a baby!