“AGI is death, you want Friendly AI in particular and not AGI in general.”
I’m not sure of the technical definition of AGI, but essentially I mean a machine that can reason. I don’t plan to give it outputs until I know what it does.
“‘Life’ is not the terminal value, terminal value is very complex.”
I don’t mean that life is the terminal value that all of a human’s actions reduce to. I mean it exactly as I said above: for me to achieve any other value requires that I am alive. I also don’t mean that every value I have reduces to my desire to live, just that, if it comes down to one or the other, I choose life.
If you get around to reading the sequences, you’ll see. At least read the posts linked from the wiki pages.
“I’m not sure of the technical definition of AGI, but essentially I mean a machine that can reason. I don’t plan to give it outputs until I know what it does.”
Well, you’ll have about the same chance of discovering that the AI does what you want as a sequence of coin tosses has of spontaneously spelling out the text of “War and Peace”. Even if you had a perfect test, the tested object would still need some chance of satisfying the testing criteria. In this case you have neither, since reliable testing is not possible either. You need to construct the AI with correct values from the start.
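To put rough numbers on that analogy, here is a back-of-the-envelope sketch (my own illustration, not from the thread; the figures of roughly 3,000,000 characters and 5 bits per character are assumptions):

```python
import math

# Back-of-the-envelope: the chance that fair coin tosses spell out a fixed text.
# Assumed figures (illustrative only): ~3,000,000 characters in the text,
# ~5 bits (32 possible symbols) needed to pin down each character.
chars = 3_000_000
bits_per_char = 5
total_bits = chars * bits_per_char  # ~15 million coin tosses must all land right

# The probability 2**-total_bits underflows any float, so report its base-10 exponent.
log10_prob = -total_bits * math.log10(2)
print(f"P(tosses spell the text) ~ 10^{log10_prob:,.0f}")  # ~ 10^-4,515,450
```

At odds like these, testing is beside the point: a design with essentially zero prior chance of being right cannot be filtered into a correct one.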
“I don’t mean that life is the terminal value that all of a human’s actions reduce to. I mean it exactly as I said above: for me to achieve any other value requires that I am alive.”
Acting in the world might require you to be alive, but your being alive is not necessary for the world to have value, all according to your own preferences. It does matter to you what happens to the world after you die. A fact doesn’t disappear the moment it can no longer be observed. And it’s possible to be mistaken about your own values.
“I don’t mean that life is the terminal value that all of a human’s actions reduce to. I mean it exactly as I said above: for me to achieve any other value requires that I am alive. I also don’t mean that every value I have reduces to my desire to live, just that, if it comes down to one or the other, I choose life.”
Then I think you meant that “Life is the instrumental value.”
“I’m not sure of the technical definition of AGI, but essentially I mean a machine that can reason. I don’t plan to give it outputs until I know what it does.”
I am not sure what you mean by “give it outputs”, but you may be interested in this investigation of attempting to contain an AGI.
“Then I think you meant that ‘Life is the instrumental value.’”
To amplify: Terminal Values and Instrumental Values.
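To make that distinction concrete, here is a minimal toy model (my own sketch; the names World, flourishing, and act are invented for illustration): utility is defined over world states, so a world can carry value whether or not the agent survives to observe it, while staying alive is valuable only instrumentally, because it lets the agent keep steering the world. This is the point made above about the world still mattering after you die.

```python
from dataclasses import dataclass

@dataclass
class World:
    flourishing: float  # stand-in for whatever is terminally valued
    agent_alive: bool

def terminal_utility(w: World) -> float:
    # Depends only on the state of the world, not on whether the agent
    # survives to observe it; the preference still ranks worlds.
    return w.flourishing

def act(w: World) -> World:
    # Survival is instrumentally valuable: a dead agent can no longer
    # steer the world toward higher-utility states.
    if not w.agent_alive:
        return w
    return World(w.flourishing + 1.0, agent_alive=True)

# The utility of a world the agent never observes is still well-defined:
print(terminal_utility(World(flourishing=5.0, agent_alive=False)))  # 5.0
```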