I recently found Less Wrong through Eliezer’s Harry Potter fanfic, which has become my second favorite book. Thank you so much, Eliezer, for reminding me how rich my Art can be.
I was also delighted to find out (not so surprisingly) that Eliezer is an AI researcher. I have, over the past several months, decided to change my career path to AGI. So many of these articles have been helpful.
I have been a rationalist for as long as I can remember. But I was raised as a Christian, and for some reason it took me a while to question the premise of God. Fortunately, as soon as I did, I rejected it. Then it was up to me to 1) figure out how to be immortal and 2) figure out morality. I’ll be signing up for cryonics as soon as I can afford it. Life is my highest value because it is the terminal value; it is required for any other value to be possible.
I’ve been reading this blog every day since I found it, and I hope to keep benefiting from it. I’m usually quiet, but I suspect the more I read, the more I’ll want to comment and post.
AGI is death; you want Friendly AI in particular and not AGI in general.
“Life” is not the terminal value; terminal value is very complex.
I’m not sure of the technical definition of AGI, but essentially I mean a machine that can reason. I don’t plan to give it outputs until I know what it does.
I don’t mean that life is the terminal value that all human actions reduce to. I mean it in exactly the way I said above: for me to achieve any other value requires that I am alive. I also don’t mean that every value I have reduces to my desire to live, just that, if it comes down to one or the other, I choose life.
If you are determined to read the sequences, you’ll see. At least read the posts linked from the wiki pages.
Well, you’ll have the same chance of successfully discovering that the AI does what you want as of a sequence of coin tosses spontaneously spelling out the text of “War and Peace”. Even if you have a perfect test, you still need the tested object to have some chance of satisfying the test criteria. And in this case you’ll have neither, as reliable testing is also not possible. You need to construct the AI with correct values from the start.
Acting in the world might require you to be alive, but it’s not necessary for you to be alive in order for the world to have value, all of this judged by your own preferences. It does matter to you what happens to the world after you die. A fact doesn’t disappear the moment it can no longer be observed. And it’s possible to be mistaken about your own values.
I am not sure what you mean by “give it outputs”, but you may be interested in this investigation of attempting to contain an AGI.
Then I think you meant that “Life is the instrumental value.”
To amplify: Terminal Values and Instrumental Values
A value that’s instrumental to every other value is still instrumental.