Luke, thank you for your honest response and for pointing me towards your article. I will read it and formulate a response regarding timelines and the viable lines of research that I believe will lead to AGI by 2025 with medium-high confidence.
I disagree that MIRI is the only organization working on Friendly AI, although you may certainly be the only one working on your particular vision of FAI. Ben Goertzel has taken a very pragmatic approach to both AGI and Friendliness, so his OpenCog Foundation would be a clear alternative for my dollars. I know he has taken potshots at MIRI for promoting what he calls the Scary Idea, but he nevertheless appears to be genuinely concerned with Friendliness; he is simply taking a different approach. As an AI developer myself, there are also various self-directed options I could invest my time and money in.
To be clear, I would rather have a fully fleshed-out and proven Friendly AGI design before anyone writes a single line of self-modifying code. I would also like a pony and a perpetual motion machine. I have absolutely no faith in anyone who claims to be able to slow down or stop AGI research: the component pieces of AGI are simply too profitable and too far-reaching in their consequences^1. Because of the short timescales, I believe it does the most good to study Friendliness in the context of actual AGI designs, and to devise safety procedures for running AGI projects that bias us towards Friendliness as much as possible, even if Friendliness cannot be absolutely guaranteed at this point. Work on provably safe AGI should proceed in parallel in case it pays off, but abstract work on a provably safe design has zero utility until it is finished, and I am not confident it would be finished before a hard takeoff occurs.
^1 Not just AGI itself, but various pieces of machine learning, hierarchical planning, goal formation, etc. It would be like trying to outlaw fertilizer because it could be used to make a fertilizer bomb. How then do you grow your food?