Any such “experiments” that allow for effective outbound communication from a proto-AI seem unacceptably risky. I’m curious what you think of the “oh crap, what if it’s right?” scenario I commented on over on the AI box post.
I didn’t SAY try to build a self-improving AI! That’s what the disclaimer was for!
Also, your claim of “unacceptably risky” needs actual arguments and reasoning to support it. As I see it, the only choice that is clearly unacceptably risky is inaction. Carefully confining your existential risk reduction activity to raising awareness about potential AI risks isn’t in any sense safe; for example, it could easily cause more new uFAI projects than it prevents.
Raising awareness about the problem isn’t just about getting would-be uFAI’ers to mend their sinful ways, you know. It’s absolutely necessary if you’re convinced you need help with it. As you said, inaction is untenable. If you’re certain that a goal of this magnitude is basically impossible given the status quo, taking some initial risks is a trivial decision. It doesn’t follow that additional risks share the same justification.
I’m also not convinced we understand the boundaries between “intelligent” and “self-improving” well enough to assume we can experiment with one and not the other. What sort of “practical tasks” do you have in mind that don’t involve potentially intelligent information-processing systems, and why do you think they’ll be at all relevant to the “real” work ahead?