I found the “open channel” metaphor obscure from the quote alone, so I looked up some context. The open channel stands in contrast to the blind alley of seizing on a single belief that may be wrong.
I noticed that later in the passage, he says:
It is our responsibility to leave the men of the future with a free hand. In the impetuous youth of humanity, we can make grave errors that can stunt our growth for a long time. This we will do if we, so young and ignorant, say we have the answers now, if we suppress all discussion, all criticism, saying, ‘This is it, boys! Man is saved!’ Thus we can doom man for a long time to the chains of authority, confined to the limits of our present imagination. It has been done so many times before.
This doesn’t sit well with dreams of making a superintelligent FAI that will be the last invention we ever need make, after which we will have attained the perfect life for everyone always.
Indeed, but it does agree with the argument for the importance of not getting AI wrong in a way that does chain the future.
It sits well with FAI, but poorly with assuming that FAI will instantly or automatically make everything perfect. The warning is against assuming that a particular theory must be true, or that a particular action must be optimal. Presumably that is good advice for the AI as well, at least while it is “growing up” (recursively self-improving).