“If we take everything into account — not only what the ancients knew, but all of what we know today that they didn’t know — then I think that we must frankly admit that we do not know.
But, in admitting this, we have probably found the open channel.”
Richard Feynman, “The Value of Science,” public address at the National Academy of Sciences (Autumn 1955); published in What Do You Care What Other People Think? (1988); republished in The Pleasure of Finding Things Out: The Best Short Works of Richard P. Feynman (1999), edited by Jeffrey Robbins.
I found the “open channel” metaphor obscure from the quote alone, so I looked up some context. The open channel is a contrast to the blind alley of clinging to a single belief that may be wrong.
I noticed that later in the passage, he says:
It is our responsibility to leave the men of the future with a free hand. In the impetuous youth of humanity, we can make grave errors that can stunt our growth for a long time. This we will do if we, so young and ignorant, say we have the answers now, if we suppress all discussion, all criticism, saying, ‘This is it, boys! Man is saved!’ Thus we can doom man for a long time to the chains of authority, confined to the limits of our present imagination. It has been done so many times before.
This doesn’t sit well with dreams of making a superintelligent FAI that will be the last invention we ever need make, after which we will have attained the perfect life for everyone always.
Indeed, but it does agree with the argument that it is important not to get AI wrong in a way that would chain the future.
It sits well with FAI, but poorly with assuming that FAI will instantly or automatically make everything perfect. The warning is against assuming that a particular theory must be true, or that a particular action must be optimal. Presumably good advice for the AI as well, at least while it is “growing up” (recursively self-improving).