It seems contradictory to previous experience that humans should develop a technology with “black box” functionality, i.e. one whose effects cannot be foreseen and accurately controlled by the end user. Technology has to be designed, and it is designed with an effect or result in mind. It is then optimized so that the end user understands how to call forth this effect. So positing an effective equivalent of the mythological figure “Genie” in technological form ignores the optimization-for-use that would take place at each stage of developing an Outcome Pump. The technology-falling-from-heaven that is the Outcome Pump demands that we reverse-engineer the optimization of parameters which would necessarily have taken place if it had in fact developed the way human technologies do.
I suppose the human mind has a very complex “ceteris paribus” function which holds all these background parameters equal to their previous values without explicitly stating them, and the ironic-wish-fulfillment-Genie idea involves fulfilling a wish while violating an unspoken ceteris paribus rule. Demolishing the building structure violates ceteris paribus far more than the movements of a robot retriever would in moving aside burning material to save the woman. The material displaced from the building should be as nearly equal to the woman’s body weight as possible; inducing an explosion is a horrible violation of the objective, if the Pump could just be made to sense the proper (implied) parameters.
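To make that concrete, here is a minimal sketch in Python, under the (big) assumption that the world state could be summarized as a vector of named parameters. Everything here (`Outcome`, `goal_satisfied`, the example numbers) is hypothetical, invented purely for illustration: the Pump scores candidate outcomes by whether they fulfill the wish, and then prefers whichever one disturbs the background parameters least.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    params: dict[str, float]   # background world-state parameters after the wish
    goal_satisfied: bool       # did the explicit wish come true?

def ceteris_paribus_score(outcome: Outcome, baseline: dict[str, float],
                          weight: float = 1.0) -> float:
    """Higher is better: wish fulfillment first, then minimal side effects."""
    if not outcome.goal_satisfied:
        return float("-inf")   # an unfulfilled wish scores worst of all
    # Penalize every background parameter that drifts from its pre-wish value.
    deviation = sum(abs(outcome.params[k] - baseline[k]) for k in baseline)
    return -weight * deviation

# Hypothetical example: the explosion fulfills the wish but moves far more
# of the world than the careful retrieval does, so the retrieval wins.
baseline = {"building_mass_tons": 100.0, "people_unharmed": 2.0}
candidates = [
    Outcome({"building_mass_tons": 5.0, "people_unharmed": 1.0}, True),   # explosion
    Outcome({"building_mass_tons": 99.9, "people_unharmed": 2.0}, True),  # retrieval
]
best = max(candidates, key=lambda o: ceteris_paribus_score(o, baseline))
```

Of course, the hard part this sketch assumes away is exactly the problem under discussion: enumerating the right background parameters in the first place.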
If the market forces of supply and demand continue to undergird technological progress (i.e. research, development, and manufacturing), then the development of a sophisticated technology not optimized for use is problematic: who pays for the second round of research and implementation? Surely not the customer, when you give him an Outcome Pump whose every use could result in the death and destruction of his surrounding environs and family members. Granted, this is an aside and maybe not pertinent in the context of this discussion.
It is now 15 years later. We have large neural nets trained on large amounts of data that do impressive things by “learning” extremely complicated algorithms that might as well be black boxes, and that sometimes have bizarre and unanticipated results that are nothing like the ones we would have wanted.
“if the Pump could just be made to sense the proper (implied) parameters.”
You’re right, this would be an essential step. I’d say the main point of the post was to talk about the importance, and especially the difficulty, of achieving this.
Re optimisation for use: remember that this involves a certain amount of trial and error. In the case of dangerous technologies like explosives, firearms, or high-speed vehicles, the process can often involve human beings dying, usually in the “error” part of trial and error.
And if the technology in question were a super-intelligent AI, smart enough to fool us and engineer whatever outcome best matched its utility function? Then we could potentially find ourselves unable to fix the “error”.
Please excuse the cheesy line, but sometimes you can’t put the genie back in the bottle.
Re the workings of the human brain: I have to admit that I don’t know the meaning of ceteris paribus, but I think that the brain mostly works by pattern recognition. In a “burning house” scenario, people would mostly contemplate the options that they thought were “normal” for the situation, or that they had previously imagined, heard about, or seen on TV.
Generating a lot of different options and then comparing them for expected utility isn’t the sort of thing that humans do naturally. It’s the sort of behaviour we have to be trained in, if you want us to apply it.
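For contrast, here is a toy illustration of that deliberate procedure, explicitly generating options and ranking them by expected utility, in the same hedged spirit as above; the options, probabilities, and utilities are all invented:

```python
# Each option maps to a list of (probability, utility) pairs for its
# possible outcomes; the probabilities within an option sum to 1.
options = {
    "run in through the front door": [(0.7, -100.0), (0.3, 50.0)],
    "move the burning material aside": [(0.9, 40.0), (0.1, -20.0)],
    "wait for the fire brigade": [(0.5, 10.0), (0.5, -60.0)],
}

def expected_utility(outcomes):
    """Probability-weighted average utility of one option."""
    return sum(p * u for p, u in outcomes)

# Rank every option by expected utility and pick the best.
best = max(options, key=lambda name: expected_utility(options[name]))
print(f"best option: {best} (EU = {expected_utility(options[best]):+.1f})")
```

The point stands: nothing in untrained human cognition runs this loop by default; pattern recognition just serves up the “normal” option.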