Probably true—and few want wireheading machines—but the issues are the scale of the technical challenges, and—if these are non-trivial—how much folk will be prepared to pay for the feature. In a society of machines, maybe the occasional one that turns Buddhist—and needs to go back to the factory for psychological repairs—is within tolerable limits.
Many apparently think that making machines value “external reality” fixes the wirehead problem—e.g. see “Model-based Utility Functions”—but it leads directly to the problems of what you mean by “external reality” and how to tell a machine that that is what it is supposed to be valuing. It doesn’t look much like solving the problem to me.