I generally think we should first get clear on what we actually believe; once we’ve firmly established that, we can look for maximally effective implementations.
For another thing, HU (hedonistic utilitarianism) may be the best approximation etc. etc., but that’s a claim that should at least be made explicitly.
I agree that it would often be good to be clearer about these points.
For a third thing, what happens when forcibly rewiring people’s brains becomes a realistic option?
At that point, the people who consider themselves hedonistic utilitarians might come up with a theory that says that forcible wireheading is wrong, and switch to calling themselves supporters of that theory. Or they could go on calling themselves HUs despite not forcibly wireheading anyone, in the same way that many people call themselves utilitarians today despite not actually giving away most of their income. Or some of them could decide to start working to forcibly wirehead everyone, in which case they’d become the kinds of people described by my reply 2).
“Only approving of those behaviors that serve to promote HU” is, I think, a separate thing. Or at least, I’d need to see the concept expanded a bit more before I could judge.
By this, I meant to say “only approve of whatever course of action HU says is the best one”.
Yeah, I meant that as a normative “what then”, not an empirical one. I agree that what you describe are plausible scenarios.
In that case, I’m unsure what kind of answer you were expecting (unless the “what then” was meant as a rhetorical question, but even then I’m slightly unsure of what point it was making).
Yes, the “what then” was rhetorical. If I had to express my point non-rhetorically, it’d be something like this:
If you take a position which gives ethically correct results only until such time as some (reasonably plausible) scenario comes to pass, then maybe your position isn’t ethical in the first place. “This ethical framework gives nonsensical or monstrous results in edge cases [of varying degrees of edge-ness]” is, after all, a common and quite justified criticism of ethical frameworks.
It is a point against the framework, certainly. But so far nobody has developed an ethical framework that would have no problems at all, so at the moment we can only choose the framework that’s the least bad.
(Assuming that we wish to choose one in the first place, of course—I do think that there is merit in just accepting that they’re all flawed and then not choosing to endorse any single one.)
Well, that’s been my policy so far, certainly. Some are worse than others, though. “This ethical framework breaks in catastrophic, horrifying fashion, creating an instant dystopia, as soon as we can rewire people’s brains” is pretty darn bad.
… can’t we rewire brains right now? We just … don’t.
Well, we must not be hedonistic utilitarians then, right? Because if we were, and we could, we would.
Edit: Also, what the heck are you talking about?
Wireheading. The term is not a metaphor, and it’s not a hypothetical. You can literally stick a wire into someone’s pleasure centers and activate them, using only non-groundbreaking neuroscience.
It’s been tested on humans, but AFAIK no one has ever felt compelled to go any further.
(Yeah, seems like it might be evidence. But then, maybe akrasia...)
Where and what are these “pleasure centers”, exactly?