Agreed. Which is why the scientific approach is to think about how to refute the claim that the earth is flat using only information you personally gather, rather than making snarky comments about the implausibility of the conspiracy.
Ok, now you’re just (intentionally?) missing the point of the hypothetical.
Also, science can be, and has been (and certainly still is), wrong about a lot of stuff. (Nutrition being a recent, less controversial example.)
That what you describe as the “real point” amounts to an appeal to authority.
I’d have to say no here, but if you asked about plants observing light, or even ice observing heat, I’d say “sure, why not”. There are various differences between what ice does, what a Roomba does, and what I do, but they are mostly quantitative, and using one word for them all should be fine.
What are you basing this distinction on? More importantly, how is whatever you’re basing this distinction on relevant to grounding the concept of empirical reality?
Using Eliezer’s formulation of “making beliefs pay rents in anticipated experiences” may make the relevant point clearer here. Specifically, what’s an “experience”?
Science is based on the principle of nullius in verba (take no one’s word for it). So your attitude is anti-scientific and likely to fall afoul of Goodhart’s law.
Ok, so where does it store the administrator password to said server?
How do you know? Does a falling rock also observe the gravitational field?
I don’t think this could work. Where would the virus keep its private key?
even for the improvement of the virus.
I don’t think this would work. It requires some way for the virus to keep the human it has entrusted with editing its programming from modifying it to simply send that human all the money it acquires.
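To make the objection concrete, here’s a minimal sketch (my own illustration, not anything from the thread; it assumes a signed-update scheme and the third-party `cryptography` package). The virus can safely carry only the public verification key, but whoever holds the matching signing key can sign anything at all:

```python
# Hypothetical sketch: the virus embeds a verification key and only accepts
# signed updates. The signing key still has to live with *someone* -- here,
# the trusted human editor.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # held by the human editor
verify_key = signing_key.public_key()       # this half can ship inside the virus

update = b"improved virus code v2"
signature = signing_key.sign(update)

# The virus can verify that an update came from the key holder...
try:
    verify_key.verify(signature, update)
    print("update accepted")
except InvalidSignature:
    print("update rejected")

# ...but it has no way to verify what the update *does*: the key holder can
# just as easily sign an update that sends them all the money.
```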
Finally, being conscious doesn’t mean anything at all. It has no relationship to reality.
What do you mean by “reality”? If you’re an empiricist, as it looks like you are, you mean “that which influences our observations”. Now what is an “observation”? Good luck answering that question without resorting to qualia.
A: “I would have an advantage in war so I demand a bigger share now”
B: “Prove it”
A: “Giving you the info would squander my advantage”
B: “Let’s agree on a procedure to check the info, and I precommit to giving you a bigger share if the check succeeds”
A: “Cool”
Simply by telling B about the existence of an advantage, A is giving B info that could weaken that advantage. Also, what if the advantage is a way to partially cheat in precommitments?
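For what it’s worth, the usual primitive for the “commit now, reveal and check later” step in the quoted dialogue is a hash commitment. A minimal Python sketch (my own illustration; the scenario string is hypothetical):

```python
import hashlib
import secrets

def commit(info: bytes) -> tuple[bytes, bytes]:
    """Publish the digest now; keep the nonce secret until the reveal."""
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + info).digest(), nonce

def check(digest: bytes, nonce: bytes, info: bytes) -> bool:
    """B's check: does the revealed (nonce, info) match the old commitment?"""
    return hashlib.sha256(nonce + info).digest() == digest

# A commits to its claimed advantage without revealing it...
digest, nonce = commit(b"I control communication satellite X")
# ...and reveals only after B has precommitted to the bigger share.
assert check(digest, nonce, b"I control communication satellite X")
```

Note that this only shows A possessed the claimed info at commit time; it does nothing to establish that the claimed advantage is real, which is part of why the objection above still bites.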
Even if A is FAI and B is a paperclipper, as long as both use correct decision theory, they will instantly merge into a new SI with a combined utility function.
What combined utility function? There is no way to combine utility functions.
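To spell out why (a sketch of the standard obstruction, under the usual linear-combination proposal):

U(x) = \lambda \, U_A(x) + (1 - \lambda) \, U_B(x), \qquad 0 < \lambda < 1

Each U_i is only defined up to positive affine transformation, so rescaling U_A to c \cdot U_A leaves A’s preferences completely unchanged yet is equivalent to picking a different \lambda; nothing in the problem singles out a privileged weighting.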
Maybe you can’t think of a way to set up such trade, because emails can be faked etc, but I believe that superintelligences will find a way to achieve their mutual interest.
They’ll also find ways of faking whatever communication methods are being used.
Empirically, people who believe in the Christian hell don’t behave dramatically better than people who do.
Hasn’t quite been my experience, but whatever.
The doctrine of hell whose (de)merits we’re discussing doesn’t actually say that people are only eligible for hell if they have never stopped believing in it.
Of course; otherwise it would be completely useless, as it would simply motivate people to stop believing in it.
The more people the threat is known to, the less likely that they all comply.
And someone who doesn’t know about it is even less likely to comply. If you’ve already concluded that threatening to torture someone is worth it for the increased chance of getting compliance, then the exact same calculation applies to everyone else.
it seems to me that you want the threat known to a small number of people and to persuade them to work towards a highly specific goal that those people are particularly well-suited to achieving.
Not really. In fact one reason for universality is to discourage reactions like Eliezer’s.
Because it seems incredibly unlikely to maximize utility,
Avoiding for the moment the question of whether utilitarianism is the right approach to these kinds of problems: there is in fact a decision-theory argument in favor of this. Eliezer stumbled upon a version of it and didn’t react well, specifically banning all detailed discussion of it from LW in an extremely ham-handed manner.
neither does it accord with what seems to me a general principle that punishment should be at most proportionate to the crime being punished.
Where does this principle come from? Can you provide any utilitarian justification for it? It’s a mildly useful Schelling point in certain rather specific circumstances, but that’s about it.
I am not much interested in turning this into a lengthy argument about whether the available evidence actually does or doesn’t support Christianity.
I’m not necessarily either. I’m not even a Christian. That’s what makes the number of laughably bad arguments people use to deconvert themselves so frustrating.
Punishing bad people may or may not be morally monstrous. Punishing finite badness with eternal torture is morally monstrous.
Why? I actually disagree with this point.
The scientific doctrine of light and matter does not really say that light and matter are “both particle and wave”; that is a simplification for popular presentation. What it actually does say is difficult for human brains, but even the appearance of contradiction between “particle” and “wave” goes away completely once it is understood.
One could make the same argument about the trinity. BTW, do you actually understand quantum physics well enough for that to happen?
Christian doctrines as morally monstrous (hell)
Why is punishing bad people morally monstrous?
probably internally incoherent (Trinity, dual nature of Christ)
Do you also find the scientific doctrine of light and matter being both particle and wave internally incoherent?
Honestly, the problem with this approach is that it tends to degenerate to “when my side tells lies, they’re still emotionally true; when the other side makes inconvenient statements that are true, I can dismiss them as emotionally false”.