Perhaps the only difference between fake gold and real gold is magical: if there’s a ritual that permanently transfigures a rock into gold, people could swap the transfigured gold for the real gold in vaults. Of course, no one in the magical world would accept transfigured gold as payment.
Dumbledore may be able to overrule the contract, but that would do little to stop the political effects of Harry’s statement that Lucius did not kill Hermione. Since it would also reinstate the debt, it doesn’t seem like a net benefit to Dumbledore.
Here’s a situation where an anecdote should reduce our confidence in a belief:
A person’s beliefs are usually well-supported.
When he offers supporting evidence, he usually offers the strongest evidence he knows about.
If this person were to offer an anecdote, it should reduce our confidence in his proposition, because it makes it unlikely he knows of stronger supporting evidence.
I don’t know how applicable this is to actual people.
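Still, here is a rough numerical sketch of what the update would look like under those two assumptions; every number below is made up purely for illustration:

```python
# Illustrative Bayes update: under the two assumptions above, hearing
# only an anecdote from this person lowers the probability that the
# belief is true.  Every number here is made up for illustration.

prior = 0.7  # P(belief true): this person's beliefs are usually well-supported

# If the belief is true, stronger evidence probably exists and he
# probably knows of it, so a bare anecdote is his best offer less often.
p_anecdote_given_true = 0.2
# If the belief is false, an anecdote is more often the best he has.
p_anecdote_given_false = 0.6

p_anecdote = (p_anecdote_given_true * prior
              + p_anecdote_given_false * (1 - prior))
posterior = p_anecdote_given_true * prior / p_anecdote

print(f"prior = {prior:.2f}, posterior = {posterior:.2f}")  # 0.70 -> 0.44
```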
I would raise a hypothesis to consideration because someone was arguing for it, but I don’t think anecdotes are good evidence: I would have similar confidence in a hypothesis supported by an anecdote and in a hypothesis that is flatly stated with no justification. The evidence that raises it to consideration comes from the fact that someone took the time to advocate for it.
This is more of a heuristic than a rule, because there are anecdotes that are strong evidence (“I ran experiments on this last year and they didn’t fit”), but when dealing with murkier issues, they don’t count for much.
A related mistake I made was to be impressed by the cleverness of the aphorism “The plural of ‘anecdote’ is not ‘data’.” There may be a helpful distinction between scientific evidence and Bayesian evidence. But anecdotal evidence is evidence, and it ought to sway my beliefs.
Anecdotal evidence is filtered evidence. People often cite the anecdote that supports their belief while not remembering, or not mentioning, events that contradict it. You can find people citing anecdotes on any side of a debate, and I see no reason the people who are right would cite anecdotes more often.
Of course, if you witness an event with your own eyes, that evidence is not filtered, and you should adjust your beliefs accordingly.
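To put the filtering point in odds form (a generic sketch, not specific to any particular debate):

$$
\frac{P(H \mid \text{anecdote cited})}{P(\neg H \mid \text{anecdote cited})}
= \frac{P(\text{anecdote cited} \mid H)}{P(\text{anecdote cited} \mid \neg H)}
\cdot \frac{P(H)}{P(\neg H)} .
$$

If advocates on either side cite supporting anecdotes about equally often, the likelihood ratio is close to 1 and the odds barely move; an event you witness yourself is sampled from the world rather than from an advocate's memory, so its ratio can sit far from 1.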
I made a hash of that comment; I’m sorry.
If we were to put a bunch of chickens into a room, and on one side of the room was a wolf, and the other side had factory farming cages that protected the chickens from the wolf, I would expect the chickens to run into the cages.
It’s true that chickens can comprehend a wolf much better than they can comprehend factory farming, but I’m not quite sure how that affects this thought experiment.
It takes a bit of work to set up, but TagTime does both the notifications and the logging.
There’s a bit of a problem with the claim that nobody knows what’s what: the usual procedure when someone lacks knowledge is to assign an ignorance prior. The standard methods for generating ignorance priors, usually some formulation of Occam’s razor, assign very low probability to claims as complex as common religions.
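A minimal sketch of the kind of complexity-penalizing prior meant here, assuming a simple 2^(-bits) weighting over hypotheses; the hypotheses and bit counts are invented purely for illustration:

```python
# Sketch of an Occam-style ignorance prior: weight each hypothesis by
# 2**(-description length in bits) and normalize.  The hypotheses and
# bit counts below are invented purely for illustration.

hypotheses_bits = {
    "nothing unusual is going on": 10,
    "one simple new physical law": 40,
    "a detailed doctrine with many specific claims": 300,
}

weights = {h: 2.0 ** -bits for h, bits in hypotheses_bits.items()}
total = sum(weights.values())
priors = {h: w / total for h, w in weights.items()}

for h, p in sorted(priors.items(), key=lambda kv: -kv[1]):
    print(f"{p:.2e}  {h}")
```

Even a few hundred bits of specified detail pushes the prior toward negligible under this scheme, which is the sense in which standard ignorance priors disfavor claims as complex as common religions.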
What do you mean by not rational? People reporting higher satisfaction when they’re rich even though they feel less happiness?
Isn’t there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence, and then claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever? Insofar as we want a benign outcome for humans, I’d have thought that the computational equivalent of Godlike capacity for perspective-taking is precisely what we should be aiming for.
No. Someone who cares about human-level beings but not animals will care about the plight of humans in the face of an AI, but there’s no reason they must care about the plight of animals in the face of humans, because they didn’t care about animals to begin with.
It may be that the best construction for a friendly AI is some kind of complex perspective taking that lends itself to caring about animals, but this is a fact about the world; it falls on the is side of the is-ought divide.
Also, 100% certainty can’t be impossible, because impossibility implies that it is 0% likely, which would be a self-defeating argument. You may find it highly improbable that I can truly be 100% certain. What probability do you assign to me being able to assign 100% probability?
When I say 100% certainty is impossible, I mean that there are no cases where assigning 100% to something is correct, but I have less than 100% confidence in this claim. It’s similar to the claim that it’s impossible to travel faster than the speed of light.
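One standard way to cash this out (a sketch of the usual argument, not something from the comment above): a probability of exactly 1 can never be revised by Bayes' theorem, whatever evidence arrives.

$$
P(H) = 1 \;\Longrightarrow\; P(H \mid E)
= \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
= \frac{P(E \mid H)}{P(E \mid H)} = 1
\qquad \text{for any } E \text{ with } P(E) > 0 .
$$

Equivalently, certainty sits at infinite log-odds, so no finite amount of evidence moves it; and the claim that certainty is never correct still gets less than full confidence, which is where the speed-of-light analogy comes in.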
I’m already taking insects or nematodes into consideration probabilistically; I think it is highly unlikely that they are sentient, and I think that even if they are sentient, their suffering might not be as intense as that of mammals, but since their numbers are so huge, the well-being of all those small creatures makes up a non-negligible term in my utility function.
A priori, it seems that the moral weight of insects would either be dominated by their massive numbers or by their tiny capacities. It’s a narrow space where the two balance and you get a non-negligible but still-not-overwhelming weight for insects in a utility function. How did you decide that this was right?
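A quick back-of-the-envelope version of that objection, with every figure made up for illustration:

```python
# Illustration of the "narrow band" point: expected moral weight of
# insects = population * P(sentient) * weight per sentient insect,
# compared against humans at weight 1 each.  All figures are made up.

humans = 8e9        # rough human population
insects = 1e19      # order-of-magnitude guess often quoted for insects
p_sentient = 0.01   # assumed probability that insects are sentient at all

for per_insect_weight in (1e-2, 1e-6, 1e-9, 1e-13):
    insect_term = insects * p_sentient * per_insect_weight
    print(f"weight {per_insect_weight:.0e}: insect term / human term = "
          f"{insect_term / humans:.1e}")
```

With these made-up figures, the insect term lands within a couple of orders of magnitude of the human term only when the per-insect weight falls in a window of roughly three or four orders of magnitude; outside that window, insects either dominate the utility function or vanish from it.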
It’s not really fair to call a range of 0.02 to 65.92 four-digit precision just because the upper bound was written with four digits.
In order to bound the states at a number n, it would need to assign probability zero to ever getting an upgrade allowing it to access log n bytes of memory. I don’t know how this zero-probability assignment would be justified for any n—there’s a non-zero probability that one’s model of physics is completely wrong, and once that’s gone, there’s not much left to make something impossible.
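For reference, the counting this leans on, stated in bits (the original says bytes, but that only changes a constant factor):

$$
k \text{ bits of memory allow at most } 2^{k} \text{ distinguishable states,}
\qquad \text{so bounding the states at } n \text{ forces } k \le \log_2 n .
$$

Any upgrade past that threshold is exactly what would have to receive probability zero.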
Er, now I see that Eliezer’s post is discussing finite sets of physical laws, which rules out the cosmological horizon diagonalization. But, I think this causal models as function mapping fails in another way: we can’t predict the n in n-valued future experiential states. Before the camera was switched, B9 would assign low probability to these high n-valued experiences. If B9 can get a camera that allows it to perceive color, it could also get an attachment that allows it to calculate the permittivity constant to arbitrary precision. Since it can’t put a bound on the number of values in the L states, the set is uncountable and so is the set of functions.
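For reference, the cardinality step being leaned on here is Cantor's diagonal argument, under the assumption that an experience mapping assigns one of at least two values to each of infinitely many future observations:

$$
\text{Given any enumeration } f_1, f_2, f_3, \dots \text{ of maps } \mathbb{N} \to \{0,1\},
\quad \text{the map } g(k) = 1 - f_k(k) \text{ differs from every } f_k .
$$

So no countable list covers even the two-valued mappings, let alone mappings with an unbounded number of values.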
But the set of causal models is not the set of experience mappings. The model where things disappear after they cross the cosmological horizon is a different model than standard physics, even though they predict the same experiences. We can differentiate between them because Occam’s Razor favors one over the other, and our experiences give us ample cause to trust Occam’s Razor.
At first glance, it seems this gives us enough to diagonalize models: one meter outside the horizon is different from model one, two meters is different from model two, and so on.
There might be a way to constrain this based on the models we can assign different probabilities to, given our knowledge and experience, which might get it down to a countable set, but how to do it is not obvious to me.
Causal models are countable? Are irrational constants not part of causal models?
Let’s say someone is eating pizza for 20% of their meals. Do you think that replacing pizza with Soylent would result in a worse diet?
First-years can’t cast AK for reasons of raw magical power, so an organization of first-years can’t use the Killing Curse as a membership criterion.