I sneeze quite often. When someone says ‘bless you’, my usual response is ‘and may you also be blessed’. I’ve heard a number of people who had apparently never wondered before say ‘why do we say that?’ after receiving that response.
kybernetikos
I have all of the English Wikipedia available for offline searching on my phone. It’s big, sure, but it doesn’t fill the memory card by any means (and this is just the default one that came with the phone).
For offline access on a Windows computer, WikiTaxi is a reasonable solution.
I’d recommend that everyone who can do so carry around an offline version of Wikipedia. I consider it part of my disaster preparedness, not to mention the fun of learning new things by hitting the ‘random article’ button.
A parole board considers the release of a prisoner: Will he be violent again?
I think this is the kind of question that Miller is talking about. Just because a system is correct more often doesn’t necessarily mean it’s better.
For example, if the human experts released more people who went on to commit relatively minor violent offences, while the SPRs do this less often but are more likely to release prisoners who go on to commit murder, then there would be legitimate discussion over whether the SPR is actually better.
I think this is exactly what he is talking about when he says
Where AIs compete well, they generally beat trained humans fairly marginally on easy (or even most) cases, and then fail miserably on borderline or novel cases. This can make it dangerous to use them if the extreme failures are severe.
Whether or not there is evidence that this is a real effect I don’t know, but to address it, what you really need to measure is the total utility of outcomes rather than accuracy.
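As a toy sketch of the accuracy-vs-utility point (all the numbers here are hypothetical, invented purely for illustration): a predictor whose released prisoners reoffend less often overall can still be worse once you weight the outcomes by how bad they are.

```python
# Hypothetical figures per 1000 parole decisions:
# (minor violent reoffences, murders) among released prisoners.
human = (50, 1)
spr = (30, 4)

def error_count(outcome):
    """Raw error count — the 'accuracy' view."""
    return sum(outcome)

def utility_cost(outcome, minor_cost=1, murder_cost=100):
    """Outcome-weighted cost — the 'total utility' view.

    The cost weights are made up; the point is only that murders
    are weighted far more heavily than minor offences.
    """
    minor, murders = outcome
    return minor * minor_cost + murders * murder_cost

print(error_count(human), error_count(spr))    # SPR looks better: 51 vs 34 errors
print(utility_cost(human), utility_cost(spr))  # but costs more: 150 vs 430
```

On raw error counts the SPR wins; on outcome-weighted cost the human experts do. Which measure you optimise is exactly the “legitimate discussion” above.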
that I started to wonder why adults would want children to believe in Santa Claus, and whether their reasons for it were actually good.
I think that lots of people have a kind of compulsion to lie to anyone they care about who is credulous, particularly children, about things that don’t matter very much. I assume it’s adaptive behaviour, to try to toughen up their reasoning skills on matters that aren’t so important—to teach them that they can’t rely on even good people to tell them stuff that is true.
The good you do can compound too. If you save a child’s life at $500, that child might go on to save other children’s lives. I think you might well get a higher rate of interest on the good you do than 5%. There will be a savings rate at which you should save instead of give, but I don’t think we’re near it at the moment.
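As a rough sketch of that comparison (the 7% ‘interest rate on good’ is a number I’ve made up for illustration, as is the 20-year horizon): giving now beats saving first whenever the good compounds faster than your savings do.

```python
def value_of_giving_now(amount, good_rate, years):
    # Good done now compounds over the horizon
    # (saved children go on to save others, etc.).
    return amount * (1 + good_rate) ** years

def value_of_saving_first(amount, savings_rate, years):
    # Invest first and donate the grown sum at the end; the good done
    # then has no time left to compound within the horizon.
    return amount * (1 + savings_rate) ** years

amount, years = 500, 20
give_now = value_of_giving_now(amount, 0.07, years)      # hypothetical 7% on good
save_first = value_of_saving_first(amount, 0.05, years)  # 5% savings rate
print(give_now > save_first)  # True: giving now wins when good_rate > savings_rate
```

In this toy model the answer flips only when the savings rate exceeds the rate at which good compounds, which is the crossover point described above.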
Most of us allocate a particular percentage to charity, despite the fact that most people would say that nearly nothing we spend money on is as important as saving children’s lives.
I don’t know whether you think it’s that we overestimate how much we value saving children’s lives, or underestimate how important Xbox games, social events, large TVs and eating tasty food are to us. Or perhaps you think it’s none of that, and that we’re simply being irrational.
I doubt that anyone could consistently live as if the difference between renting a nice flat and renting a dive was one life per month, or as if halving normal grocery consumption for a month saved a child’s life that month, etc. If that’s really the aim, we’re going to have to do a significant amount of emotional engineering.
I also want to stick up for the necessity of analysing the way that a charity works, not just what they do. For example, charities that employ local people and local equipment may save fewer people per dollar in the short term, but may be less likely to create a culture of dependence, and may be more sustainable in the long term. These considerations are important too.
I have to admire the cunning of your last sentence.
Or have I accidentally defected? I can’t tell.
EDIT: I think the ‘wizened’ correction was intended to be a joke. When I read your piece originally the idea of you ‘wizening up’ made me smile, and I suspect that the corrector just wanted to share that idea with others who may have missed it.
I suppose the goal you were going to spend the money on would have to be of sufficient utility if achieved to offset that in order to make the scenario work. Maybe saving the world, or creating lots of happy simulations of yourself, or finding a way to communicate between them.
Imagine a raffle where the winner is chosen by some quantum process. Presumably under the many worlds interpretation you can see it as a way of shifting money from lots of your potential selves to just one of them. If you have a goal you are absolutely determined to achieve and a large sum of money would help towards it, then it might make a lot of sense to take part, since the self that wins will also have that desire, and could be trusted to make good use of that money.
Now, I wonder if anyone would take part in such a raffle if all the entrants who didn’t win were killed on the spot. That would mean that everyone would win in some universe, and cease to exist in the other universes where they entered. Could that be a kind of intellectual assent vs belief test for Many Worlds?
Yeah, that is a problem with the illustration. However, I don’t think it’s completely devoid of use.
Taking a risk based on some knowledge is a very strong sign of having internalised that knowledge.
I’ve heard this contrasted as ‘knowledge’, where you intellectually assent to something and can make predictions from it, and ‘belief’, where you order your life according to that knowledge, but this distinction is certainly not made in normal speech.
A common illustration of this distinction (often told by preachers) is that Blondin the tightrope walker asked the crowd if they believed he could safely carry someone across Niagara Falls on a tightrope, and almost the whole crowd shouted ‘yes’. Then he asked for a volunteer to become the first man ever so carried, at which point the crowd shut up. In the end the only person he could find to accept was his manager.
If you want to read the full thing, rather than just the description, you can download the ebook here. I certainly enjoyed it.
There’s a fairly obvious answer to that stuff in my opinion. Ventus by Schroeder (sci-fi) covers it nicely. It would be a structure set up by the Atlanteans for control of nature, probably before they ascended and left Earth for the stars.
Edit: It occurs to me that the other possibility would be a simulation, originally invented by the Atlanteans for them to upload themselves into, or perhaps Muggles were supposed to be NPCs.
Just because something only exists at high levels of abstraction doesn’t mean it’s not real or explanatory. Surely the important question is whether humans genuinely have preferences that explain their behaviour (or at least whether a preference system can occasionally explain their behaviour—even if their behaviour is truly explained by the interaction of numerous systems) rather than how these preferences are encoded.
The information in a JPEG file that indicates a particular pixel should be red cannot be analysed down to a single bit that doesn’t do anything else, but that doesn’t mean there isn’t a sense in which the red pixel genuinely exists. Preferences could exist and be encoded holographically in the brain. Whether you can find a specific neuron or not is completely irrelevant to their reality.
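To make the JPEG point concrete, here’s a small sketch (plain Python, a naive textbook DCT rather than a real JPEG codec) showing that in an 8×8 JPEG-style block, a single pixel’s value is spread across all 64 frequency coefficients rather than living in any one place:

```python
import math

N = 8  # JPEG transforms the image in 8x8 blocks

def dct2(block):
    """Naive 2D DCT-II, the transform JPEG applies to each block."""
    def alpha(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# A block that is black except for one bright pixel at (3, 3).
block = [[0] * N for _ in range(N)]
block[3][3] = 255

coeffs = dct2(block)
nonzero = sum(1 for row in coeffs for c in row if abs(c) > 1e-9)
print(nonzero)  # 64: every frequency coefficient carries some of that pixel
```

No single coefficient “is” the pixel, yet the pixel is perfectly real and perfectly recoverable — which is the sense in which a preference could be real without having its own neuron.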