Computational theory of identity, so some large number of exact copies of the same individual experiencing the same thing don’t sum; they only count as one instance?
HungryHobo
This seems like a weird mishmash of other hypotheticals on the site, I’m not really seeing the point of parts of your scenario.
I can sort of imagine a world where some extremely well-funded terrorists engineer/manufacture a few dozen really nasty diseases and release them in hundreds or thousands of locations at once (though most terrorists wouldn’t, because such an attack would hurt their own side as much as or more than anyone else). That might seriously hurt society as a whole, but most of the time the backlash against terrorism seems more dangerous than the actual terrorists.
Very clearly put.
Some companies (like up until recently Apple) didn’t pay much in the way of dividends but instead pumped money back into company growth to try to increase the value of their shares. I think this may have been the kind of gain gjm was thinking of where you buy hoping the value will increase rather than banking on the company handing out good dividends.
Terrorists are a rounding error. Sure, some day they’ll take out a city with a nuke but in history cities have been wiped out many many times without taking their parent civilization with them.
Sure, if you could coordinate with almost all players in a market and got them to agree to give up financial gain to achieve your goals without any defecting then it would work. Though that’s a mighty big “if” in any large market.
You seem to be implicitly assuming that the only value of a stock is its potential future increase in price; for most stocks, their dividends and stability are largely what set their value. Unless the divestment activists control a really, really massive fraction of the market, that’s not going to matter in any way, shape, or form.
Losing actual customers as with tobacco and fossil fuels absolutely can hurt a company. Losing sales hurts, it’s only divestment that’s irrelevant.
Only if they do so in total secrecy.
If you’re an analyst in a big trading firm, you know that, say, an oil company exists and is currently valued at a certain level by the market, taking into account the available information about its business and profits.
Later that company is targeted by divestment activists. A big university is pressured by economically illiterate students into selling all its stock in the oil company.
The analysts note this, know that the company is probably being slightly undervalued as a result, and buy up some stock.
There are actually Sin Funds that target stock of companies like tobacco, fossil fuel companies etc and invest in them on the basis that they’re likely slightly undervalued due to other “moral” investors avoiding them.
Thus the only effect of divestment is to transfer a moderate amount of money from yourself to people who are slightly less ethical. It doesn’t hurt the company at all.
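The mechanism can be sketched with a toy dividend-discount valuation. Every number below is hypothetical, and the Gordon zero-growth model is just one simple way to put a figure on "fundamental value":

```python
# Sketch of why divestment transfers money rather than hurting the firm:
# the stock's fundamental value comes from its dividend stream, which
# divestment doesn't touch. All numbers are made up for illustration.
dividend = 5.0        # annual dividend per share (hypothetical)
discount_rate = 0.08  # buyers' required return (hypothetical)

fair_value = dividend / discount_rate   # Gordon model, zero growth
divested_price = fair_value * 0.95      # activists' selling dips the price 5%

# a "sin fund" buying at the dip earns an above-market yield
buyer_return = dividend / divested_price
print(f"fair value: {fair_value:.2f}, dip price: {divested_price:.2f}")
print(f"buyer's yield: {buyer_return:.2%} vs required {discount_rate:.0%}")
```

The dividends keep flowing regardless of who holds the shares, so the discount created by the sellers is simply captured by the buyers.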
“Error rendering the page: Couldn’t find page”
Why must the oracle continue to believe its messages weren’t read?
In the example you give, I’m guessing the reason you’d want an oracle to believe with cold certainty that its messages won’t be read is to avoid it trying to influence the world with them, but that doesn’t require that it continue to believe so later. As long as, while it’s composing and outputting the message, it believes solidly that the message will never be read, and nothing can move that belief from zero, that’s fine. That does not preclude it perfectly accepting that its past messages were in fact read and basing its beliefs about the world on that. That knowledge, after all, cannot shift the belief that this next message, unlike all the others, will never, ever be read.
Of course, that raises the question of why an oracle would even be designed as a goal-based AI with any kind of utility function. Square peg, round hole and all that.
The first theory is falsifiable as long as you’re willing to let enough people die. Collect blood samples from everyone before they take the medicine. Sequence their full exome and put it on file.
Once you have a few thousand dead and a few thousand survivors, you should be able to narrow candidates down to a few dozen genes.
Make predictions about who will die out of the next few hundred who take the pill. Bam: it turns out to be an eye-color gene having some weird effect on a vital pathway the drug interacts with.
Alternatively, if it’s not genetic at all: if single members of twin pairs taking the drug died at rates inconsistent with the expected number of mutations between twins, then we could be pretty sure it’s not genetic. Or perhaps it’s only partially genetic; again, twins and siblings would let us work this out.
Seems pretty falsifiable.
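The dead-versus-survivors comparison above is essentially a case-control association test. A minimal sketch, using entirely made-up gene names and carrier counts and a simple Fisher's exact test per gene:

```python
# For each candidate gene, compare how often a variant appears among the
# dead vs. the survivors. Genes and counts are synthetic illustrations.
from scipy.stats import fisher_exact

# (carriers, non-carriers) of a rare variant, among deaths and survivors
observed = {
    "GENE_A": {"dead": (40, 10), "alive": (5, 945)},    # strong signal
    "GENE_B": {"dead": (25, 25), "alive": (480, 470)},  # no signal
}

for gene, counts in observed.items():
    table = [list(counts["dead"]), list(counts["alive"])]
    odds_ratio, p = fisher_exact(table)
    # Bonferroni-correct for the number of genes tested
    flag = "candidate" if p < 0.05 / len(observed) else "no evidence"
    print(f"{gene}: p={p:.2e} -> {flag}")
```

A real exome-wide scan would test thousands of genes and need a much harsher multiple-testing correction, but the falsification logic is the same: the candidate gene must then predict deaths in the next cohort.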
I’m curious whether there’s a term for a variant where you assume that when someone definitely shares the information you have, they have also seen the implications of that information. That is, where you and someone else both have the same information, but you assume they have also drawn its implications.
In the past I’ve found myself making too many assumptions about what others have concluded from information available to both of us, and skipping over nodes in the reasoning chain because I believe the other person to have already passed them, which can lead to some confused conversations and backtracking.
Examples would be where the details of some chemical reaction have been laid out, or the rules of some system or state machine are laid out, and they imply a conclusion, but the other person hasn’t followed the implication.
Or where I know person A is in situation X, and person B knows that person A is in situation X, and I talk to B with the assumption that person A will be taking the most obvious responses to X.
My SO has rightly scolded me in the past for over-assuming what will be obvious to the people I’m dealing with, and what I assume them to have considered if they’re domain experts, particularly when dealing with anything financial, because it’s come back to bite us in the past.
Since we’re now about a year and a half on, how did this go?
Example:
The Big Mac Index has been used to compare prices across countries, as we have noted before. Argentina currently has very high prices due to a combination of inflation and a strong economy, and this shows up glaringly in the Big Mac Index.
Tyler Cowen reports (translating a Spanish original) that the Argentinian government has persuaded McDonalds to lower the price of the Big Mac (relative to other McDonalds items, and relative to competing hamburgers), so that Argentina’s Big Mac Index becomes more competitive.
http://www.statschat.org.nz/2011/11/16/goodharts-law-and-brazilian-hamburgers/
In other words, the real price of the Big Mac rose nearly twice as much as the official statistics were willing to admit, in Argentina of course. That’s not right, so the government sprang into action. The minister of the commerce department “persuaded” McDonald’s to price the Big Mac at $16, while other sandwiches at the chain are in the $21 to $23 range.
The outlets now keep the Big Mac well hidden.
http://marginalrevolution.com/marginalrevolution/2011/11/sentences-to-ponder-32.html
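The arithmetic being gamed is simple: the local Big Mac price implies a purchasing-power-parity exchange rate, which gets compared against the market rate. A sketch with illustrative numbers (not the actual 2011 Argentine figures):

```python
# How the Big Mac Index reads a price. All values are hypothetical.
big_mac_local = 20.0   # local price in pesos (hypothetical)
big_mac_us = 4.0       # US price in dollars (hypothetical)
market_rate = 4.2      # market rate, pesos per dollar (hypothetical)

implied_ppp = big_mac_local / big_mac_us   # pesos per dollar
valuation = implied_ppp / market_rate - 1  # > 0 means peso looks overvalued

print(f"implied PPP: {implied_ppp:.2f} pesos/USD")
print(f"currency looks {valuation:+.0%} valued vs the dollar")
# Forcing big_mac_local down makes implied_ppp, and hence the apparent
# overvaluation/inflation, look smaller: Goodhart's law in action.
```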
Putting a human face on things does seem to be terribly important to get smaller donors to donate long term.
My own mother has been donating for years to a particular charity which puts young girls in a particular third-world country, who did very well in end-of-primary-school exams but live in absolute poverty, through secondary school. Donors are explicitly paired with a child and exchange letters twice a year. (Donors are asked to write something or other, even if it’s just a postcard, because it scares the girls if their donor goes silent, even though the charity doesn’t kick kids out if their donor dies but rather pairs them with a new donor.) Donors also cannot “overfund” one individual, to avoid jealousy and unequal treatment based on funder.
I doubt the scheme is the absolute best bang for your buck though it probably does pretty well if you value education highly.
It amounts to a couple of hundred euro per year, but they have exceptionally good donor retention, with some donors explicitly putting aside enough in their will to put their girl through the rest of her education.
They’re not sending money into a generic money-hole; they’re taking care of an individual who they’ve talked with and who dots her ’i’s with a squiggly line and draws faces in her margins in purple pencil.
Perhaps ubiquitous cheap tech will allow some big problems to be handled in such a manner as to give feedback to donors. Tell them about the million people in the refugee camp but put a ring around a half dozen families who are getting the malaria nets you funded and the vaccines you paid for.
Right now coordination in those situations is such a nightmare as to make that impossible but I could imagine cheap electronics solving communication long before solving refugee camps.
I like that, though it would probably need to be somewhat interactive. Either that or you’d need a friend to do some of the call-and-response bits.
Probably most suitable if the audience includes lots of children.
You: “Hi everyone.
Today I’m going to be talking about some of the important concepts in science like falsifiability …. etc.
To help me I’ve brought John here and my Dragon.”
John: “What dragon? I don’t see any dragon.”
You: “This dragon” [gesture at empty space] “I should probably have mentioned, he’s invisible”
John: “… OK, so he’s invisible. Let’s see what his scales feel like.” [wave hand through empty air]
And so on
Wikipedia lists “all swans are white” as an example of a falsifiable statement, but it is not practical enough. To prove that all swans are white would require observing all the swans in the world.
Something being falsifiable and something being universally possible to check are two different things.
In theory you could falsify that statement after checking only a single swan if it happens to be a black swan.
Conservation of energy is falsifiable: if you found some way of creating energy without taking it from elsewhere, you would falsify it. However, it isn’t practical to check every cubic meter of space in the universe to see if it applies everywhere.
There’s also the old invisible dragon example from Sagan:
“A fire-breathing dragon lives in my garage.”
Suppose … I seriously make such an assertion to you. Surely you’d want to check it out, see for yourself….
“Show me,” you say. I lead you to my garage. You look inside and see a ladder, empty paint cans, an old tricycle—but no dragon.
“Where’s the dragon?” you ask.
“Oh, she’s right here,” I reply, waving vaguely. “I neglected to mention that she’s an invisible dragon.”
You propose spreading flour on the floor of the garage to capture the dragon’s footprints.
“Good idea,” I say, “but this dragon floats in the air.”
Then you’ll use an infrared sensor to detect the invisible fire.
“Good idea, but the invisible fire is also heatless.”
You’ll spray-paint the dragon and make her visible.
“Good idea, except she’s an incorporeal dragon and the paint won’t stick.”
And so on. I counter every physical test you propose with a special explanation of why it won’t work.
Now, what’s the difference between an invisible, incorporeal, floating dragon who spits heatless fire and no dragon at all? If there’s no way to disprove my contention, no conceivable experiment that would count against it, what does it mean to say that my dragon exists? Your inability to invalidate my hypothesis is not at all the same thing as proving it is true. Claims that cannot be tested, assertions immune to disproof are veridically worthless, whatever value they may have in inspiring us or in exciting our sense of wonder. What I’m asking you to do comes down to believing, in the absence of evidence, on my say-so.
In a world where it was indeed perfectly even, let’s say a world with a highly regimented system of prejudice where everyone’s skin tone is carefully measured and the same restrictions are then applied across the country, the effects of that prejudice would not show up as environmental variation.
But if prejudice isn’t perfectly even, or isn’t measuring the same things everywhere, then you’d expect it to vary.
If one twin grows up in a town where most of the residents still oppose miscegenation while the other ends up in a town with far less prejudice, then the difference is going to show up as environmental. Or if they have the same accent and same skin tone, but one grows up in a town where people are more prejudiced about cultural markers like accent and dress while the other grows up in a town where they care more about biological markers, the difference would show up as environmental.
Also, it’s common to run PCA (principal component analysis) when dealing with samples from different ethnic origins, to make it easy to spot when you’ve simply flagged up a variant linked to one group.
Example:
If your flagged variant overlaps almost totally with only one group, that can be a sign that your analysis has been confounded, though overlapping a little more with one group than with others is to be expected.
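The PCA check can be sketched in a few lines. This toy example generates synthetic genotypes for two populations with different baseline allele frequencies (everything here is simulated, not real data) and shows the top principal component picking out ancestry, which is exactly the axis a confounded "hit" would line up with:

```python
# Project genotypes onto principal components; if a flagged variant
# separates samples along the same axis as ancestry, suspect confounding.
import numpy as np

rng = np.random.default_rng(0)
# two synthetic populations, 50 SNPs, differing baseline allele frequencies
pop1 = rng.binomial(2, 0.1, size=(100, 50))
pop2 = rng.binomial(2, 0.6, size=(100, 50))
genotypes = np.vstack([pop1, pop2]).astype(float)

# PCA via SVD on the centered genotype matrix
centered = genotypes - genotypes.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ vt[0]  # each sample's coordinate on the top component

labels = np.array([0] * 100 + [1] * 100)  # true population membership
# PC1 should split the two populations almost perfectly
correlation = np.corrcoef(pc1, labels)[0, 1]
print(f"|corr(PC1, population)| = {abs(correlation):.2f}")
```

In practice the top components are then included as covariates in the association test so that ancestry-linked variants stop masquerading as causal hits.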
I find that about as convincing as “if you see a watch there must be a watchmaker” style arguments.
There are a number of ways theorized to test if we’re in various kinds of simulation and so far they’ve all turned up negative.
String theory is famously bad at being usable to predict even mundane things even if it is elegant and “flat” is not the same as “infinite”.
One meta-hazard would be that “community hazards” could end up defined far too broadly, encompassing anything that might make some people feel uncomfortable, and simply become a defense of the sacred values of the people assessing what should constitute “community hazards”.
Or worse, that the arguments for one set of positions could get classified as “community hazards” such that, to use a mind-killing example, all the pro-life arguments get classified as “community hazards” while the pro-choice ones do not.
So it’s probably best to be exceptionally conservative about what you’re willing to classify as a “community hazard”.