A stupid question…
If I ask people (n = several hundred to a thousand) to put a coin down on the table such that it wouldn’t roll away, heads or tails up… I’d expect the overall results to be near a 1:1 ratio of heads to tails. But it wouldn’t be as random as if I (or they) just tossed the coin on the table, right?
This is an interesting case. If people are free to place the coins as they wish, I wouldn’t be surprised if P(heads) > P(tails), due to biases about which way is “right side up”.
When a face appears on a coin, many people seem to think of it as the front of the coin; in numismatic circles, the obverse is typically “heads” if a head appears on only one side (although, no surprise, there are contentious debates about obverse vs. reverse in specific circumstances).
“Random”, in objective Bayesianism, just means that you have no information privileging one alternative over another. In this case, the reasons you don’t know the outcome of each individual trial are different, but the end result is the same. There are possibly differences in their Ap distributions, though, depending on how you model human behaviour...
“Random” doesn’t mean anything but “unpredictable”, and a possibly relevant question is “unpredictable by whom?”.
But yes, probably. (If you ask 1000 people for a number from 1 to 10 many more than 100 of them will say “7” etc.)
I think “random” does mean something more than “unpredictable”. It means something more like “independent of things you care about”. More precisely, that’s what it should mean in most places where it’s used.
(I’m not quite satisfied with this formulation; e.g., a “random” thing that always takes the same value is independent of everything, but you wouldn’t usually want to call it random. What we’re really trying to get at is “statistically indistinguishable from idealized randomness” but it would be nice to find a way of saying it that doesn’t appeal to an existing notion of randomness. Perhaps something like “incompressible, on average, given the accessible state of all the other things we care about”.)
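The “incompressible” criterion can be made concrete with an off-the-shelf compressor. This is my own rough illustration, not something from the thread; the exact compressed sizes depend on the compressor and level:

```python
import random
import zlib

# Rough illustration of "incompressible" as a proxy for randomness:
# a general-purpose compressor can barely shrink a random byte string,
# while a highly regular string compresses to almost nothing.
random.seed(1)
random_bytes = bytes(random.getrandbits(8) for _ in range(10_000))
constant_bytes = b"H" * 10_000

print(len(zlib.compress(random_bytes)))    # roughly 10_000: essentially incompressible
print(len(zlib.compress(constant_bytes)))  # a few dozen bytes: highly compressible
```

Note that this only catches statistical regularity a generic compressor can see; a pseudorandom stream would also look “incompressible” here, which is part of why pinning down randomness is hard.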
Imagine, if you will, a lottery that works as follows. Each lottery ticket bears a SHA-256 hash of (the ticket’s lottery numbers + a further string); the further string is not revealed until the time of the draw. When drawing time comes, the winning numbers and the further string are revealed on national TV, and if you think you might have a winning ticket you can bring it to have the hash checked.
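The ticket mechanism described here is essentially a hash commitment. A minimal sketch, where the function names and number format are my own assumptions rather than any real lottery protocol:

```python
import hashlib
import secrets

def make_ticket(numbers: str) -> tuple[str, str]:
    """Commit to `numbers` by hashing them together with a secret salt."""
    further_string = secrets.token_hex(16)   # revealed only at draw time
    commitment = hashlib.sha256((numbers + further_string).encode()).hexdigest()
    return commitment, further_string        # the ticket bears only the hash

def check_ticket(commitment: str, numbers: str, further_string: str) -> bool:
    """Once the numbers and further string are revealed, anyone can verify."""
    return hashlib.sha256((numbers + further_string).encode()).hexdigest() == commitment

ticket_hash, salt = make_ticket("04 08 15 16 23 42")
print(check_ticket(ticket_hash, "04 08 15 16 23 42", salt))   # True: a match
print(check_ticket(ticket_hash, "01 02 03 04 05 06", salt))   # False
```

The hash hides the numbers until the further string is revealed, which is why, in either scenario below, nobody can read the winning numbers off a ticket in advance.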
In Scenario 1, the winning numbers are chosen “at random”. In Scenario 2, you (and only you) have the magical power to make the winning numbers be the ones hashed onto your ticket. You don’t know the further string. You also don’t know what your ticket’s numbers are. You just know that after you perform your magical ritual the numbers drawn will match the numbers on your ticket.
The numbers drawn are still unpredictable. No one knows what numbers are on your ticket. (Let’s suppose that the tickets are made by some process that after making each ticket erases all evidence of what numbers have been hashed onto it.) But something is predictable, namely that the numbers drawn will match yours and you will win the lottery.
The lottery numbers in Scenario 2 are still “random” in some sense, but this is exactly the kind of situation you’re trying to avoid when you deliberately randomize things.
So, the answer to Romashka’s question is: those results would be less random-in-my-sense because they might correlate with interesting properties of the people in the experiment, which could be a problem if e.g. you were using these coin “tosses” to control something you’re doing with the people (e.g., which ones to use for the next phase of the experiment, or which of two things to ask them to do). They might well also diverge somewhat from the 1:1 ratio you’d expect from theoretical unbiased coin tosses (e.g., maybe most people prefer to put coins with their “heads” face upwards).
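The worry about correlation shows up clearly in a toy simulation. This is my own construction; the “tidy” trait and the 70% figure are invented purely for illustration:

```python
import random

random.seed(2)
people = [{"tidy": random.random() < 0.5} for _ in range(100_000)]
for p in people:
    # Invented assumption: tidy people place coins heads-up 70% of the time,
    # everyone else 50% of the time.
    p["heads"] = random.random() < (0.7 if p["tidy"] else 0.5)

def tidy_fraction(group):
    return sum(p["tidy"] for p in group) / len(group)

# Using the placements to assign people to groups biases the groups.
heads_group = [p for p in people if p["heads"]]
tails_group = [p for p in people if not p["heads"]]
print(round(tidy_fraction(heads_group), 2))  # ~0.58: the heads group skews tidy
print(round(tidy_fraction(tails_group), 2))  # ~0.38
```

A genuinely random assignment would put about 50% tidy people in each group; the coin-placement “randomization” smuggles the trait into the group structure.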
It means something more like “independent of things you care about”.
I don’t understand what that means. It sounds like something I would call “noise” (= “variation which I do not care about”), which is quite a different concept from “random”.
There is also “true” randomness, e.g. radioactive decay, which doesn’t seem to be related to whatever I might care about. And if you put yourself into the paws of Schrödinger’s cat, you might care a great deal about the trigger that breaks the poison vial, but does that make it not random?
What we’re really trying to get at is “statistically indistinguishable from idealized randomness”
As you yourself point out that’s entirely circular and, besides, I have no idea what “idealized randomness” is.
Imagine, if you will, a lottery that works as follows.
You’re basically talking about randomness as that which lies beyond the limits of (current) knowledge. Didn’t you just come back to randomness meaning “unpredictable”?
Wouldn’t idealized randomness mean utter lack of causality?
Well, there is what is usually called quantum randomness. While many common kinds of randomness represent just lack of knowledge, contemporary physics says that quantum randomness (e.g. how much time will pass before a particular unstable atom decays) is different because it is impossible in principle to predict it. You can probably call it “utter lack of causality”.
As to “idealized”, I don’t know. Depending on which framework you pick, the notions of “idealized randomness” might well differ.
Hmm, it seems that I do not grok synergism either… I could never convince myself that a ‘synergistic’ outcome is ‘more than the sum’ as opposed to ‘very different from the sum’. That is, I can imagine some chemical catalyst system which processes substrate faster than the combined rates of its subsystems, but… in the kind of biology I am used to, the ‘synergistic’ outcome usually differs from the theoretical ‘sum’ in more ways than one, and the ‘sum’ might not even exist...
I mean, it’s hard for me to see why two steps of randomness are more random than one. Yet your words are, somehow, an answer… The conclusion is probably that I have zero knowledge.
I’m not exactly sure what you mean by “as random.”
It may well be that there are discernible patterns in a sequence of manually simulated coin flips that would allow us to distinguish such sequences from actual coin flips. The most plausible hypothetical examples I can come up with would result in a non-1:1 ratio… e.g., humans having a bias in favor of heads or tails.
Or, if each person is laying a coin down next to the previous coin, such that they can see the pattern thus far, we might find any number of pattern-level biases… e.g., if told to simulate randomness, humans might be less than 50% likely to select heads after seeing a series of heads-up coins, whereas if not told to do so, they might be more than 50% likely.
It’s kind of an interesting question, actually. I know there’s been some work on detecting faked test scores by looking for artificial-pattern markers in the distribution of numbers, but I don’t know if anyone’s done the equivalent for coin flips.
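One concrete “artificial-pattern marker” for coin flips is the alternation rate: people simulating randomness tend to switch between heads and tails more often than a fair coin does. A quick sketch of my own, not taken from any of the work alluded to above:

```python
import random

def alternation_rate(flips: str) -> float:
    """Fraction of adjacent pairs that differ (e.g. 'HT' or 'TH')."""
    switches = sum(a != b for a, b in zip(flips, flips[1:]))
    return switches / (len(flips) - 1)

random.seed(0)
fair = "".join(random.choice("HT") for _ in range(10_000))
print(round(alternation_rate(fair), 2))  # ~0.5 for a fair coin
print(alternation_rate("HTHTHTHTHT"))    # 1.0: over-alternation is a giveaway
```

A sequence whose alternation rate sits well above 0.5 is a plausible candidate for having been produced by someone trying to “look random”, since genuine flips produce longer runs than people intuitively expect.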
Thank you. I realized, as soon as I posted it, that the method of obtaining the sequence would not matter (as the previous commenter rightly said), but somehow, the ‘feeling of a question’ remained. I was not thinking of showing them part of ‘the sequence so far’… But it might be fun to determine whether there is any effect on the subject’s choice of knowing this ‘flip’ is a part of a pattern (or not knowing it), of the composition of the revealed pattern, and maybe—if there is an effect—the length of the washout period...
I mean, it’s only a coin flip! The preceding choices should have no bearing on it. It’s, like, the least significant choice you can ever make...
If none of the participants can predict or influence the 50/50 outcome, then it’s random. The procedure for generating the state doesn’t matter; what matters is that the individual events cannot be predicted and that the aggregate converges toward the expected distribution.