In the name of supporting people actually doing stuff:
Scott’s IRB Nightmare stems from his survey taking place within a privileged patient-provider relationship, which is covered by HIPAA, which in turn requires fairly stringent data handling. If you are not a doctor asking your own patients in a hospital, this does not apply to you.
Yes, you are allowed to “just go out and ask a whole bunch of people stuff”. People can, in fact, give away whatever information they choose to. People are allowed to enter (mostly) any trade. People are free to do stuff.
For people <18, you need parental consent.
There are, like, hundreds of tools for this, both for finding people and for nailing the questions. Google Surveys currently samples best across the US (it successfully predicted the 2016 election results). It’s a good fit if you have a specific hypothesis you want to put to 1000+ people.
The more quantitative you get, the less signal each answer carries, though at higher precision. Criticism of surveys and statistics generally comes from attempts to determine “things about humanity in general”, which is also (somewhat) useful, but requires very large N and _very_ methodical sampling, experiment design, etc.
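To put a number on “very large N”: the textbook margin-of-error formula for a sample proportion, n ≈ z²·p(1−p)/e², is where the 1000+ rule of thumb above comes from. A minimal sketch (standard statistics, not anything from this thread):

```python
import math

def required_sample_size(margin: float, z: float = 1.96) -> int:
    """Respondents needed for a +/- `margin` interval at ~95% confidence (z = 1.96)."""
    p = 0.5  # worst-case variance for a yes/no proportion
    return math.ceil(z * z * p * (1 - p) / margin ** 2)

print(required_sample_size(0.03))  # 1068 -- roughly the "1000+" figure above
print(required_sample_size(0.01))  # 9604 -- tighter claims get expensive fast
```

Halving the margin of error quadruples the required sample, which is why claims about “humanity in general” demand such large and methodical samples.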
Generate qualitatively, validate quantitatively. The vast majority of the effort goes into actually locating the hypothesis. Before building a research thesis in your room, go out and do the simplest thing first: talk with people, in person. There’s a learning curve before you can formulate meaningful hypotheses.
Ask yourself what rent the answer to a specific question pays. What does it say about reality if it turns out to be A rather than B? How does that interact with neighbouring questions?
And: what, specifically, do you wish to achieve here? Qualitative answers to some of the questions above from Bay Area people, along with some synthesis, would be extremely informative (to me at least).
A good starting point for this might be cultural anthropology, but instead of getting a book, here’s an MVP: get a tape recorder, ask 50 of your friends the questions above, put the answers into a spreadsheet, and write up a synthesis as an LW post. This is extremely informative for, e.g., measuring local shifts in the Overton window and finding common ground (and how that ground is shifting), and it is sorely missing.
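If it helps, here is a minimal sketch of the spreadsheet step, assuming each interview is transcribed into an answers.csv with columns person, question, answer, theme, where theme is a label you hand-code per answer (the file name and schema are hypothetical, not prescribed above):

```python
import csv
from collections import Counter

# Tally hand-coded themes per question; assumes answers.csv has columns:
# person, question, answer, theme (schema is illustrative, not prescribed).
themes = Counter()
with open("answers.csv", newline="") as f:
    for row in csv.DictReader(f):
        themes[(row["question"], row["theme"])] += 1

# The most common themes per question are the raw material for the synthesis post.
for (question, theme), count in themes.most_common(20):
    print(f"{count:3d}  {question}: {theme}")
```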
Why in-person? People who persistently fill out text boxes on web pages are a heavily biased sample, in income and in mental health; most people simply don’t do that. In person, you’re staking the interview on personal reputation, which bridges the addressability gap and makes a much wider variety of people’s voices accessible.
To avoid the pet-theory problem: ask open-ended questions (e.g. the job/ambition ones above are good ones). Don’t lead; capture the raw material.
Do this simple thing first, before embarking on specific hypothesis formulation, and post the results!
Have you done this? If so, what were the questions, what were the answers, and are they published anywhere?
In the context of customer development for product research, yes. For good questions there, see e.g. The Mom Test by Rob Fitzpatrick and the lean customer development field in general. That work was solving for the general question “will developing x be paid for?”; being wrong on that particular question is expensive.
> There are, like, hundreds of tools for this, both for finding people and for nailing the questions. Google Surveys currently samples best across the US.

Could you list some good ones (other than Google Surveys)?
Thanks!