I was mostly hoping for an explanation of why you think compensation and monetary incentives are among the first problems you are considering. A common startup failure mode (and a hallmark of would-be technocrats' ineffectual bloviating) is spending a bunch of energy on mechanism and incentive design to handle massive scale, before even doing basic functionality experiments. I hope I’m wrong, and I’d like to know your thinking about why I am.
I may well be over-focused on that aspect of the discussion—feel free to tell me I’m wrong and you’re putting most of your thought into mechanisms for tracking, sharing, and breaking down problems into smaller pieces. Or feel free to tell me I’m wrong and incentives are the most important part.
Yeah, I think we’re actually thinking much more broadly than it came across. We’ve been thinking about this for 4 months along many dimensions. Ruby will be posting more internal docs soon that highlight different avenues of thinking. What’s left are things that we’re legitimately uncertain about.
I had previously posted a question about whether questions should be renamed “confusions,” which didn’t get much engagement and which I ultimately don’t think was the right approach, but which I considered potentially quite important at the time.
This is a very good post.
Another important example:
But it’s possible for hidden problems to lurk inside the problem itself, and finding them is quite challenging.
What if your intuitions come from computer science, machine learning, or game theory, and you can exploit them? If you’re working on something like the nature of general intelligence, or the problem of problem-solving itself, how do you get started?
When I see a problem framed as a search algorithm, my intuition sends the message that something is wrong, drawing on all of my sense of how this kind of work gets done and how it usually goes wrong.