“latency”—you’re using this like I’d use “impact” or “influence”.
Good link to Will_Newsome’s nightmare.
What you’re saying is fine—predicting the future is hard. But I think “this sort of reasoning is all we are going to have until we have an AI” is unwarranted.
Latency here means propagation delay. Until you have propagated through the hard path at all, the shorter paths are the only paths you could have propagated through. There is no magical way of skipping multiple unknown nodes in a circuit and still obtaining useful values. It would be easy to explain in terms of electrical engineering: the calculation of how beliefs propagate through an inference graph is homologous to the calculation of signal propagation through a network of electrical components, and one can construct an equivalent circuit for a specific reasoning graph.
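To make the circuit analogy concrete, here is a minimal sketch (my own illustration, with made-up node names and latencies, not a reconstruction of anyone's actual model): beliefs are nodes, inference steps are directed edges, each edge carries a latency (the work needed to propagate across it), and the earliest time a conclusion can be reached from the evidence is a shortest-path delay.

```python
import heapq

def propagation_delay(graph, source):
    """graph: {node: [(neighbor, latency), ...]}.
    Returns the earliest arrival time of a signal from `source`
    to every reachable node (Dijkstra's algorithm)."""
    delay = {source: 0.0}
    frontier = [(0.0, source)]
    while frontier:
        t, node = heapq.heappop(frontier)
        if t > delay.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, latency in graph.get(node, []):
            arrival = t + latency
            if arrival < delay.get(neighbor, float("inf")):
                delay[neighbor] = arrival
                heapq.heappush(frontier, (arrival, neighbor))
    return delay

# Two routes from evidence to a conclusion: a quick shallow path and a
# long chain of intermediate lemmas. Until the long chain has actually
# been worked through, only the shallow path has delivered any value.
graph = {
    "evidence": [("shallow_heuristic", 1.0), ("lemma_1", 10.0)],
    "shallow_heuristic": [("conclusion", 1.0)],
    "lemma_1": [("lemma_2", 10.0)],
    "lemma_2": [("conclusion", 10.0)],
}
print(propagation_delay(graph, "evidence"))
# conclusion arrives at t=2.0 via the shallow path; the hard path
# would not deliver it until t=30.0, and nothing can shortcut that.
```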
The problem with ‘hard’ is that it does not specify how hard. Usually ‘hard’ is taken to mean ‘still doable right now’, but it can be arbitrarily harder than that, even for the most elementary propagation through a single path.
I still have no idea what your model is (“belief propagation graph with latencies”). It’s worth spelling out rigorously, perhaps aided by a simpler example. If we’re to talk about your model, then we’ll need you to teach it to us.
In very short summary (which is also sort of insulting, so I am having second thoughts about posting it):
Math homework takes time.
See, here is one thing I never really got about LW. You have a blacklist of biases, which is strange, because logic is known to work via a whitelist: rigour in using only the whitelisted forms of reasoning. Suppose you do get rid of the biases (opinions on this really vary). You still haven’t gained some ultra power that instantly gets you through the enormous math homework that is prediction of anything, to any extent whatsoever. You can get half credit if you at least worked some of the way from the facts to the final estimate, even if you didn’t do everything required. But even that has a minimum amount of work below which nothing has been done that would permit even a silly guess at the answer. The answer doesn’t even begin to gradually improve before a lot of work, even if you do the numbers by how big they feel.

Now, there’s this reasoning: if it is not biases, then it must be the answer. No; it could be neuronal noise, or other biases, or the residual weight of biases, or the negation of biases from overcompensation. (That happens to the brightest. The Nobel Prize Committee once tried not to be biased against gross, unethical-looking medical procedures that seem like they can’t possibly do any good, got itself biased the other way, and gave the Nobel Prize to the inventor of lobotomy, a crank pseudoscientist with no empirical support, and really quickly, too.)
You can’t use pure logic to derive the inputs to your purely logical system. That’s where identifying biases comes in.