Why not just say e is evidence for X if P(X) is not equal to P(X|e)?
Incidentally, I don’t really see the difference between probabilistic dependence (as above) and entanglement. Entanglement is dependence in the quantum setting.
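Here is a quick toy sketch of that definition in Python (the joint distribution is invented purely for illustration, nothing here is from the original comment): compute P(X) and P(X|e) from a small joint table and check whether they differ.

```python
# Toy check of "e is evidence for X iff P(X) != P(X | e)".
# The joint distribution P(X, E) below is made up for illustration.

joint = {
    (0, 0): 0.30, (0, 1): 0.10,   # (x, e) -> probability
    (1, 0): 0.20, (1, 1): 0.40,
}

def p_x(x):
    """Marginal P(X = x)."""
    return sum(p for (xi, _), p in joint.items() if xi == x)

def p_x_given_e(x, e):
    """Conditional P(X = x | E = e)."""
    p_e = sum(p for (_, ei), p in joint.items() if ei == e)
    return joint[(x, e)] / p_e

# E = 1 is evidence about X = 1 exactly when these two quantities differ.
print(p_x(1))             # P(X=1)     = 0.6
print(p_x_given_e(1, 1))  # P(X=1|E=1) = 0.8  -> e raises P(X), so e is evidence for X
```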
Eliezer said: “These are blog posts, I’ve got to write them quickly to pump out one a day.”
I am curious what motivated this goal.
In computer science there is a saying: ‘You don’t understand something until you can program it.’ This may be because programming is unforgiving of the kind of errors Eliezer is talking about. Interestingly, programmers often use the term ‘magic’ (or ‘automagically’) in precisely the same way Eliezer and his colleague did.
Some other vague concepts people disagree on: ‘cause,’ ‘intelligence,’ ‘mental state,’ and so on.
I am a little suspicious of projects to ‘exorcise’ vague concepts from scientific discourse. I think scientists are engaged in a healthy enough enterprise that they will eventually sort the uselessly vague concepts from those that are merely ‘vague because not yet adequately understood and defined.’
I’ll try a silly info-theoretic description of emergence:
Let K(.) be Kolmogorov complexity. Assume you have a system M consisting of, and fully determined by, n small identical parts C. Then M is ‘emergent’ if M can be well approximated by an object M’ such that K(M’) << n*K(C).
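A hedged illustration of that inequality (my own toy, and only a sketch: K(.) is uncomputable, so compressed length stands in as a crude proxy): a system of n identical parts can admit a description far shorter than n copies of the part description.

```python
# Crude stand-in for K(.): the length of a zlib compression.
# Everything here is invented for illustration.

import zlib

def k_proxy(s: bytes) -> int:
    """Rough Kolmogorov-complexity proxy: compressed length in bytes."""
    return len(zlib.compress(s, 9))

part = b"the full microstate description of one component C; "
n = 1000
system = part * n  # M: n identical parts, fully determined by them

print(k_proxy(part))      # ~ K(C)
print(n * k_proxy(part))  # n * K(C): describing the parts one by one
print(k_proxy(system))    # ~ K(M'): vastly smaller -- the short global description
```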
The particulars of the definition aren’t even important. What’s important is that this is (or can be) a mathematical definition rather than a scientific one, something like the definition of a derivative. Mathematical concepts seem to be more about description, representation, and modeling than about prediction and falsifiability. They may not increase our predictive power directly, but they do so indirectly by forming parts of larger scientific predictions: derivatives don’t predict anything themselves, but many physical laws are stated in terms of derivatives.
Robin Hanson said: “Actually, Pearl’s algorithm only works for a tree of cause/effects. For non-trees it is provably hard, and it remains an open question how best to update. I actually need a good non-tree method without predictable errors for combinatorial market scoring rules.”
To be even more precise, Pearl’s belief propagation algorithm works for so-called ‘poly-trees’: directed acyclic graphs without undirected cycles (i.e., cycles that show up if you drop directionality). The state of the art for exact inference in Bayesian networks is the family of junction-tree algorithms (essentially, you run something similar to belief propagation on a graph where cycles have been forced out by merging nodes). For large, intractable networks, people resort to approximating the quantities they are interested in by sampling. Of course, there are many other approaches to this problem: Bayesian network inference is a huge industry.
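For a sense of what these algorithms compute, here is a minimal sketch (not Pearl’s message-passing itself, just brute-force enumeration, with invented conditional probability tables) of exact inference on a three-node chain A -> B -> C, which is trivially a poly-tree:

```python
# Exact posterior P(A | C) on a chain A -> B -> C by summing out the
# hidden variable B. On a poly-tree this agrees with what belief
# propagation would return. All CPT numbers are invented.

p_a = {0: 0.7, 1: 0.3}                      # P(A)
p_b_a = {(0, 0): 0.9, (1, 0): 0.1,          # P(B | A): (b, a) -> prob
         (0, 1): 0.2, (1, 1): 0.8}
p_c_b = {(0, 0): 0.8, (1, 0): 0.2,          # P(C | B): (c, b) -> prob
         (0, 1): 0.3, (1, 1): 0.7}

def posterior_a_given_c(c_obs):
    """P(A | C = c_obs), normalizing the joint summed over B."""
    unnorm = {a: sum(p_a[a] * p_b_a[(b, a)] * p_c_b[(c_obs, b)]
                     for b in (0, 1))
              for a in (0, 1)}
    z = sum(unnorm.values())
    return {a: v / z for a, v in unnorm.items()}

print(posterior_a_given_c(1))  # observing C=1 shifts belief toward A=1
```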
Eliezer said: “I encounter people who are quite willing to entertain the notion of dumber-than-human Artificial Intelligence, or even mildly smarter-than-human Artificial Intelligence. Introduce the notion of strongly superhuman Artificial Intelligence, and they’ll suddenly decide it’s “pseudoscience”.”
It may be that the notion of strongly superhuman AI runs into preconceptions people aren’t willing to give up (possibly of religious origin). But I wonder if the ‘Singularitarians’ aren’t suffering from a bias of their own. Our current understanding of science and intelligence is compatible with many non-Singularity outcomes:
(a) ‘human-level’ intelligence is, for various physical reasons, an approximate upper bound on intelligence;
(b) scaling past ‘human-level’ intelligence is possible but difficult, due to extremely poor returns (e.g., logarithmic rather than exponential growth past a certain point);
(c) scaling past ‘human-level’ intelligence is possible and not especially difficult, but runs into an inherent ‘glass ceiling’ far below the ‘incomprehensibility’ of the resulting intelligence;
and so on
Many of these scenarios seem as interesting to me as a true Singularity outcome, but my perception is that they aren’t being given equal time. The Singularity is certainly more ‘vivid,’ but is it more likely?
The core issue is whether statements of number theory, and more generally all mathematical statements, are independent of physical reality or entailed by our physical laws. (This question isn’t as obvious as it might seem; I remember reading a paper claiming to construct a consistent set of physical laws under which 2 + 2 has no definite answer.) At any rate, if the former is true, then 2 + 2 = 4 is outside the province of empirical science, and applying empirical reasoning to evaluate its ‘truth’ is wrong.