I’ve mostly been here for the sequences and the interesting rationality discussion; I know very little about AI outside of the general problem of FAI, so apologies if this question is extremely broad.
I stumbled upon this Facebook group (Model-Free Methods) https://www.facebook.com/groups/model.free.methods.for.agi/416111845251471/?notif_t=group_comment_reply discussing a recent LW post, and they seem to cast LW’s “reductionist AI” approach in a negative light compared to their “neural network paradigm”.
These people seem confident that deep learning and neural networks are superior to some unspecified LW approach. Can anyone give a high-level overview of what the LW approach to AI is, possibly contrasted with theirs?
There isn’t really a “LW approach to AI,” but there are some factors at work here. If there’s one universal LW buzzword, it’s “Bayesian methods,” though that’s not an AI design so much as a conceptual stance. There’s also LW’s focus on decision theory, which, while still not an AI design, is usually expressed as short, “model-dependent” algorithms. It would also be nice for a self-improving AI to have a human-understandable method of value learning, which pushes attention away from black-box methods.
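To give a concrete sense of what the “Bayesian methods” stance amounts to in practice, here is a minimal sketch of a single Bayesian update. The numbers are made up purely for illustration, and this is a toy calculation, not any particular proposal from LW.

```python
# Minimal Bayesian update: P(H | E) from P(H), P(E | H), and P(E | ~H).
# All numbers below are hypothetical, chosen only to make the arithmetic visible.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H | E) via Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

# Example: 10% prior, evidence with a 90% true-positive and 5% false-positive rate.
posterior = bayes_update(prior=0.10, p_e_given_h=0.90, p_e_given_not_h=0.05)
print(f"P(H | E) = {posterior:.3f}")  # ~0.667
```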
As to whether there’s some tribal conflict to be worried about here, nah, probably not.
I think this sums up the problem. If you want to build a safe AI, you can’t use neural nets, because you have no clue what the system is actually doing.
If we genuinely had no idea of what neural nets were doing, NN research wouldn’t be getting anywhere. But that’s obviously not the case.
More to the point, there’s promising-looking work going on toward a better understanding of what various NNs actually represent. Deep learning networks might actually have relatively human-comprehensible features on some of their levels (see e.g. the first link).
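As a toy illustration of that kind of feature inspection (a rough sketch only, not the cited work, using scikit-learn’s digits dataset rather than a deep network), you can train a small net and plot what its first-layer units respond to:

```python
# Train a tiny MLP and visualize the first-layer weights as 8x8 "receptive fields".
# This is an illustrative sketch; real interpretability work on deep nets goes
# much further than looking at first-layer weights.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=400, random_state=0)
clf.fit(X / 16.0, y)  # digit pixels range 0-16; scale to 0-1

# Each column of the first weight matrix is one hidden unit's weighting over
# the 8x8 input image; plotting it shows the stroke-like patterns it detects.
fig, axes = plt.subplots(2, 8, figsize=(8, 2))
for unit, ax in enumerate(axes.ravel()):
    ax.imshow(clf.coefs_[0][:, unit].reshape(8, 8), cmap="gray")
    ax.set_xticks([])
    ax.set_yticks([])
plt.show()
```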
Furthermore, it’s not clear that any other human-level machine learning model would be any more comprehensible. Worst case, we have something like a billion variables in a million dimensions: good luck trying to understand how that works, regardless of whether it’s a neural network or not.