In a flash of insight, combined with some open-source deep learning sites (like Kaggle), he's able to create the first recursively self-improving AI, and he tests it out by telling it to maximise the number of paperclips his factory makes.
You're kidding, right? Deep neural nets are very good at learning hierarchies of features, but they are still basically doing correlative statistical inference rather than causal inference. They are going to be much too slow, in both actual computation speed and sample complexity, to function dangerously well in realistically complex environments (i.e., not Atari games).
There's an unwritten rule around here that you have to discuss AI in terms of unimplementable abstractions... it's rude to bring in real-world limitations.
Excuse me if I care more about getting a working design that does the right things than I do about treating LW discussions as battles.
I’m… pretty sure that was sarcasm? I hope so, at least.
Yeah, but I still object to even the sarcastic implications. I was posting in full seriousness about the limitations of deep neural nets.
Yes, that was sarcasm.
Or this is meta-sarcasm, and therein lies the problem.