Do these strike you as things which could plausibly be written by someone who actually anticipated the modern revolution?
I do not think I claimed that Eliezer anticipated the modern revolution, and I would not claim that based on those quotes.
The point that I have been attempting to make since here is that ‘neural networks_2007’, and the ‘neural networks_1970s’ Eliezer describes in the post, did not point to the modern revolution; in fact other things were necessary. I see your point that this is maybe a research taste question (even if it doesn’t point to the right idea directly, does it at least point there indirectly?), and on that question I think it is evidence against Eliezer’s research taste (on what will work, not necessarily on what will be alignable).
[I have also long thought that Eliezer’s allergy to the word “emergence” is misplaced (it’s a useful word when thinking about dynamical systems modeling in a reductionistic way, which is a behavior I think he approves of), while agreeing with him that I’m not optimistic about people whose plan for building intelligence doesn’t route thru them understanding what intelligence is and how it works in a pretty deep way.]
Conservation of expected evidence: if you would have updated upwards on his predictive abilities had he posted hashed predictions and then revealed them, then observing not-that makes you update downwards (ETA: on average, with a few finicky details here that I think work out to the same overall conclusion; happy to discuss if you want).
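(Spelling out the identity I take this to be invoking, with my own labels: let H be “he has strong predictive ability” and E be “hashed predictions were posted and later check out”. Then

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \lnot E)\,P(\lnot E),$$

so if $P(H \mid E) > P(H)$ and $0 < P(E) < 1$, it follows that $P(H \mid \lnot E) < P(H)$.)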
I agree with regard to Bayesian superintelligences but not bounded agents, mostly because I think this depends on how you do the accounting. Consider the difference between scheme A, where you transfer prediction points from everyone who didn’t make a correct prediction to the people who did, and scheme B, where you transfer prediction points only from people who made incorrect predictions to people who made correct predictions, leaving untouched the people who didn’t make predictions. On my understanding, things like logical induction and infrabayesianism look more like scheme B.
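To make the accounting difference concrete, here is a toy sketch; all of the names, numbers, and function names are mine and purely illustrative, not a claim about how logical induction or infrabayesianism actually do their bookkeeping:

```python
# Toy sketch: two ways of settling "prediction points" once a question resolves.
# Scheme A: everyone who did not make a correct prediction pays in, including
#           people who made no prediction at all.
# Scheme B: only people who made an incorrect prediction pay in; abstainers are
#           left untouched.

def settle(predictions, truth, include_abstainers, stake=1.0):
    """predictions maps a name to a predicted value, or None for no prediction."""
    winners = [n for n, p in predictions.items() if p == truth]
    losers = [n for n, p in predictions.items()
              if p != truth and (include_abstainers or p is not None)]
    scores = {n: 0.0 for n in predictions}
    if winners and losers:
        for n in losers:
            scores[n] -= stake
        for n in winners:
            scores[n] += stake * len(losers) / len(winners)
    return scores

people = {"alice": "worked", "bob": "didn't work", "carol": None}  # carol abstained
print(settle(people, "worked", include_abstainers=True))   # scheme A: carol pays despite abstaining
print(settle(people, "worked", include_abstainers=False))  # scheme B: carol is untouched
```

Under scheme A, not betting is itself penalized whenever someone else bets correctly; under scheme B, silence costs nothing, which is the property I have in mind above.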
I do not think I claimed that Eliezer anticipated the modern revolution, and I would not claim that based on those quotes.
The point that I have been attempting to make since here is that ‘neural networks_2007’, and the ‘neural networks_1970s’ Eliezer describes in the post, did not point to the modern revolution; in fact other things were necessary.
I apologize if I have misunderstood your intended point. Thanks for the clarification. I agree with this claim (insofar as I understand what the 2007 landscape looked like, which may be “not much”). I don’t think the claim is that interesting, though this might come down to semantics.
The following is what I perceived us to disagree on, so I’d consider us to be in agreement on the point I originally wanted to discuss:
I see your point that this is maybe a research taste question (even if it doesn’t point to the right idea directly, does it at least point there indirectly?), and on that question I think it is evidence against Eliezer’s research taste (on what will work, not necessarily on what will be alignable).
I’m not optimistic about people whose plan for building intelligence doesn’t route thru them understanding what intelligence is and how it works in a pretty deep way
Yeah. I think that in a grown-up world, we would do this, and really take our time.
On my understanding, things like logical induction and infrabayesianism look more like scheme B.
Nice, I like this connection. Will think more about this; I don’t want to hastily unpack my thoughts into a response that isn’t true to my intuitions here.