Thanks for pushing back on my interpretation.

I feel like you’re using “strongest” and “weakest” to mean “more concrete” and “more abstract”, with maybe the value judgement (implicit in your focus on specific testable claims) that concreteness is better. My interpretation doesn’t disagree with your point about Bio Anchors; it simply says that this is a concrete instantiation of a general pattern, and that the whole point of the original post as I understand it is to share this pattern. Hence the title, which talks about all biology-inspired timelines, the three examples in the post, and the seven times that Yudkowsky repeats his abstract arguments in different ways.
> It’s hardly surprising there are ‘two paths through a space’ - if you reran either (biological or cultural/technological) evolution with slightly different initial conditions you’d get a different path. However, technological evolution is aware of biological evolution and thus strongly correlated to and influenced by it, i.e. deep learning is in part brain reverse engineering (explicitly in the case of DeepMind, but there are many other examples). The burden of proof is thus arguably closer to the opposite of what you claim (EY claims).
Maybe a better way of framing my point here is that the optimization processes are fundamentally different (something about which Yudkowsky has written a lot; see for example this post from 13 years ago), and that the burden of proof is on showing that they have enough similarity to transfer a lot of info from the evolutionary optimization to the human research optimization.
I also don’t think your point about DeepMind works, because DM is working in a way extremely different from evolution. They are in part reverse engineering the brain, but that’s a very different (and very human and insight-heavy) path towards AGI than the one evolution took.
Lastly for this point, I don’t think the interpretation that “Yudkowsky says that the burden of proof is on showing that the optimization of evolution and human research are uncorrelated” survives contact with a text where Yudkowsky constantly berates his interlocutors for assuming such correlation, and keeps drawing out the differences again and again.
> To the extent EY makes specific testable claims about the inefficiency of biology, those claims are in error—or at least easily contestable.
Hmm, I find myself agreeing with this comment: Yudkowsky’s main point about biology IMO is that brains are not at all the most efficient computational way of implementing AGI. Another way of phrasing it is that Yudkowsky says (according to me) that you could use significantly less hardware and ops/sec to make an AGI.
To be clear, my disagreement concerns more your implicit prioritization—rather than interpretation—of EY’s points.
> I also don’t think your point about DeepMind works, because DM is working in a way extremely different from evolution. They are in part reverse engineering the brain, but that’s a very different (and very human and insight-heavy) path towards AGI than the one evolution took.
If search process Y fully reverse engineers the result of search process X, then Y ends up at the same endpoint as X, regardless of the path Y took. Obviously the path is different but also correlated; and reverse engineering the brain makes brain efficiency considerations (and thus some form of bio anchor) relevant.
> Yudkowsky’s main point about biology IMO is that brains are not at all the most efficient computational way of implementing AGI. Another way of phrasing it is that Yudkowsky says (according to me) that you could use significantly less hardware and ops/sec to make an AGI.
Sure, but that’s also the worst part of his argument, because to support it he makes a very specific testable claim concerning thermodynamic efficiency; a claim that is almost certainly off base.
> To the extent EY makes specific testable claims about the inefficiency of biology, those claims are in error—or at least easily contestable.
> Hmm, I find myself agreeing with this comment: Yudkowsky’s main point about biology IMO is that brains are not at all the most efficient computational way of implementing AGI. Another way of phrasing it is that Yudkowsky says (according to me) that you could use significantly less hardware and ops/sec to make an AGI.
It’s unfortunate that you agree; here’s the full comment:
> You’re missing the point!
> Your arguments apply mostly toward arguing that brains are optimized for energy efficiency, but the important quantity in question is computational efficiency! You even admit that neurons are “optimizing hard for energy efficiency at the expense of speed”, but don’t seem to have noticed that this fact makes almost everything else you said completely irrelevant!
Adele claims that I’m ‘missing the point’ by focusing on energy efficiency, but the specific EY claim I disagreed with is very specifically about energy efficiency! Which is highly relevant, because he then uses this claim as evidence to suggest general inefficiency.
EY specifically said the following, repeating the claim twice in slightly different form:
> The result is that the brain’s computation is something like half a million times less efficient than the thermodynamic limit for its temperature—so around two millionths as efficient as ATP synthase.

> The software for a human brain is not going to be 100% efficient compared to the theoretical maximum, nor 10% efficient, nor 1% efficient, even before taking into account
> . . .
> That is simply not a kind of thing that I expect Reality to say “Gotcha” to me about, any more than I expect to be told that the human brain, whose neurons and synapses are 500,000 times further away from the thermodynamic efficiency wall than ATP synthase, is the most efficient possible consumer of computation
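A quick sketch of the arithmetic behind the quoted figures, assuming that “the thermodynamic limit for its temperature” refers to the Landauer bound of $k_B T \ln 2$ per bit erased, and that ATP synthase is treated as operating essentially at that limit (as the quote implies). On that reading, “half a million times less efficient than the thermodynamic limit” and “around two millionths as efficient as ATP synthase” are the same number:

$$
\frac{1}{5 \times 10^{5}} = 2 \times 10^{-6},
\qquad
k_B T \ln 2 \;\approx\; 1.38 \times 10^{-23}\,\tfrac{\mathrm{J}}{\mathrm{K}} \times 310\,\mathrm{K} \times 0.693 \;\approx\; 3 \times 10^{-21}\,\mathrm{J}\ \text{per bit at body temperature}.
$$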
Adele’s comment completely ignores the very specific point I was commenting on, and strawmans my position while steelmanning EY’s.