This is the end of the history analysis arc, and I feel like I didn’t get a lot out of it; I have only a hazy idea of why its inclusion, at this length, was warranted.
Yeah, I noticed being confused by this the second time around as well. I’ve got a few guesses for what’s going on.
John is a guy with a theory (about relevance realization), the theory explains some stuff, but the way to sell it is to tie it to something bigger. [“All of history is culminating in this moment!”]
John is a guy who constantly comes across lots of objections, and the general answer to those objections is a detailed dive through all of history. [“Eliezer, did you really have to write so many words about how to think in order to talk about AI alignment?” “Yes.”]
John is trying to convince people who are coming at this from the history side to take the science side seriously, and giving his spin on how all of the relevant history bears on it is table stakes. [This is like the previous one, but with who asked for that focus flipped.]
Actually the series is mostly about “where we are, and how we got here,” and so it’s more like the history is the content and the cognitive science is the secondary content. So it’s not “why is half of this history?” and more “why did he tack on another 25 lectures afterwards?”
But I am noticing that quite probably I should just recommend the latter bits to people interested in relevance realization, and not the history?
I would be more interested in the selection criteria for why to keep those bits, or what kind of interesting soup one can make with the ingredients, rather than a list of reviews of why previous soups tasted bland.
This feels to me a bit like the normal style of philosophy (or history of science or so on); you maybe talk a little about what it is that you’re hoping for with a theory of astronomy or theories in general, but you spend most of your time talking about “ok, these are the observations that theory A got wrong, and this is how theory B accounted for them”, and if you’re a working astronomer today, you spend most of your time thinking about “ok, what is up with these observations that my theory doesn’t account for?”
I do think this comes up sometimes; like when he talks about homuncular explanations and why those are unsatisfying, that feels to me like it’s transferring the general technique that helps people do good cognitive science instead of just being a poor review of a single soup.
Howdy. I think his concern with the history is that he wants to reduce equivocation in debate surrounding consciousness (he is clear about this in his ‘Untangling the Worldknot of Consciousness’ miniseries with Gregg Henriques, though he does point to this in early AftMC episodes) by showing that so much of what we take to be natural to our cognition is largely the result of invented psychotechnology and (at least seemingly) insightful changes to our cultural cognitive grammar. It is incredibly standard for us to immediately obviate solved problems, and when something is obvious to us, we often have incredible difficulty seeing how it could have ever been otherwise.
Actually the series is mostly about “where we are, and how we got here,” and so it’s more like the history is the content and the cognitive science is the secondary content. So it’s not “why is half of this history?” and more “why did he tack on another 25 lectures afterwards?”
I agree. I also think that part is the better part of the series, and I can see myself recommending that people watch just the first part, but not just the second. Though the second part explores some important concepts (like relevance realization), I think there’s a lot of room for improvement in the delivery, whereas the first part is quite well done.
I think the two things that most bothered me in the second part were his overuse of complicated language and his overuse of caveats (I get why he makes them, but they break the flow and make it much harder to follow, especially combined with all the complicated language).