I guess you could try it and see if you reach wrong conclusions, but that only works if your mind isn’t so wired up with shortcuts that you cannot (or are much less likely to) discover your mistakes.
I’ve been puzzling over why EY’s efforts to show the dangers of AGI (most notably this) have been unconvincing enough that other experts (e.g. Paul Christiano) and, in my experience, typical rationalists have not adopted p(doom) > 90% like EY, or even > 50%. I was unconvinced because he simply didn’t present a chain of reasoning that shows what he’s trying to show. Rational thinking is a lot like math: a single mistake in a chain of reasoning can invalidate the whole conclusion. Failure to generate a complete chain of reasoning is a sign that the thinking isn’t rational, and failure to communicate a complete chain of reasoning, as in this case, should fail to convince people (unless the audience can mentally reconstruct the missing information).
I read all six “tomes” of Rationality: A-Z and I don’t recall EY ever writing about the importance of having a solid and complete chain (or graph) of reasoning―but here is a post about the value of shortcuts (if you can pardon the strawman; I’m using the word “shortcut” as a shortcut). There’s no denying that shortcuts can have value, but only if they lead to winning, which for most of us, including EY, includes having true beliefs, which in turn requires an ability to generate solid and complete chains of reasoning. If you used shortcuts to generate such a chain, that’s great insofar as it produces correct results, but mightn’t shortcuts make your reasoning less reliable than it first appears? When it comes to AI safety, EY’s most important cause, I’ve seen a shortcut-laden approach (in his communication, if not his reasoning) and wasn’t convinced, so I’d like to see him take it slower and give us a more rigorous and clear case for AI doom ― one that either clearly justifies a very high near-term catastrophic risk assessment, or admits that it doesn’t.
I think EY must have a mental system that is far above average, but from afar it doesn’t seem good enough.
On the other hand, I’ve learned a lot about rationality from EY that I didn’t already know, and perhaps many of the ideas he came up with are a product of this exact process of identifying necessary cognitive work and casting off the rest. Notable if true! But in my field I, too, have had various unique ideas that no one else ever presented, and I came at them from a different angle: I’m always looking for the (subjectively) “best” solutions to problems. Early in my career, getting the work done was never enough; I wanted my code to be elegant and beautiful and fast and generalized too. It seems I’d never accept the first version: I’d always find flaws and change it immediately afterward, maybe more than once. My approach (which I guess earns the boring label ‘perfectionism’) wasn’t fast, but I think it built up a lot of good intuitions that many other developers just don’t have. Likewise in life in general, I developed nuanced thinking and rationalist-like intuitions without ever hearing about rationalism. So I am fairly satisfied with plain old perfectionism―reaching conclusions faster would’ve been great, but I’m uncertain whether I could’ve or would’ve found a process for doing that such that my conclusions would’ve been as correct. (I also recommend always thinking a lot, but maybe that goes without saying around here.)
I’m reminded of a great video about two ways of thinking about math problems: a slick way that finds a generalized solution, and a more meandering, exploratory way that looks at many specific cases and examples. The slick solutions tend to get far more attention, but the slower processes are far more common when no one is looking, and famous early mathematicians didn’t shy away from long and even tedious work. I feel like EY is saying “make it slick and fast!” and, to be fair, I probably should’ve worked harder at developing Slick Thinking, but my slow, non-slick methods also worked pretty well.