I didn’t interpret your comment as expressing an expectation that there would be more discussion about analogical reasoning or creativity as a topic within AI; keep in mind, after all, that LW is not a blog about AI—its topic is human rationality. (There is, naturally, a fair amount of incidental discussion of AI, because Eliezer happens to be an AI researcher and that’s his “angle”.) In this context, I therefore interpreted your remark as “given Eliezer’s interest in AI, a subject which requires an understanding of the phenomena of analogies and creativity, I’m surprised there isn’t more discussion of these phenomena.”
I’ll use this opportunity to state my feeling that, as interesting as AI is, human rationality is a distinct topic, and it’s important to keep LW from becoming “about” AI (or any other particular interest that happens to be shared by a significant number of participants). Rationality is for everyone, whether you’re part of the “AI crowd” or not.
(I realize that someone is probably going to post a reply to the effect that, given the stakes of the Singularity, rational thought clearly compels us to drop everything and basically think about nothing except AI. But...come on, folks—not even Eliezer thinks about nothing else.)
Sorry I wasn’t clearer the first time around. Yes, rationality is a distinct topic; but it has some overlap with AI, inasmuch as learning how to think better is served by understanding more about how we can think at all. The discussions around decision theory clearly belong to that overlapping area; Eliezer makes no bones about needing a decision theory for FAI research. Analogy in the Hofstadterian sense seems underappreciated here by comparison. To my way of thinking it belongs in the overlap too, as Kaj’s post seems to me to strongly hint.
Eliezer doesn’t want to publish any useful information on producing AI, because he knows that that will raise the probability (extremely marginally) of some jackass causing an unFriendly foom.
It seems like it would also raise the probability (extremely marginally) of Eliezer missing something crucial causing an unFriendly foom.