Creativity.
Analogy-making.
The second link isn’t really about analogy-making as a topic within AI; it’s more about “analogy as flawed human thinking”. (And Kaj’s post reminds us that, precisely because of the role analogy plays in cognition, it may not fully deserve the bad rap Eliezer has given it.)
The first is partly about AI creativity (and also quite a bit about the flawed human thinking of AI researchers). It is the only one tagged “creativity”, and by the time I reached the end of the Sequences I was left with the impression that the promise in its final sentence had gone unfulfilled. I could rattle off a list of things I’ve learned from the Sequences, at various levels of understanding; they’d cover a variety of topics, but creativity would rank quite low.
I mean, CopyCat comes up only once in search results. If the topic of analogy within AI were discussed much here, I’d expect it to be referenced more often.
I didn’t interpret your comment as expressing an expectation that there would be more discussion about analogical reasoning or creativity as a topic within AI; keep in mind, after all, that LW is not a blog about AI—its topic is human rationality. (There is, naturally, a fair amount of incidental discussion of AI, because Eliezer happens to be an AI researcher and that’s his “angle”.) In this context, I therefore interpreted your remark as “given Eliezer’s interest in AI, a subject which requires an understanding of the phenomena of analogies and creativity, I’m surprised there isn’t more discussion of these phenomena.”
I’ll use this opportunity to state my feeling that, as interesting as AI is, human rationality is a distinct topic, and it’s important to keep LW from becoming “about” AI (or any other particular interest that happens to be shared by a significant number of participants). Rationality is for everyone, whether you’re part of the “AI crowd” or not.
(I realize that someone is probably going to post a reply to the effect that, given the stakes of the Singularity, rational thought clearly compels us to drop everything and basically think about nothing except AI. But...come on, folks—not even Eliezer thinks about nothing else.)
Sorry I wasn’t clearer the first time around. Yes, rationality is a distinct topic; but it overlaps with AI, inasmuch as learning how to think better is served by understanding more about how we can think at all. The discussions around decision theory clearly belong to that overlap; Eliezer makes no bones about needing a decision theory for FAI research. Analogy in the Hofstadterian sense seems underappreciated here by comparison. To my way of thinking it belongs in the overlap too, as Kaj’s post strongly hints.
Eliezer doesn’t want to publish any useful information on producing AI, because he knows that doing so would raise the probability (extremely marginally) of some jackass causing an unFriendly foom.
It seems like withholding that information would also raise the probability (extremely marginally) of Eliezer missing something crucial and causing an unFriendly foom.
Remember, there is a long tradition here, especially for EY, of usually not referring to scholarly research.