perhaps it would be a good idea to toss the word out the window for its active connotations.
Why? It’s still just as much of a flaw if it’s a passive phenomenon.
To make an analogy with some literal overlap, some people are creationists because they don’t know any science, and others are creationists despite knowing science. Should we avoid using the term “creationist” for the first group? I think not.
Compartmentalization is still compartmentalization, whether it’s the result of specifically motivated cognition, or just an intellectual deficiency such as a failure to abstract.
(In fact, I’d venture that motivated thought sometimes keeps people from improving their intellectual skills, just as religiously-motivated creationists may deliberately avoid learning science.)
This is perhaps a clue to one thing that has been puzzling me, given Eliezer’s interest in AI: namely, the predominance of topics such as decision theory on this blog, and the near-total absence of discussion around topics such as creativity or analogy-making.
Honestly, I think this is mainly just a result of the personalities of the folks who happen to be posting. Creativity and analogy-making were often discussed in Eliezer’s OB sequences; posts by Yvain and Alicorn also seem to have this flavor.
If you can think of any examples offhand, I would appreciate it if you’d point me to them. I’ll have another look-see later to check on my (possibly mistaken) impression. Just not today; I’m ODing on LW as it is. Is it just me, or has the pace of top-level posting been particularly hectic lately?
It is not just you.

I considered delaying this post for a few days until the general pace of posting had died down a bit, but then I’m bad at delaying the posting of anything I’ve written.

Creativity.

Analogy-making.
The second link isn’t really about analogy-making as a topic within AI; it’s more about “analogy as flawed human thinking”. (And Kaj’s post reminds us precisely that, given the role analogy plays in cognition, it may not fully deserve the bad rap Eliezer has given it.)
The first is partly about AI creativity (and also quite a bit about the flawed human thinking of AI researchers). It is the only post tagged “creativity”, and my reading of the Sequences left me with the impression that the promise in its final sentence was still unfulfilled when I came to the end. I could rattle off a list of things I’ve learned from the Sequences, at various levels of understanding; they’d cover a variety of topics, but creativity would rank quite low.
I mean, CopyCat comes up only once in search results. If the topic of analogy within AI had been discussed much here, I’d expect it to be referenced more often.
I didn’t interpret your comment as expressing an expectation that there would be more discussion of analogical reasoning or creativity as topics within AI; keep in mind, after all, that LW is not a blog about AI; its topic is human rationality. (There is, naturally, a fair amount of incidental discussion of AI, because Eliezer happens to be an AI researcher and that’s his “angle”.) In this context, I therefore interpreted your remark as “given Eliezer’s interest in AI, a subject which requires an understanding of the phenomena of analogy and creativity, I’m surprised there isn’t more discussion of these phenomena.”
I’ll use this opportunity to state my feeling that, as interesting as AI is, human rationality is a distinct topic, and it’s important to keep LW from becoming “about” AI (or any other particular interest that happens to be shared by a significant number of participants). Rationality is for everyone, whether you’re part of the “AI crowd” or not.
(I realize that someone is probably going to post a reply to the effect that, given the stakes of the Singularity, rational thought clearly compels us to drop everything and basically think about nothing except AI. But...come on, folks—not even Eliezer thinks about nothing else.)
Sorry I wasn’t clearer the first time around. Yes, rationality is a distinct topic; but it has some overlap with AI, inasmuch as learning how to think better is served by understanding more about how we can think at all. The discussions around decision theory clearly belong to that overlapping area; Eliezer makes no bones about needing a decision theory for FAI research. Analogy in the Hofstadterian sense seems underappreciated here by comparison. To my way of thinking it belongs in the overlap too, as Kaj’s post strongly hints.
Eliezer doesn’t want to publish any useful information on producing AI, because he knows that that will raise the probability (extremely marginally) of some jackass causing an unFriendly foom.
It seems like not publishing would also raise the probability (extremely marginally) of Eliezer missing something crucial and causing an unFriendly foom.
Remember, there is a long tradition here, especially for EY, of usually not referring to any scholarly research.