Do you think Nick Bostrom’s journal-published work on very similar subjects needs to be rewritten in different language to be understood? I don’t, anyway. I personally find the style of mainstream philosophy and science much easier to understand than, say, your CEV paper. But that might be because mainstream philosophy and science is what I spend most of my time reading.
Frankly, I think your arguments can be made clearer and more persuasive to a greater number of intelligent people if phrased in the common language.
Just because most philosophy is bad doesn’t mean that when you write mainstream philosophy, you have to write badly.
lukeprog:
Seconded. I haven’t read that many academic philosophy papers, but what I have seen has almost always been remarkably clear and understandable. I’m baffled that Eliezer would make such an extreme statement and actually mean it seriously (and get upvoted for it?!), considering how often he’s cited academic philosophers such as Chalmers, Bostrom, Dennett, and Parfit.
(Here of course I have in mind the Anglospheric analytic philosophy; continental philosophy is a horrible mess in comparison.)
Yeah, don’t get me started on continental philosophy.
BTW, one of my favorite takedowns of postmodernism is this one.
lukeprog:
Thanks for the link. I skimmed the article and it seems well written and quite informative; I’ll read it in full later.
In my opinion, there are some good insights in postmodernism, but as someone (Eysenck?) said about Freud, what’s true in it isn’t new, and what’s new isn’t true. In a sense, postmodernism itself provides perhaps the most fruitful target for a postmodernist analysis (of sorts). What these people say is of little real interest when taken at face value, but some fascinating insight can be obtained by analyzing the social role of them and their intellectual output, their interactions and conflicts with other sorts of intellectuals, and the implicit (conscious or not) meanings of their claims.
The logical redundancy in this phrase has long bothered me.
If I remember correctly, you’re Russian? Those Slavic double negatives must be giving you constant distress, if you’re so bothered by (seeming) deficiencies of logic in natural language.
It’s not redundant; it’s a more witty and elegant way of saying that there are some new things, some true things, but none that are both.
It technically is redundant, though, because it has the form (A=>~B)&(B=>~A), while A=>~B and B=>~A are equivalent to each other. It doesn’t need to be symmetrized because the statement was symmetric in the first place, even if it wasn’t stated in an obviously symmetric form such as ~(A&B). (Going to have to say I like the redundant version for emphasis, though.)
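To spell out the equivalence, using the same propositions as in the comment above (A = "it is true", B = "it is new"): each half of the saying is the contrapositive of the other, so the conjunction collapses to a single symmetric proposition. A minimal derivation:

```latex
% A = "the claim is true", B = "the claim is new" (same A, B as above).
% Each conjunct is the contrapositive of the other, so asserting both
% adds nothing beyond the single symmetric statement ~(A & B):
\[
  (A \Rightarrow \neg B)
  \;\equiv\; (\neg\neg B \Rightarrow \neg A)
  \;\equiv\; (B \Rightarrow \neg A)
  \;\equiv\; \neg(A \wedge B).
\]
```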
If we’re talking about CEV, I agree. It needs rewriting. So does the Intuitive Explanation of Bayesian Reasoning, or any number of other documents produced by earlier Eliezers.
It was the linked Sobel paper which called forth that particular comment by me, if you’re wondering. I looked at it in hopes of finding useful details about how to construe an extrapolated volition, and got a couple of pages in before I decided that I wasn’t willing to read this paper unless someone had produced a human-readable version of it. (Scanning the rest did not change my mind.)
I’m not sure I want to import FAI ethics into philosophical academia as a field where people can garner prestige by writing papers that are hard to read. Maybe it makes sense to put up a fence around it and declare that if you can’t write plain-English papers you shouldn’t be writing about FAI.
Yes, the Sobel paper is definitely not an example of how I would write a philosophy paper, and your not reading it was a wise choice. Unfortunately, it is one of the major pieces in the literature on the subject of informed preference. But have you really never read any journal-published philosophy you thought was clear, such that you think one cannot write clearly if writing philosophy for journals? That would be shocking if true.
You will not stop people from garnering prestige by writing papers that are hard to read. You will also not stop people from writing hard-to-read papers on Friendly AI. That subject is already becoming a major field, whether you call it “machine ethics” or “machine morality” or “artificial morality” or “computational ethics” or “friendly AI.” (As it turns out, “machine ethics” looks like it will win.)
But one can write clear and easy-to-read papers on Friendly AI. Who knows? Maybe it will even make your work stand out among all the other people writing on the subject, for example those proposing Kantian machines. (The horror!)
Bostrom writes clearly.
But I will suggest for the record that we can probably get away with just ignoring anything that was written for other philosophers rather than for the general public or competent AGI researchers, since those are the only two constituencies we care about. If anyone in philosophy has something to contribute to the real FAI discussion, let them rewrite it in English. I should also note that anything which does not assume analytic naturalism as a matter of course is going to be rejected out of hand because it cannot be conjugate to a computer program composed of ones and zeroes.
Philosophers are not the actual audience. The general public and competent AGI researchers are the actual audience. Now there’s some case to be made for trying to communicate with the real audience via a complicated indirect method that involves rewriting things in philosophical jargon to get published in philosophy journals, but we shouldn’t overlook that this is not, in fact, the end goal.
Relevance. It’s what’s for dinner.
The AGI researchers you’re talking about are the people who read IEEE Intelligent Systems and Minds and Machines. That’s where this kind of work is being published, except for that tiny portion of stuff produced by SIAI and by Ben Goertzel, who publishes in his own online “journal”, Dynamical Psychology.
So if you want to communicate with AGI researchers and others working on Friendly AI, then you should write in the language of IEEE Intelligent Systems and Minds and Machines, which is the language I described above.
The papers in Journal of Artificial General Intelligence follow the recommendations given above, too—though as a brand new online journal with little current prestige, it’s far less picky about those things than more established journals.
Moreover, if you want to communicate with others about new developments in deontic logic or decision theory for use in FAI, then those audiences are all over the philosophical terrain, in mainstream philosophy journals not focused on AI. (Deontic logic and decision theory discussions are particularly prevalent in journals focused on formal philosophy.)
Also, it’s not just a matter of rewriting things in philosophical jargon for the sake of talking to others. Often, the philosophical jargon has settled on a certain vocabulary because it has certain advantages.
Above, I gave the example of making a distinction between “extrapolating” from means to ends, and “extrapolating” current ends to new ends given a process of reflective equilibrium and other mental changes. That’s a useful distinction that philosophers make because there are many properties of the first thing not shared by the second, and vice versa. Conflating the two doesn’t carve reality at its joints terribly well.
And of course I agree that anything not assuming reductionism must be dismissed.
But then, it seems you are interested in publishing for mainstream academia anyway, right? I know SIAI is pushing pretty hard on that Singularity Hypothesis volume from Springer, for example. And of course publishing in mainstream academia will bring in funds and credibility and so on, as I stated. It’s just that, as you said, you don’t have many people who can do that kind of thing, and those people are tied up with other things. Yes?
I just realized that maybe I’m confusing things by talking about philosophy journals, when really I mean to include cognitive science journals in general.
But what I said in my original post applies to cognitive science journals as well. It’s just that when you’re talking about philosophy (e.g. idealized preference theories of value), you place what you’re saying in the context of the relevant philosophy; when you’re talking about neuroscience (e.g. the complexity of human values), you place it in the context of the relevant neuroscience; and when you’re talking about AI (e.g. approaches to AGI), you place it in the context of the relevant AI research. You can do all three in the same paper.
The kind of philosophy I spend most of my time reading these days is just like that, actually. Epistemology and the Psychology of Human Judgment spends just as much time discussing work done by psychologists like Dawes and Kahneman as it does discussing epistemologists like Goldman and Stich. Philosophy and Neuroscience: A Ruthlessly Reductive Account spends much more time discussing neuroscience than it does philosophy. Three Faces of Desire is split about 60/40 between philosophy and neuroscience. Many of the papers on machine ethics aka Friendly AI are split about 50/50 between philosophy and AI programming. Cognitive science is like this, after all.
In fact, I’ve been going through the Pennachin & Goertzel volume, reading it as a philosophy of mind book when most people, I guess, are probably considering it a computer science book. Whatever. Cognitive science is probably what I should have said. This is all cognitive science, whether it’s slightly heavier on philosophy or computer science or neuroscience or experimental psychology or whatever. The problem is that philosophy almost just is cognitive science, to me. Cognitive science + logics/maths.
Anyway, sorry if the ‘philosophy’ word caused any confusion.
You probably should have just titled it “How SIAI could publish in mainstream academic journals”.
Maybe. But while I’m pretty familiar with philosophy journals and cognitive science journals, I’m not familiar with some other types of journals, and so I’m not sure whether my advice applies to, for example, math journals.
It definitely does.
Could you write up the relevant distinction, as applied to CEV, perhaps as a discussion post? I don’t know the terminology, but I expect that, given CEV’s ambition to get a long way toward the normative stuff, the distinction becomes far less relevant than when you discuss human decision-making.
(Prompted by the reference you made in this comment.)
Did you read the original discussion post to which the linked comment is attached? I go into more detail there.
Yes, I read it, and it’s still not clear. Recent discussion made a connection with terminal/instrumental values, but it’s not clear in what context they play a role.
I expect I could research this discussion in more detail and figure out what you meant, but that could be avoided, and the issue opened to a bigger audience, if you wrote, say, a two-paragraph self-contained summary. I wouldn’t mention this issue if you didn’t attach some significance to it by giving it as an example in a recent comment.
I’m not sure what to say beyond what I said in the post. Which part is unclear?
In any case, it’s kind of a moot point, because Eliezer said that it is a useful distinction to make; he just chose not to include it in his CEV paper because that paper doesn’t go deep enough into the detailed problems of implementing CEV, where the distinction I made becomes particularly useful.