Yes, the Sobel paper is definitely not an example of how I would write a philosophy paper, and your not reading it was a wise choice. Unfortunately, it is one of the major pieces in the literature on the subject of informed preference. But have you really never read any journal-published philosophy you thought was clear, such that you think one cannot write clearly if writing philosophy for journals? That would be shocking if true.
You will not stop people from garnering prestige by writing papers that are hard to read. You will also not stop people from writing hard-to-read papers on Friendly AI. That subject is already becoming a major field, whether you call it “machine ethics” or “machine morality” or “artificial morality” or “computational ethics” or “friendly AI.” (As it turns out, “machine ethics” looks like it will win.)
But one can write clear and easy-to-read papers on Friendly AI. Who knows? Maybe it will even make your work stand out among all the other people writing on the subject, for example those proposing Kantian machines. (The horror!)
Bostrom writes clearly.
But I will suggest for the record that we can probably get away with just ignoring anything that was written for other philosophers rather than for the general public or competent AGI researchers, since those are the only two constituencies we care about. If anyone in philosophy has something to contribute to the real FAI discussion, let them rewrite it in English. I should also note that anything which does not assume analytic naturalism as a matter of course is going to be rejected out of hand because it cannot be conjugate to a computer program composed of ones and zeroes.
Philosophers are not the actual audience. The general public and competent AGI researchers are the actual audience. Now there’s some case to be made for trying to communicate with the real audience via a complicated indirect method that involves rewriting things in philosophical jargon to get published in philosophy journals, but we shouldn’t overlook that this is not, in fact, the end goal.
Relevance. It’s what’s for dinner.
The AGI researchers you’re talking about are the people who read IEEE Intelligent Systems and Minds and Machines. That’s where this kind of work is being published, except for that tiny portion of stuff produced by SIAI and by Ben Goertzel, who publishes in his own online “journal”, Dynamical Psychology.
So if you want to communicate with AGI researchers and others working on Friendly AI, then you should write in the language of IEEE Intelligent Systems and Minds and Machines, which is the language I described above.
The papers in Journal of Artificial General Intelligence follow the recommendations given above, too—though as a brand-new online journal with little current prestige, it's far less picky about those things than more established journals.
Moreover, if you want to communicate with others about new developments in deontic logic or decision theory for use in FAI, then those audiences are all over the philosophical terrain, in mainstream philosophy journals not focused on AI. (Deontic logic and decision theory discussions are particularly prevalent in journals focused on formal philosophy.)
Also, it’s not just a matter of rewriting things in philosophical jargon for the sake of talking to others. Often, the philosophical jargon has settled on a certain vocabulary because it has certain advantages.
Above, I gave the example of making a distinction between “extrapolating” from means to ends, and “extrapolating” current ends to new ends given a process of reflective equilibrium and other mental changes. That’s a useful distinction that philosophers make because there are many properties of the first thing not shared by the second, and vice versa. Conflating the two doesn’t carve reality at its joints terribly well.
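To make the distinction concrete, here is a minimal toy sketch in Python. It is my own illustration, not anything from this thread or from the CEV material, and every name in it (WorldModel, reflect, and so on) is hypothetical. The point is just that the two senses of "extrapolate" are operations with different type signatures: one holds the ends fixed and searches over means, while the other transforms the ends themselves.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

Outcome = str                          # a possible end state of the world
Action = str                           # a means available to the agent
Utility = Callable[[Outcome], float]   # the agent's current ends

@dataclass
class WorldModel:
    # Hypothetical: the outcome the agent believes each action produces.
    effects: Dict[Action, Outcome]

def extrapolate_means_to_ends(ends: Utility, model: WorldModel,
                              actions: List[Action]) -> Action:
    # Sense 1: hold the ends fixed and derive the means that best serve them.
    # Type: (Utility, WorldModel, [Action]) -> Action.
    return max(actions, key=lambda a: ends(model.effects[a]))

def extrapolate_ends_to_ends(ends: Utility,
                             reflect: Callable[[Utility], Utility],
                             steps: int = 10) -> Utility:
    # Sense 2: revise the ends themselves by iterating an idealization
    # process (reflective equilibrium, more information, etc.).
    # Type: Utility -> Utility.
    for _ in range(steps):
        ends = reflect(ends)
    return ends
```

Since the two operations don't even share a type signature, lumping them under one word hides exactly the properties that distinguish them, which is the sense in which the conflation fails to carve reality at its joints.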
And of course I agree that anything not assuming reductionism must be dismissed.
But then, it seems you are interested in publishing for mainstream academia anyway, right? I know SIAI is pushing pretty hard on that Singularity Hypothesis volume from Springer, for example. And of course publishing in mainstream academia will bring in funds and credibility and so on, as I stated. It’s just that, as you said, you don’t have many people who can do that kind of thing, and those people are tied up with other things. Yes?
I just realized that maybe I’m confusing things by talking about philosophy journals, when really I mean to include cognitive science journals in general.
But what I said in my original post applies to cognitive science journals as well. When you're talking about philosophy (e.g. idealized preference theories of value), you place what you're saying in the context of the relevant philosophy; when you're talking about neuroscience (e.g. the complexity of human values), you place it in the context of the relevant neuroscience; and when you're talking about AI (e.g. approaches to AGI), you place it in the context of the relevant AI research. You can do all three in the same paper.
The kind of philosophy I spend most of my time reading these days is just like that, actually. Epistemology and the Psychology of Human Judgment spends just as much time discussing work done by psychologists like Dawes and Kahneman as it does discussing epistemologists like Goldman and Stich. Philosophy and Neuroscience: A Ruthlessly Reductive Account spends much more time discussing neuroscience than philosophy. Three Faces of Desire is split about 60/40 between philosophy and neuroscience. Many of the papers on machine ethics (aka Friendly AI) are split about 50/50 between philosophy and AI programming. Cognitive science is like this, after all.
In fact, I’ve been going through the Pennachin & Goertzel volume, reading it as a philosophy of mind book when most people, I guess, are probably considering it a computer science book. Whatever. Cognitive science is probably what I should have said. This is all cognitive science, whether it’s slightly more heavy on philosophy or computer science or neuroscience or experimental psychology or whatever. The problem is that philosophy almost just is cognitive science, to me. Cognitive science + logics/maths.
Anyway, sorry if the ‘philosophy’ word caused any confusion.
You probably should have just titled it “How SIAI could publish in mainstream academic journals”.
Maybe. But while I’m pretty familiar with philosophy journals and cognitive science journals, I’m not familiar with some other types of journals, and so I’m not sure whether my advice applies to, for example, math journals.
It definitely does.
Could you write up the relevant distinction, as applied to CEV, perhaps as a discussion post? I don't know the terminology, but I expect that, given CEV's ambition to get a long way toward the normative stuff, the distinction becomes far less relevant than when you discuss human decision-making.
(Prompted by the reference you made in this comment.)
Did you read the original discussion post to which the linked comment is attached? I go into more detail there.
Yes, I read it, and it's still not clear. A recent discussion drew a connection with terminal/instrumental values, but it's not clear in what context they play a role.
I expect I could research this discussion in more detail and figure out what you meant, but that could be avoided, and the issue opened to a bigger audience, if you wrote, say, a two-paragraph self-contained summary. I wouldn't mention this if you hadn't attached some significance to it by giving it as an example in a recent comment.
I’m not sure what to say beyond what I said in the post. Which part is unclear?
In any case, it's kind of a moot point, because Eliezer said that it is a useful distinction to make; he just chose not to include it in his CEV paper because that paper doesn't go deep enough into the detailed problems of implementing CEV, where the distinction I made becomes particularly useful.