Richard Loosemore is a professor of mathematics with about twenty publications in refereed journals on artificial intelligence.
I was at an AI conference—it may have been the 2009 AGI conference in Virginia—where Selmer Bringsjord gave a talk explaining why he believed that, in order to build “safe” artificial intelligences, it was necessary to encode their goal systems in formal logic so that we could predict and control their behavior. It had much in common with your approach. After his talk, a lot of people in the audience, including myself, were shaking their heads in dismay at Selmer’s apparent ignorance of everything in AI since 1985. Richard got up and schooled him hard, in his usual undiplomatic way, in the many reasons why his approach was hopeless. You could’ve benefited from being there. Michael Vassar was there; you can ask him about it.
AFAIK, Richard is one of only two people who have taken the time to critique your FAI + CEV ideas, who have decades of experience trying to codify English statements into formal representations, building them into AI systems, turning them on, and seeing what happens. The other is me. (Ben Goertzel has the experience, but I don’t think he’s interested in your specific computational approach as much as in higher-level futurist issues.) You have declared both of us to be not worth talking to.
In your excellent fan-fiction Harry Potter and the Methods of Rationality, one of your themes is the difficulty of knowing whether you’re becoming a Dark Lord when you’re much smarter than almost everyone else. When you spend your time on a forum that you control and that is built around your personal charisma, moderated by votes that you are not responsible for, but that you know will side with you in aggregate unless you step very far over the line, and you write off as irredeemable the two people you should listen to most, that’s one of the signs. When you have entrenched beliefs that are suspiciously convenient to your particular circumstances, such as that academic credentials should not adjust your priors, that’s another.
http://citeseer.ist.psu.edu/search?q=author%3A%28richard+loosemore%29&sort=cite&t=doc
Don’t see ’em. Citation needed.
At the point where he was kicked off SL4, he was claiming to be an experienced cognitive scientist who knew all about the conjunction fallacy, which was obviously false.
MathSciNet doesn’t list any publications for Loosemore. If one extends outside the area of math into a slightly broader area, though, he does have some substantial publications. However, if one looks at the list given above, the number that are on AI issues seems to be much smaller than 20. But the basic point is sound: he is a subject matter expert.
I see a bunch of papers about consciousness. I clicked on a random other paper about dyslexia and neural nets and found no math in it. Where is his theorem?
Also, I once attended a non-AGI, mainstream AI conference, which happened to be at Stanford, and found that the people there unfortunately did not seem all that bright compared to those who, e.g., work at hedge funds. I have much respect for mainstream machine learning, but the average practitioner who attends such conferences is, apparently, a good deal below the level of the greats. If this is the level of ‘subject matter expert’ we are talking about, then I feel very little hesitation about labeling one perhaps non-representative example as an idiot. Even if he really is a ‘math professor’ at some tiny college (whose publications contain no theorems?), he can still happen to be a permanent idiot. It would not be all that odd. The level of social authority we are talking about is not great, even on the scales of those impressed by such things.
I recently opened a book on how to write fiction and was unpleasantly surprised by how useless it seemed; most books on how to write fiction are surprisingly good (for some odd reason, writers are much better able to communicate their knowledge than many other people who try to write how-to books). Checking the author’s bibliography showed that the author was an English professor at some tiny college who’d never actually written any fiction. How dare I contradict them and call their book useless, when I’m not a professor at any college? Well… (Lesson learned: libraries have good books on how to write, but a how-to-write book that shows up in the used bookstore may be unwanted for a reason.)
I didn’t assert he was a mathematician, and indeed that was part of my point when I said he had no MathSciNet-listed publications. But he does have publications about AI.
It seems very much that both you and Loosemore are letting your personal animosity cloud your judgement. I by and large think Loosemore is wrong about many of the AI issues under discussion here, but that discussion should occur, and having it derailed by emotional issues from a series of disagreements on a mailing list years ago is almost the exact opposite of rationality.
http://lesswrong.com/lw/yq/wise_pretensions_v0/
It had much in common with your approach. After his talk, a lot of people in the audience, including myself, were shaking their heads in dismay at Selmer’s apparent ignorance of everything in AI since 1985. Richard got up and schooled him hard, in his usual undiplomatic way, in the many reasons why his approach was hopeless.
Which are?
(Not asking for a complete and thorough reproduction, which I realize is outside the scope of a comment, just some pointers or an abridged version. Mostly I wonder which arguments you lend the most credence to.)
Edit: Having read the discussion on “nothing is mere”, I retract my question. There’s such a thing as arguments disqualifying someone from any further discourse on a given topic:
As a result, the machine is able to state, quite categorically, that it will now do something that it KNOWS to be inconsistent with its past behavior, that it KNOWS to be the result of a design flaw, that it KNOWS will have drastic consequences of the sort that it has always made the greatest effort to avoid, and that it KNOWS could be avoided by the simple expedient of turning itself off to allow for a small operating system update … and yet in spite of knowing all these things, and confessing quite openly to the logical incoherence of saying one thing and doing another, it is going to go right ahead and follow this bizarre consequence in its programming.
… yes? Unless the ghost in the machine saves it … from itself!