Ok, opinions on the relative merits of the AGI projects mentioned aside, you did not answer my first question, the one I am actually most interested in: where is the technical work? I was looking for some detail as to what part of step one you are working on. If TDT is important to your FAI, then how is the math coming? Are you updating LOGI, or are you discarding it and starting over?
“The arc of Less Wrong read start to finish should be sufficient for an intelligent person to discard existing AGI projects—once your “mysterious answer to mysterious question” detector is initialized and switched on, and so on—so I consider my work of explanation in that area to be pretty much done. Anything left is public relations, taking an existing explanation and making it more persuasive.”
Ok, this being said, where is your design? This reminds me of a movement in physics that wants to discard GR because it fails to explain some phenomena and because GR is part of the rift in physics. Of course, these people have nothing to replace GR with, so arguing that GR is not completely right is rather pointless until you have something to put in its place, GR not being totally wrong. That being said, how is your dismissal of the rest of AGI any better than that?
It’s easy enough to sit back, with no formal theories or in-progress AGI code out for public review, and say all these other AGI projects won’t work. Even if that is the case, it raises the question: where are your contributions, your code, your published papers? Without your formal work being out for public review, is it really fair to state that all the current AGI projects are essentially wrong-headed?
“So tell me, have you worked with anyone from DARPA (I have worked with DARPA) or Intel? Have you ever worked at a research organization with millions or billions of dollars to throw at R&D? If not, how can you be so sure?”
So I take it from the fact that you didn’t answer the question that you have in fact not worked for Intel or DARPA etc. That being said, I think a measure of humility is in order before you categorically dismiss them as minor players in FAI. Sorry if that sounds harsh, but there it is (I prefer to be blunt because it leaves no room for interpretation).
Sorry if that sounds harsh, but there it is (I prefer to be blunt because it leaves no room for interpretation).
Really, we get it. We don’t have automated signatures on this system, but we can all pretend that this is included in yours. All this serves to do is create a jarring discord between the quality of your claims and your presumption of status.
It’s easy enough to sit back, with no formal theories or in-progress AGI code out for public review, and say all these other AGI projects won’t work.
The hypothesis is that yes, they won’t work as steps towards FAI. Worse, they might actually backfire. And FAI progress is not as “impressive”. What do you expect should be done, given this conclusion? Continue running toward the abyss, just for the sake of preserving the appearance of productivity?
Without your formal work being out for public review, is it really fair to state that all the current AGI projects are essentially wrong-headed?
Truth-seeking is not about fairness.
Ok, this being said, where is your design? This reminds me of a movement in physics that wants to discard GR because it fails to explain some phenomena and because GR is part of the rift in physics. Of course, these people have nothing to replace GR with, so arguing that GR is not completely right is rather pointless until you have something to put in its place, GR not being totally wrong. That being said, how is your dismissal of the rest of AGI any better than that?
For this analogy to hold, there would need to be an existing, complete theory of AGI.
(There would also need to be something in the theory or proposed application analogous to “hey! We should make a black hole just outside our solar system because black holes are like way cool and powerful and stuff!”)
Ok, opinions on the relative merits of the AGI projects mentioned aside, you did not answer my first question, the one I am actually most interested in: where is the technical work? I was looking for some detail as to what part of step one you are working on. If TDT is important to your FAI, then how is the math coming? Are you updating LOGI, or are you discarding it and starting over?
These are good questions. Particularly the TDT one. Even if the answer happened to be “not that important”.
I was working on something related to TDT this summer, can’t be more specific than that. If I get any of the remaining problems in TDT nailed down beyond what was already presented, and it’s not classified, I’ll let y’all know. Writing up the math I’ve already mentioned with impressive Greek symbols so it can be published is lower priority than the rationality book.
LOGI’s out the window, of course, as anyone who’s read the arc of LW could very easily guess.
Writing up the math I’ve already mentioned with impressive Greek symbols so it can be published is lower priority than the rationality book.
I’m curious to know your reasoning behind this, if you can share it.
It seems to me that the publication of some high-quality technical papers would increase the chances of attracting and keeping the attention of one-in-a-million people like this much more than a rationality book would.
Thanks for the update. Hopefully one of the kids you invite to visit has a knack for translating into impressive and you can delegate.
Thank you, that’s all I wanted to know. You don’t have any math for TDT. TDT is just an idea, and that’s it, just like the rest of your AI work. It’s nothing more than nambi-pambi philosophical mumbo-jumbo… Well, I will spend my time reading people who have a chance of creating AGI or FAI, and it’s not you…
To sum up: you have nothing but some ideas for FAI, no theory, no math, and the best defense you have is that you don’t care about the academic community. The other key one is that you are the only person smart enough to make and understand FAI. This delusion is fueled by your LW followers.
The latest in lame excuses is this “classified” statement, which is total (being honest here) BS. Maybe if you had it protected under NDA, or a patent pending, but neither is the case. Therefore, since most LW people understanding the math is unlikely, the most probable conclusion is that you’re making excuses for your lack of due diligence in study and for not producing a single iota of a real theory.
Happy pretense of solving FAI… (hey, we should have a holiday)
Further comments: refer to the complaint department at 1-800-i-dont-care....
The problem is that even if nothing “impressive” is available at SIAI, there is no other source where something is. Nada. The only way to improve this situation is to work on the problem. Criticism would be constructive if you suggested a method of improving the situation, e.g. organizing a new team more likely than SIAI to get to FAI. Merely arguing about status won’t help to solve the problem.
You keep ignoring the distinction between AGI and FAI, which doesn’t add sanity to this conversation. You may disagree that there is a difference, but that’s distinct from implying that people who believe there is a difference should also act as if there is none. To address the latter, you must directly engage this disagreement.
Feed ye not the trolls. No point in putting further comments underneath anything that’s been voted down under −2.
That comment was intended as the last one if m. doesn’t suddenly turn reasonable, and was written more for the benefit of lurkers (since this topic isn’t frequently discussed).
As a curiosity, having one defector in a group who is visibly socially penalized is actually a positive influence on those who witness it (as distinct from having a significant minority, which is a negative influence). I expect this to be particularly the case when the troll is unable to provoke a similarly childish response.
“As a curiosity, having one defector in a group who is visibly socially penalized is actually a positive influence on those who witness it (as distinct from having a significant minority, which is a negative influence). I expect this to be particularly the case when the troll is unable to provoke a similarly childish response.”
Wow, I say one negative thing and all of a sudden I am a troll.
Let’s consider the argument behind my comment:
Premises:
Has EY ever constructed an AI of any form: FAI, AGI, or narrow AI?
Does EY have any degrees in any relevant fields regarding FAI?
Is EY backed by a large, well-funded research organization?
Could EY get a technical job at such an organization?
Does EY have a team of respected experts helping him make FAI?
Does EY have a long list of technical, math- and algorithm-rich publications in any area regarding FAI?
Has EY ever published a single math paper in, for example, a real math journal like AMS?
Has he published findings on FAI in something like IEEE?
The answer to each of these questions is no.
The final question to consider is: if EY’s primary goal is to create FAI first, then why is he spending most of his time blogging and working on a book on rationality (which would never be taken seriously outside of LW)?
Answer: this is counter to his stated goal.
So, with all the answers in the negative, what hope should anyone here hold for EY making FAI? Answer: zero, zilch, none, zip...
If you have evidence to the contrary, for example proof that not all the answers to the above questions are no, then please share it… otherwise I rest my case. If you come back with this lame troll response, I will consider my case proven, closed, and done. Oh, and to be clear, I have no doubt I will fail to sway anyone from the LW/EY worship cult, but the exercise is useful for other reasons.
Has EY ever constructed an AI of any form: FAI, AGI, or narrow AI?
Nobody has done the first two (fortunately). I am not sure if he has created a narrow AI. I have; it took me a few years to realise that the whole subfield I was working in was utter bullshit. I don’t disrespect anyone else for reaching the same conclusion.
Does EY have any degrees in any relevant fields regarding FAI?
He can borrow mine. I don’t need to make any paper planes any time soon and I have found ways to earn cash without earning the approval of any HR guys.
Is EY backed by a large, well-funded research organization?
No.
Could EY get a technical job at such an organization?
He probably lacks the humility. Apart from that, probably yes if you gave him a year.
Does EY have a team of respected experts helping him make FAI?
There are experts in FAI?
Does EY have a long list of technical, math- and algorithm-rich publications in any area regarding FAI?
I would like to see some of those. Not the algorithm-rich ones (that’d be a bad sign indeed), but the math ones, certainly. I’m not sure I would be comfortable with your definition of ‘rich’ either.
Has EY ever published a single math paper in, for example, a real math journal like AMS? Has he published findings on FAI in something like IEEE?
No. No. Both relevant.
Indeed, they are very relevant. As far as I can tell, Eliezer’s job description is “blogger”. He is, indeed, brilliant at it, but I haven’t seen evidence that he’s done anything else of value. As for TDT, everyone here ought to remember the rule for academic research: if it’s not published, it doesn’t count.
Which is why I don’t fault anyone for accusing Eliezer of not having done anything—because they’re right.
As for TDT, everyone here ought to remember the rule for academic research: if it’s not published, it doesn’t count.
It takes some hard work against my natural inclinations not to walk out on academia entirely so long as they have a rule like that. By nature and personality I have an extremely low tolerance for tribal inbreeding rules and dimwit status games. If it were anything other than Academia—a corporation with the same rule about its own internal journals, say—I would just shrug and write them off as idiots whose elaborate little games I have no interest in playing, and go off to find someone else who can scrape up some interest in reality rather than appearances.
As it stands I’m happy to see the non-direct-FAI-research end of SIAI trying to publish papers, but it seems to me that the direct-FAI-research end should have a pretty hard-and-fast rule of not wasting time, and of dealing with reality rather than appearances. That sort of thing isn’t a one-off decision rule, it’s a lifestyle and a habit of thinking. For myself I’m not sure that I lose much from dealing only with academics who are willing to lower themselves to discuss a blog post. Sure, I must be losing something that I would gain if I magically, by surgical intervention, gained a PhD. Sure, there are people who are in fact smart who won’t in fact deal with me. But there are other smart people who are more relaxed and more grounded than that, so why not just deal with them instead?
There is no such clear-cut general rule wherein you are required to publish your results in a special ritual form to make an impact (in most cases, it’s merely a bureaucratic formality in the funding/hiring process; history abounds with informally distributed works that were built upon; it’s just that in most cases good works are also published, ’cause “why not?”). There is a simple need for a clear, self-contained explanation that other people can understand, without devoting a special research project to figuring out what you meant and hunting down notes in the margins. It’s the same reason you are writing a book. Once a good explanation is prepared, there are usually ways to also “publish” it.
I agree with Nesov and can offer a personal example here. I have a crypto design that was only “published” to a mailing list and on my homepage, and it still got eighty-some citations according to Google Scholar.
Also, just because you (Eliezer) don’t like playing status games, doesn’t mean it’s not rational to play them. I hate status games too, but I can get away with ignoring them since I can work on things that interest me without needing external funding. Your plans, on the other hand, depend on donors, and most potential donors aren’t AI or decision theory experts. What do they have to go on except status? What Nesov calls “a bureaucratic formality in the funding/hiring process” is actually a human approximation to group rationality, I think.
That’s fair enough. In which case I can only answer that some things are higher-priority than others, which is why I’m writing that book and not a TDT paper.
I empathize rather strongly with the position you are taking here. Even so, I am very much looking forward to seeing (for example) a published TDT. I don’t particularly care whether it is published in blog format or in a journal. There are reasons for doing so that are not status related.
What’s the difference between a real artist and a poseur?
Artists ship.
And you, Eliezer Yudkowsky, haven’t shipped.
Until you publish something, you haven’t really done anything. Your “timeless decision theory”, for example, isn’t even published on a web page. It’s vaporware. Until you actually write down your ideas, you really can’t call yourself a scientist, any more than someone who hasn’t published a story can claim the title of author. If you get hit by a bus tomorrow, what great work will you have left behind? Is there something in the SIAI vault that I don’t know about, or is it all locked up in that head of yours where nobody can get to it? I don’t expect you to magically produce an FAI out of your hat, but any advance that isn’t written down might as well not exist, for all the good it will do.
Eh? TDT was explained in enough detail for Dai and some others to get it. It might not make sense to a lay audience but any philosophically competent fellow who’s read the referenced books could reconstruct TDT out of Ingredients of Timeless Decision Theory.
I don’t understand your concept of “shipping”. There are many things I want to understand, some I understand already, a few of those that I’ve gone so far as to explain for the sake of people who are actually interested in them, and anything beyond that falls under the heading of PR and publicity.
Not to put too fine a point on it, but I find that no matter how much I do, the people who previously told me that I hadn’t yet achieved it, find something else that I haven’t yet achieved to focus on. First it’s “show you can invent something new”, and then when you invent it, “show you can get it published in a journal”, and if my priority schedule ever gets to the point I can do that, I have no doubt that the same sort of people will turn around and say “Anyone can publish a paper, where are the prominent scholars who support you?” and after that they will say “Does the whole field agree with you?” I have no personal taste for any part of this endless sequence except the part where I actually figure something out. TDT is rare in that I can talk about it openly and it looks like other people are actually making progress on it.
TDT was explained in enough detail for Dai and some others to get it.
It’s explained in enough detail for me to get an intuitive understanding of it, and to obtain some inspirations and research ideas to follow up. But it’s not enough for me to try to find flaws in it. I think that should be the standard of detail in scientific publication: the description must be detailed enough that if the described idea or research were to have a flaw, then a reader would be able to find it from the description.
It might not make sense to a lay audience but any philosophically competent fellow who’s read the referenced books could reconstruct TDT out of Ingredients of Timeless Decision Theory.
Ok, but what if TDT is flawed? In that case, whoever is trying to reconstruct TDT would just get stuck somewhere before they got to a coherent theory, unless they recreated the same flaw by coincidence. If they do get stuck, how can they know or convince you that it’s your fault, and not theirs? Unless they have super high motivation and trust in you, they’ll just give up and do something else, or never attempt the reconstruction in the first place.
I already know it’s got a couple of flaws (see the “Problems I Can’t Solve” post; you solved one of them). The “Ingredients” page should let someone get as far as I got, no further, if they had all the standard published background knowledge that I had.
The theory has two main formal parts that I know how to formalize. One is the “decision diagonal”, and I wrote that out as an equation. It contains a black box, but I haven’t finished formalizing that black box either! The other main part that needs formalizing is the causal network. Judea Pearl wrote all this up in great detail; why should I write it again? There’s an amendment of the causal network to include logical uncertainty. I can describe this in the same intuitive way that CDT theorists took for granted when they were having their counterfactual distributions fall out of the sky as manna from heaven, but I don’t know how to give it a Pearl-grade formalization.
Hear me well! If I wanted to look impressive, I could certainly attach Greek symbols to key concepts—just like the classical causal decision theorists did in order to make CDT look much more formalized than it actually was. This is status-seeking and self-deception, and it got in the way of their noticing what work they had left to do. It was a mistake for them to pretend to formality that way. It is part of the explanation for how they bogged down. I don’t intend to make the same mistake.
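For concreteness, here is a minimal sketch of the general shape a “decision diagonal” of this kind might take, assuming a Pearl-style causal network augmented with a logical node A standing for “the output of this very decision computation”. The notation (the node name A, the surgery operator do, the outcome space O) is a reader’s guess for illustration, not Eliezer’s formalism:

```latex
% Reader's sketch, not Eliezer's published formalism. Assumes a causal
% network augmented with a logical node A standing for "the output of
% this very decision computation", plus a surgery operator do(.) on it.
\[
  a^{*} \;=\; \arg\max_{a \in \mathcal{A}}
  \sum_{o \in \mathcal{O}} U(o)\,
  P\bigl(o \mid \mathrm{do}(A = a)\bigr)
\]
% The black box is P itself: how the network should propagate logical
% uncertainty about what the computation A outputs is exactly the part
% described above as still lacking a Pearl-grade formalization.
```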
It’s explained in enough detail for me to get an intuitive understanding of it, and to obtain some inspirations and research ideas to follow up. But it’s not enough for me to try to find flaws in it. I think that should be the standard of detail in scientific publication: the description must be detailed enough that if the described idea or research were to have a flaw, then a reader would be able to find it from the description.
This is where I get stuck. I can get an intuitive understanding of it easily enough. In fact, I got a reasonable intuitive understanding of it just from observing application to problem cases. But I know I don’t have enough to go on to find flaws. I would have to do quite a lot of further background research to construct the difficult parts of the theory and I know that even then I would not be able to fully trust my own reasoning without dedicating several years to related fields.
Basically, it would be easier for me to verify a completed theory if I just created it myself from the premise “a decision theory shouldn’t be bloody stupid”. That way I wouldn’t have to second guess someone else’s reasoning.
Since I know I do not have the alliances necessary to get a commensurate status pay-off for any work I put into such research, that probably isn’t the best way to satisfy my curiosity. Ricardo would suggest that the most practical approach would be for me to spend my time leveraging my existing position to earn cash and make a donation earmarked for ‘getting someone to finish the TDT theory’.
All right, then.
“Not to put too fine a point on it, but I find that no matter how much I do, the people who previously told me that I hadn’t yet achieved it, find something else that I haven’t yet achieved to focus on.”
Such is the price of being an innovator or claiming innovation...
“First it’s ‘show you can invent something new’, and then when you invent it, ‘show you can get it published in a journal’, and if my priority schedule ever gets to the point I can do that, I have no doubt that the same sort of people will turn around and say ‘Anyone can publish a paper, where are the prominent scholars who support you?’”
Sure, but you have not invented a decision theory, TDT being the example, until you have math to back it up. Decision theory is a mathematical theory, not just some philosophical ideas. What is more, thanks to programs like Mathematica there are easy ways to post equations online. For example, put “\[Nu] Derivative[2][w][\[Nu]] + 2 Derivative[1][w][\[Nu]] + ArcCos[z]^2 \[Nu] w[\[Nu]] == 0 /; w[\[Nu]] == Subscript[c, 1] GegenbauerC[\[Nu], z] + Subscript[c, 2] (1/\[Nu]) ChebyshevU[\[Nu], z]” into Mathematica and presto.
Further, publication of the theory is a necessary part of getting it accepted, be that good or bad. Not only that, it helps in formalizing one’s ideas, which is a positive, especially when working with other people and trying to explain what you are doing.
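An aside for anyone who tries this: the bracketed [Nu] tokens in the comment above are Mathematica’s \[Nu] escapes with the backslashes stripped in transit. A minimal sketch of the suggested workflow, using the built-in TeXForm on the quoted equation rewritten in prime notation (nothing here is Eliezer’s work):

```mathematica
(* Minimal sketch of "posting equations online" from Mathematica.
   The equation is the one quoted above, with \[Nu] escapes restored;
   TeXForm turns it into LaTeX that can be pasted into a web page. *)
eq = \[Nu] w''[\[Nu]] + 2 w'[\[Nu]] + ArcCos[z]^2 \[Nu] w[\[Nu]] == 0;
TeXForm[eq]
(* produces LaTeX along the lines of:
   \nu w''(\nu) + 2 w'(\nu) + \cos^{-1}(z)^2 \nu w(\nu) = 0 *)
```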
“and after that they will say “Does the whole field agree with you?” I have no personal taste for any part of this endless sequence except the part where I actually figure something out. TDT is rare in that I can talk about it openly and it looks like other people are actually making progress on it.”
There are huge areas of non-FAI-specific work, and people whose help would be of value: for example, knowledge representation, embodiment (virtual or real), and sensory stimulus recognition… Each of these will need work to make FAI practical, and there are people who can help you and who probably know more about those specific areas than you do.
Because LaTeX has already been done.
Zero, zilch, none and zip are not probabilities but the one I would assign is rather low. (Here is where ‘shut up and do the impossible’ fits in.)
PS: Is it acceptable to respond to trolls when the post is voted up to (2 - my vote)?
How am I a troll? Did I not make a valid point? Have I not made other valid points? You may disagree with how I say something, but that in no way labels me a troll.
The intention of my comment was to find out what the hope for EY’s FAI goals is based on here. I was trying to make the point, with the zero, zilch idea, that the faith in EY making FAI is essentially blind faith.
The intention of my comment was to find out what the hope for EY’s FAI goals is based on here. I was trying to make the point, with the zero, zilch idea, that the faith in EY making FAI is essentially blind faith.
I am not sure who here has faith in EY making FAI. In fact, I don’t even recall EY claiming a high probability of such a success.
Agreed. As I recall, EY posted at one point that prior to thinking about existential risks and FAI, his conception of an adequate life goal was moving the Singularity up an hour. Sure doesn’t sound like he anticipates single-handedly making an FAI.
At best, he will make major progress toward a framework for friendliness. And in that aspect he is rather a specialist.
Agreed. I don’t know anyone at SIAI or FHI so absurdly overconfident as to expect to avert existential risk that would otherwise be fatal. The relevant question is whether their efforts, or supporting efforts, do more to reduce risk than alternative uses of their time or that of supporters.
You may disagree with how I say something, but that in no way labels me a troll.
I’m not so sure. You don’t seem to be downvoted for criticizing Eliezer’s strategy or sparse publication record: you got upvoted earlier, as did CronoDAS for making similar points. But the hostile and belligerent tone of many of your comments does come off as kind of, well, trollish.
Incidentally, I can’t help but notice that the subject and style of your writing are remarkably similar to those of DS3618. Is that just a coincidence?
Not to mention mormon1 and psycho.
The same complaints and vitriol about Eliezer and LW, unsupported claims of technical experience convenient to conversational gambits (CMU graduate degree with no undergrad degree, AI and DARPA experience), and support for Intelligent Design creationism.
Plus sadly false claims of being done with Less Wrong because of his contempt for its participants.
Responding to both Zack and Tiredoftrolls:
The similarity between DS3618’s posts and mine is coincidental. As for mormon1 or psycho: also coincidental. The fact that I have done work with DARPA in no way connects me, unless you suppose only one person has ever worked with DARPA, nor does AI connect me.
For Tiredoftrolls specifically:
The fact that you are blithely unaware of the possibility, and the reality, of being smart enough to do a PhD without undergrad work is not my concern. Railing against EY and his lack of math is something more people here should do. I do not agree now, nor have I ever agreed, with ID or creationism or whatever you want to call that tripe.
To head off the obvious question of why “mormon2”: because “mormon” and “mormon1” were not available or didn’t work. I thought about “mormonpreacher” but decided against it.
Bullshit. Note, if the names aren’t evidence enough, the same misspelling of “namby-pamby” here and here.
I propose banning.