“Disregarding the fact that deleting a top level post is as easy as deleting a comment...how do you know this is his reason?”
Because he has done it in the past.
“it being a top-level post instead of Open Thread comment. Probably would’ve been a lot more forgiving if it’d been an Open Thread comment. . .”
Since I am already disliked, let's just say it: the reason EY would prefer my post in the comments section of an open thread is two-fold: 1.) it can easily be deleted if he doesn't like it; 2.) since I happen to be the exemplar here and most of you don't like me (or don't like being unwitting subjects of social experiments), you would quickly vote my post down to the point where the only way to find it would be to search my profile for it, meaning the post would go nowhere.
Maybe read a bit more carefully:
“I just wanted to see if anyone here could actually look past that (being the issues like spelling, grammar and tone etc.), specifically EY, and post some honest answers to the questions”
I apologize for rippling your pond.
“If not, I am not interested in what you think SIAI donors think.”
I never claimed to know what SIAI donors think; I asked you to think about that. But I think the fact that SIAI has as little money as it does after all these years speaks volumes about SIAI.
“Given your other behavior, ”
Why? Because I ask questions whose honest answers you don't like? Or is it because I don't blindly hang on your every word?
“I’m also not interested in any statements on your part that you might donate if only circumstances were X. Experience tells me better.”
I never claimed I would donate, nor will I ever as long as I live. As for experience telling you better: you have none, and considering SIAI's lack of money and your arrogance you probably never will, so I will keep my own counsel on that part.
“If you are previously a donor to SIAI, I’ll be happy to answer you elsewhere.”
Why, because you don't want to disrupt the LW image of Eliezer the genius? Or is it because you really are distracted, as I suspect, or have given up because you cannot solve the problem of FAI (another good possibility)? These questions are simple and easy to answer, and I see no real reason you can't answer them here and now. If you find the answers embarrassing, then change; if not, then what have you got to lose?
If your next response is as feeble as the last ones have been, don't bother posting it for my sake. You claim you want to be a rationalist; so try applying reason to your own actions and answer the questions asked honestly.
I am going to respond to the overall direction of your responses.
That is feeble, and for those who don't understand why, let me explain.
Eliezer works for SIAI which is a non-profit where his pay depends on donations. Many people on LW are interested in SIAI and some even donate to SIAI, others potentially could donate. When your pay depends on convincing people that your work is worthwhile it is always worth justifying what you are doing. This becomes even more important when it looks like you’re distracted from what you are being paid to do. (If you ever work with a VC and their money you’ll know what I mean.)
When it comes to ensuring that SIAI continues to pay you, especially when you are its FAI researcher, justifying why you are writing a book on rationality, which in no way solves FAI, becomes extremely important.
EY, ask yourself this: what percentage of the people who are interested in SIAI and donate are interested in FAI? Then ask what percentage are interested in rationality with no clear plan for how that gets to FAI. If the first figure is greater than the second, then you have a big problem, because one could interpret the use of your time writing this book on rationality as wasting donated money, unless there is a clear reason why rationality books get you to FAI.
P.S. If you want to educate people to help you out as someone speculated you’d be better off teaching them computer science and mathematics.
Remember, my post drew no conclusions; so, Yvain, I have cast no stones. I merely ask questions.
Responding to both Zack and Tiredoftrolls:
The similarity between DS3618's posts and mine is coincidental. As for mormon1 or psycho, also coincidental. The fact that I have done work with DARPA in no way connects me, unless you suppose only one person has ever worked with DARPA; nor does AI connect me.
For Tiredoftrolls specifically: the fact that you are blithely unaware of the possibility, and the reality, of being smart enough to do a PhD without undergrad work is not my concern. Railing against EY and his lack of math is something more people here should do. I do not agree now, nor have I ever agreed, with ID or creationism or whatever you want to call that tripe.
To head off the obvious question: why mormon2? Because mormon and mormon1 were not available or didn't work. I thought about mormonpreacher but decided against it.
“Not to put too fine a point on it, but I find that no matter how much I do, the people who previously told me that I hadn’t yet achieved it, find something else that I haven’t yet achieved to focus on.”
Such is the price of being an innovator or claiming innovation...
“First it's ‘show you can invent something new’, and then when you invent it, ‘show you can get it published in a journal’, and if my priority schedule ever gets to the point I can do that, I have no doubt that the same sort of people will turn around and say ‘Anyone can publish a paper, where are the prominent scholars who support you?’”
Sure, but you have not invented a decision theory, to take the example of TDT, until you have math to back it up. Decision theory is a mathematical theory, not just a set of philosophical ideas. What is more, thanks to programs like Mathematica there are easy ways to post equations online. For example, put “\[Nu] Derivative[2][w][\[Nu]] + 2 Derivative[1][w][\[Nu]] + ArcCos[z]^2 \[Nu] w[\[Nu]] == 0 /; w[\[Nu]] == Subscript[c, 1] GegenbauerC[\[Nu], z] + Subscript[c, 2] (1/\[Nu]) ChebyshevU[\[Nu], z]” into Mathematica and presto. Further, publication of the theory is a necessary part of getting the theory accepted, be that good or bad. Not only that, but it helps in formalizing one's ideas, which is positive, especially when working with other people and trying to explain what you are doing.
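(For readers without Mathematica: that snippet appears to be the Wolfram documentation example for GegenbauerC. Rendered in LaTeX, on my reading, and assuming C_ν(z) and U_ν(z) denote GegenbauerC[ν, z] and ChebyshevU[ν, z], it says roughly:

    \nu\, w''(\nu) + 2\, w'(\nu) + (\arccos z)^2\, \nu\, w(\nu) = 0,
    \qquad w(\nu) = c_1\, C_\nu(z) + c_2\, \frac{U_\nu(z)}{\nu}.

That is, the displayed ODE in ν is satisfied by that combination of Gegenbauer and Chebyshev functions, which is exactly the kind of thing that is easy to post online.)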
“and after that they will say “Does the whole field agree with you?” I have no personal taste for any part of this endless sequence except the part where I actually figure something out. TDT is rare in that I can talk about it openly and it looks like other people are actually making progress on it.”
There are huge areas of non-FAI-specific work, and people whose help would be of value: for example, knowledge representation, embodiment (virtual or real), and sensory stimulus recognition… Each of these will need work to make FAI practical, and there are people who can help you and who probably know more about those specific areas than you do.
How am I a troll? Did I not make a valid point? Have I not made other valid points? You may disagree with how I say something, but that in no way makes me a troll.
The intention of my comment was to find out what the hope for EY's FAI goals is based on here. I was trying to make the point, with the zero, zilch idea, that the faith in EY making FAI is essentially blind faith.
“As a curiosity, having one defector in a group who is visibly socially penalized is actually a positive influence on those who witness it (as distinct from having a significant minority, which is a negative influence.) I expect this to be particularly the case when the troll is unable to invoke a similarly childish response.”
Wow, I say one negative thing and all of a sudden I am a troll.
Let’s consider the argument behind my comment:
Premises:
- Has EY ever constructed AI of any form: FAI, AGI, or narrow AI?
- Does EY have any degrees in any relevant fields regarding FAI?
- Is EY backed by a large, well-funded research organization?
- Could EY get a technical job at such an organization?
- Does EY have a team of respected experts helping him make FAI?
- Does EY have a long list of technical, math- and algorithm-rich publications on any area regarding FAI?
- Has EY ever published a single math paper in a real math journal, for example one published by the AMS?
- Has he published findings on FAI in something like an IEEE venue?
The answer to each of these questions is no.
The final question to consider is: if EY's primary goal is to be the first to create FAI, then why is he spending most of his time blogging and working on a book on rationality (which would never be taken seriously outside of LW)?
Answer: this is counter to his stated goal.
So, with all the answers being in the negative, what hope should anyone here hold for EY making FAI? Answer: zero, zilch, none, zip...
If you have evidence to the contrary, for example proof that not all the answers to the above questions are no, then please present it; otherwise I rest my case. If you come back with this lame troll response, I will consider my case proven, closed, and done. Oh, and to be clear: I have no doubt that I will fail to sway anyone from the LW/EY worship cult, but the exercise is useful for other reasons.
Thank you, that's all I wanted to know. You don't have any math for TDT. TDT is just an idea and that's it, just like the rest of your AI work. It's nothing more than namby-pamby philosophical mumbo-jumbo… Well, I will spend my time reading people who have a chance of creating AGI or FAI, and it's not you…
To sum up: you have nothing but some ideas for FAI, no theory, no math, and the best defense you have is that you don't care about the academic community. The other key one is that you are the only person smart enough to make and understand FAI. This delusion is fueled by your LW followers.
The latest in lame excuses is this "classified" statement, which is total (being honest here) BS. Maybe if you had it protected under NDA, or a patent pending, but neither is the case. Therefore, since most LW people understanding the math is unlikely, the most probable conclusion is that you're making excuses for your lack of due diligence in study and for failing to produce a single iota of a real theory.
Happy pretense of solving FAI… (hey, we should have a holiday)
Further comments: refer to the complaint department at 1-800-I-DONT-CARE…
Ok, opinions on the relative merits of the AGI projects mentioned aside, you did not answer my first question, the question whose answer I am actually most interested in: where is the technical work? I was looking for some detail as to what part of step one you are working on. So if TDT is important to your FAI, how is the math coming? Are you updating LOGI, or are you discarding it and starting over?
“The arc of Less Wrong read start to finish should be sufficient for an intelligent person to discard existing AGI projects—once your “mysterious answer to mysterious question” detector is initialized and switched on, and so on—so I consider my work of explanation in that area to be pretty much done. Anything left is public relations, taking an existing explanation and making it more persuasive.”
Ok, this being said, where is your design? This reminds me of a movement in physics that wants to discard GR because it fails to explain some phenomena and is part of the rift in physics. Of course these people have nothing to replace GR with, so the fact that you can argue that GR is not completely right is a bit pointless until you have something to replace it with, GR not being totally wrong. That being said, how is your dismissal of the rest of AGI any better than that?
It's easy enough to sit back, with no formal theories or in-progress AGI code out for public review, and say all these other AGI projects won't work. Even if that is the case, it raises the question: where are your contributions, your code, your published papers, etc.? Without your formal work being out for public review, is it really fair to claim that essentially all the current AGI projects are wrong-headed?
“So tell me have you worked with anyone from DARPA (I have worked with DARPA) or Intel? Have you ever work at a research organization with millions or billions of dollars to throw at R&D? If not how can you be so sure?”
So I take it from the fact that you didn't answer the question that you have in fact not worked for Intel or DARPA etc. That being said, I think a measure of humility is in order before you categorically dismiss them as being minor players in FAI. Sorry if that sounds harsh, but there it is (I prefer to be blunt because it leaves no room for interpretation).
“That’s my end of the problem.”
Ok, so where are you in the process? Where is the math for TDT? Where is the updated version of LOGI?
“Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don’t exist, and so I look for proof of relevant talent and learning rate.”
So tell me, have you worked with anyone from DARPA (I have worked with DARPA) or Intel? Have you ever worked at a research organization with millions or billions of dollars to throw at R&D? If not, how can you be so sure?
“Most people who consider this problem do not realize the degree to which it is sheerly impossible to put up a job ad for the skills you need. LW itself is probably as close as it gets.”
If that's the case, why does Ben Goertzel have a company working on AGI, the very problem you're trying to solve? Why does he actually have a design, and some portions implemented, while you have no portions implemented? What about all the other AGI work being done, like LIDA, SOAR, and whatever Peter Voss calls his AGI project? Are all of those just misguided, since I would imagine they hire the people who work on those projects?
Just an aside for some posters above this post who have been talking about Java as the superior choice to C++: what planet do you come from? Java is slower than C++ because of all the overhead of running the code on the JVM. You are much better off with C++ or Ct or some other language like that without all the overhead, especially since one can use OpenCL or CUDA to take advantage of the GPU for more computing power.
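To make the GPU point concrete, here is a minimal CUDA sketch: a toy vector-add of my own for illustration, not anyone's AGI code, and the kernel name vecAdd and the sizes are made up:

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Each GPU thread adds one pair of elements.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                 // one million floats
        const size_t bytes = n * sizeof(float);
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        float *da, *db, *dc;                   // device buffers
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        const int threads = 256;               // threads per block
        const int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("hc[0] = %f\n", hc[0]);         // expect 3.0
        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

The point is only that pushing a data-parallel loop onto the GPU is a few dozen lines of C++-style code with no runtime between you and the hardware, which is exactly the kind of control Java does not give you.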
Is it just me, or does this seem a bit backwards? SIAI is trying to make FAI, yet so much of its time is spent on the risks and benefits of an FAI that doesn't exist. For a task that is estimated to be so dangerous and so world-changing, would it not behoove SIAI to be the first to make FAI? If that is the case, then I am a bit confused as to the strategy SIAI is employing to accomplish the goal of FAI.
Also, if FAI is the primary goal here, then it seems to me that one should be looking not at LessWrong but at gathering people from places like Google, Intel, IBM, and DARPA… Why would you choose to pull from a predominantly amateur talent pool like LW (sorry to say that, but there it is)?
I am going to take a shortcut and respond to both posts:
komponisto: Interesting, because I would define success in terms of the goals you set for yourself, or that others have set for you, and how well you have met those goals.
In terms of respect, I would question the claim, not within SIAI or within this community necessarily, but within the larger community of experts in the AI field. How many people really know who he is? How many of the people who need to know him (because, even if he won't admit it, EY will need help from academia and industry to make FAI) know him and, more importantly, respect his opinion?
ABranco: I would not say success is a personal measure; I would say in many ways it's defined by the culture. For example, in America I think it's fair to say that many would associate wealth and possessions with success. This may or may not be right, but it cannot be ignored.
I think your last point is on the right track, with EY starting SIAI and LessWrong without formal education. Though one could argue about the relative significance, or the level of success, those two things indicate.
“You’ve achieved a high level of success as a self-learner, without the aid of formal education.”
How do you define high level of success?
I recommend some reading. Start with http://en.wikipedia.org/wiki/Quantum_computer and then, if you want more detail, look at http://arxiv.org/pdf/quant-ph/9812037v1. The math isn't too difficult if you are familiar with the math involved in QM: vectors, matrices, etc. I also skimmed http://www.fxpal.com/publications/FXPAL-PR-07-396.pdf; it seems worth a read.
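To give a flavor of the math in question (my own illustration, not drawn from those links): a qubit state is just a unit vector in C², and a gate such as the Hadamard is a 2×2 unitary matrix acting on it:

    \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
    \qquad \lvert \alpha \rvert^2 + \lvert \beta \rvert^2 = 1,
    \qquad H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
    \qquad H \lvert 0 \rangle = \frac{\lvert 0 \rangle + \lvert 1 \rangle}{\sqrt{2}}.

If vectors and matrices at this level are comfortable, the linked papers are within reach.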
As to the author of the post to whom you're responding: what is your level of knowledge of quantum computing and quantum mechanics? By this I mean, is your reading on the topic confined to Scientific American and what Eliezer has written, or have you read, for example, Bohm on quantum theory?
“In what contexts is the action you mention worth performing?”
If the paper were endorsed by the top minds who support the singularity; ideally, if it were written by them. Take, for example, Ray Kurzweil: whether you agree with him or not, he is a big voice for the singularity.
“Why are “critics” a relevant concern?”
Because technical science moves forward through peer review and the proving and disproving of hypotheses. Critics help prevent the circle-jerk phenomenon in science, assuming their critiques are well thought out. And because outside review can sometimes see fatal flaws in ideas that are not necessarily caught by those who work in the field.
“In my perception, normal technical science doesn’t progress by criticism, it works by improving on some of existing work and forgetting the rest. New developments allow to see some old publications as uninteresting or wrong.”
Have you ever published in a peer-reviewed journal? If not, I will ignore the last portion of your post; if so, perhaps you could expound on it a bit more.
“Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it.”
“Why? If you expect to make FAI you will undoubtedly need people in the academic community's help; unless you plan to do this whole project by yourself or with purely amateur help. …”
“That ‘probably not even then’ part is significant.”
My implication was that the idea that he can create FAI completely outside the academic or professional world is ridiculous when you're speaking from an organization like SIAI, which does not have the people or the money to get the job done. In fact, SIAI doesn't have enough money to pay for the computing hardware to build human-level AI.
“Now that is an interesting question. To what extent would Eliezer say that conclusion followed? Certainly less than the implied ‘1’ and probably more than ‘0’ too.”
If he doesn't agree with it now, I am sure he will when he runs into the problem of not having the money to build his AI, or not having enough time in the day to solve the problems associated with constructing it. Not to mention that when you close yourself off to outside influence that much, you often end up with ideas riddled with problems that someone on the outside would have pointed out, had they looked at the idea.
If you have never taken an idea from concept to product, this can be hard to understand.
“How would you act if you were Eliezer?”
If I made claims of having a TDT, I would post the math. I would publish papers. I would be sure I had accomplishments to back up the authority with which I speak. I would not spend a single second blogging about rationality. If I used a blog, it would be to discuss the current status of my AI work and to have a select group of intelligent people read and comment on it. If I thought FAI was that important, I would be spending as much time as possible finding the best people to work with, and would never resort to a blog to attract the right sort of people (I cite LW as evidence of the failure of blogging to attract the right people).
Oh, and for the record, I would never start a non-profit to do FAI research. I would also do away with the Singularity Summit and replace it with more AGI conferences. I would also do away with most of SIAI's programs and replace them, and the money they cost, with researchers and scientists along with some devoted angel funders.