You admit you have not done much to make it easy to show them your reasons. You have not written up your key arguments in a compact form, using standard style and terminology, and submitted them to standard journals. You also admit you have not contacted any of them to ask for their reasons; Horvitz would have to “show up” for you to listen to him. This looks a lot like a status pissing contest; the obvious interpretation is that since you think you are better than them, you won’t ask them for their reasons, and you won’t make it easy for them to understand your reasons, as that would admit they are higher status. They will instead have to acknowledge your higher status by coming to you and doing things your way. And of course they won’t, since by ordinary standards they have higher status. So you ensure there will be no conversation, and with no conversation you can invoke your “traditional” (non-Bayesian) rationality standard to declare that you have no need to consider their opinions.
You’re being slightly silly. I simply don’t expect them to pay any attention to me one way or another. As it stands, if e.g. Horvitz showed up and asked questions, I’d immediately direct him to http://singinst.org/AIRisk.pdf (the chapter I did for Bostrom), and then take out whatever time was needed to collect the OB/LW posts in our discussion into a sequence with summaries. Since I don’t expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven’t, well, expended a huge amount of effort to get it.
FYI, I’ve talked with Peter Norvig a bit. He was mostly interested in the CEV / FAI-spec part of the problem—I don’t think we discussed hard takeoffs much per se. I certainly wouldn’t have brushed him off if he’d started asking!
“and then take out whatever time was needed to collect the OB/LW posts in our discussion into a sequence with summaries.”
Why? No one in the academic community would spend that much time reading all that blog material for answers that would be better given concisely in a published academic paper. So why not spend the time? Unless you think you are so much of an expert in the field that you do not need the academic community. If that is the case, where are your publications, where are your credentials, where is the proof of this expertise (“expert” being a term applied on the basis of actual knowledge and accomplishments)?
“Since I don’t expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven’t, well, expended a huge amount of effort to get it.”
Why? If you expect to make FAI you will undoubtedly need help from people in the academic community, unless you plan to do this whole project by yourself or with purely amateur help. I think you would admit that in its current form SIAI has zero probability of creating FAI first. That being said, your best hope is to convince others that the cause is worthwhile, and if that is the case you are looking at the professional and academic AI community.
I am sorry, I prefer to be blunt... that way there is no mistaking meanings...
No.
That ‘probably not even then’ part is significant.
Now that is an interesting question. To what extent would Eliezer say that conclusion followed? Certainly less than the implied ‘1’ and probably more than ‘0’ too.
“Since I don’t expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven’t, well, expended a huge amount of effort to get it.
Why? If you expect to make FAI you will undoubtedly need help from people in the academic community, unless you plan to do this whole project by yourself or with purely amateur help. …”
“That ‘probably not even then’ part is significant.”
My implication was that the idea that he can create FAI completely outside the academic or professional world is ridiculous when you are speaking from an organization like SIAI, which does not have the people or the money to get the job done. In fact, SIAI doesn’t have enough money to pay for the computing hardware to build human-level AI.
“Now that is an interesting question. To what extent would Eliezer say that conclusion followed? Certainly less than the implied ‘1’ and probably more than ‘0’ too.”
If he doesn’t agree with it now, I am sure he will when he runs into the problem of not having the money to build his AI, or not having enough time in the day to solve the problems associated with constructing it. That is not even mentioning the fact that when you close yourself off to outside influence that much, you often end up with ideas riddled with problems that someone on the outside would have pointed out, had they looked at the idea.
If you have never taken an idea from concept to product, this can be hard to understand.
And so the utter difference of working assumptions is revealed.
Back-of-a-napkin math:
10^4 neurons per supercomputer
10^11 neurons per brain
10^7 supercomputers per brain
1.3*10^6 dollars per supercomputer
1.3*10^13 dollars per brain
Edit: Disclaimer: Edit: NOT!
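For anyone who wants to check the arithmetic, here is a minimal sketch in Python. The per-supercomputer neuron count and price are simply the figures quoted in this thread (attributed below to the video), not independently verified estimates; the brain figure is the usual ~10^11-neuron ballpark.

```python
# Back-of-the-napkin check of the figures above.
# Inputs are the numbers quoted in the thread, not verified estimates.

neurons_per_supercomputer = 1e4     # simulated neurons per supercomputer (quoted figure)
neurons_per_brain = 1e11            # rough neuron count of a human brain
dollars_per_supercomputer = 1.3e6   # price per supercomputer (quoted figure)

supercomputers_per_brain = neurons_per_brain / neurons_per_supercomputer
dollars_per_brain = supercomputers_per_brain * dollars_per_supercomputer

print(f"Supercomputers per brain: {supercomputers_per_brain:.0e}")  # ~1e+07
print(f"Dollars per brain:        {dollars_per_brain:.1e}")         # ~1.3e+13
```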
Another difference in working assumptions.
It’s a fact stated by the guy in the video, not an assumption.
No need to disclaim, your figures are sound enough and I took them as a demonstration of another rather significant difference between the assumptions of Eliezer and mormon2 (or mormon2’s sources).
I have. I have also failed to take other ideas to product, so I agree with that part of your position, just not with the argument as it applies in this context.
If there is a status pissing contest, they started it! ;-)
“On the latter, some panelists believe that the AAAI study was held amidst a perception of urgency by non-experts (e.g., a book and a forthcoming movie titled “The Singularity is Near”), and focus of attention, expectation, and concern growing among the general population.”
Agree with them that there is much scaremongering going on in the field—but disagree with them about there not being much chance of an intelligence explosion.
I wondered why these folk got so much press. My guess is that the media probably thought the “AAAI Presidential Panel on Long-Term AI Futures” had something to do with a report commissioned indirectly for the country’s president. In fact it just refers to the president of their organisation. A media-savvy move, though it probably represents deliberately misleading information.