The thread about EY’s failure to make many falsifiable predictions is a better ad hominem
I meant to provide priors for the expected value of communication with SI. Sorry, that can’t be done in a non-ad-hominem way. There’s been a video or two where Eliezer was called the “world’s foremost expert on recursive self improvement”, which normally implies having made something self-improve.
The speculation about launching terrorist attacks on fab plants is a much more compelling display of potential risk to life and property.
Ahh right, I should have also linked this one. I see it was edited, replacing ‘we’ with ‘world government’ and ‘sabotage’ with ‘sanctions and military action’. BTW, that speculation is by gwern; is he working at SIAI?
What probability would you give to FinalState’s assertion of having a working AGI?
AGI is ill-defined. For something that would foom so as to pose a potential danger: infinitesimally small.
Ultimately: I think the risk to his safety is small and the payoff negligible, while the risk from his software is pretty much nonexistent.
It nonetheless results in significant presentation bias, whatever the cause.
My priors, for one thing, were way off in SI’s favour. My own cascade of updates was triggered by seeing Alexei say that he plans to make a computer game to make money to donate to SIAI. Before that, I had sort of assumed the AI discussions here were about some kind of infinitely powerful sci-fi superintelligence, not unlike Vinge’s Beyond: an intellectually pleasurable game of wits (I even participated a little once or twice, along the lines of how you can’t debug a superintelligence). I assumed that Eliezer had achievements from which he got the attitude (I sort of confused him with Hanson to some extent), and so on. I have since looked into it more carefully.
This usually happens when the person being introduced wasn’t consulted about the choice of introduction.