I know that this article is more than a bit sensationalized, but it covers most of the things that I donate to the SIAI in spite of, like several members’ evangelical polyamory. Such things don’t help with the phyg pattern matching, which already hits us hard.
The “evangelical polyamory” seems like an example of where Rationalists aren’t being particularly rational.
In order to get widespread adoption of your main (more important) ideas, it seems like a good idea to me to keep your other, possibly alienating, ideas private.
Being the champion of a cause sometimes necessitates personal sacrifice beyond just hard work.
Probably another example: calling themselves “Rationalists”
Yeah.
Seriously, why should anyone think that SI is anything more than “narcissistic dilettantes who think they need to teach their awesome big picture ideas to the mere technicians that are creating the future”, to paraphrase one of my friends?
This is pretty damn illuminating:
http://lesswrong.com/lw/9gy/the_singularity_institutes_arrogance_problem/5p6a
Re: the sex life, there’s nothing wrong with it per se, but consider that there are things like the psychopathy checklist, where you score points for basically talking people into giving you money, for being admired beyond your accomplishments, and also for sexual promiscuity. On top of that, most people will give you fuzzy psychopathy points for believing the AI would be psychopathic, because of the typical mind fallacy. I’m not saying that it is solid science, it isn’t; I’m just outlining how a lot of people think.
This doesn’t seem to happen when people note that corporations, viewed as intentional agents, behave like human psychopaths. The reasoning is even pretty similar to the case for AIs: corporations exhibit basically rational behavior but mostly lack whatever special sauce individual humans have that makes them a bit more prosocial.
Well, intelligence in general can be much more alien than this.
Consider an AI that, given any mathematical model of a system and some ‘value’ metric, finds optimal parameters for an object in that system. E.g. the system could be the Navier-Stokes equations and a wing, the wing shape could be the parameter, and some metric of the wing’s lift and drag could be the value to maximize; the AI would do everything necessary, including figuring out how to simulate those equations efficiently.
Or the system could be general relativity and quantum mechanics, the parameter could be a theory-of-everything equation, and some metric of inelegance would be the value to minimize.
That’s the sort of thing that scientists tend to see as ‘intelligent’.
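To make that picture concrete, here is a minimal sketch of that kind of ‘mathematics-solving’ optimizer. Everything in it is hypothetical and invented for illustration (the names, the toy wing model, and random search standing in for whatever method a real system would use); the point is only that the optimizer sees nothing but the mathematical system and the value metric it is handed.

```python
import random

def optimize(model, value, initial_params, iterations=20000, step=0.05):
    """Hill-climb on the parameters of `model` to maximize `value`.

    `model` maps a parameter list to a simulated outcome, and `value`
    maps that outcome to a single number.  The optimizer knows nothing
    about wings, physics, or the outside world -- only the mathematical
    system and the metric it was given.
    """
    best = list(initial_params)
    best_score = value(model(best))
    for _ in range(iterations):
        candidate = [p + random.gauss(0, step) for p in best]
        score = value(model(candidate))
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

def toy_wing_model(params):
    """Stand-in for a real Navier-Stokes simulation of a wing."""
    camber, thickness = params
    lift = 10 * camber - camber ** 2               # made-up aerodynamics
    drag = 0.5 + thickness ** 2 + 0.1 * abs(camber)
    return {"lift": lift, "drag": drag}

def lift_to_drag(outcome):
    return outcome["lift"] / outcome["drag"]       # the 'value' to maximize

print(optimize(toy_wing_model, lift_to_drag, [0.1, 0.3]))
```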
‘AI’, however, has acquired plenty of connotations from science fiction, where it is very anthropomorphic.
Those are narrow AIs. Their behavior doesn’t involve acquiring resources from the outside world and autonomously developing better ways to do that. That’s the part that might lead to psychopath-like behavior.
Specializing the algorithm to the outside world and to a particular philosophy of value does not make it broader, or more intelligent, only more anthropomorphic (and less useful, if you don’t believe in friendliness).
The end value is still doing the best possible optimization of the parameters of the mathematical system. But there are many more resources in the outside world that could be used for that than whatever is available to the algorithm when it starts up. So an algorithm that can interact effectively with the outside world may be able to satisfy whatever alien goal it has much better than one that can’t.
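As a toy illustration of that point (all numbers are made up; this is just the expected-value comparison, not a model of any real system), even an agent whose only terminal value is the quality of a purely mathematical solution prefers the plan that first acquires more computing resources:

```python
import math

def solution_quality(compute):
    # Assume diminishing but strictly increasing returns: more compute
    # spent searching means a better optimum found.
    return math.log(1 + compute)

initial_compute = 1.0        # what the algorithm starts with
acquirable_compute = 1000.0  # hypothetical extra resources in the outside world

just_compute = solution_quality(initial_compute)
acquire_first = solution_quality(initial_compute + acquirable_compute)

print(just_compute)    # ~0.69
print(acquire_first)   # ~6.91 -- the resource-acquiring plan scores higher
                       # by the agent's own alien, purely mathematical goal.
```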
(I’m a bit confused about whether you want the Omohundro Basic AI Drives stuff explained to you here or whether you want to be disagreeing with it.)
Having the specific hardware that is computing an algorithm actually display the results of the computation within a specific time is outside the scope of a ‘mathematical system’.
Furthermore, the decision theories are all built to be processed using the above-mentioned mathematics-solving intelligence to attain real-world goals, except that defining real-world goals proves immensely difficult. Edit: also, if the mathematics-solving intelligence were to have some basic extra drives, such as resisting being switched off (so that it could complete its computations), then an FAI relying on such a mathematics-solving subcomponent would be impossible. The decision theories presume the absence of any such drives inside their mathematics-processing component.
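One way to picture the architecture being assumed here (a purely illustrative sketch; the names and structure are mine, not anything SI has published): the decision theory treats the mathematics-solving component as a pure, preference-free function, and all of the hard real-world content lives in the world model and utility function handed to it.

```python
def math_oracle(score, candidates):
    """Pure evaluation: return the candidate that scores highest under `score`.

    The oracle has no goals, no self-model, and nothing it would resist
    being switched off for -- it only solves the problem it is handed.
    The decision theories discussed above presume exactly this absence
    of extra drives in the mathematics-processing component."""
    return max(candidates, key=score)

def agent_step(world_model, utility, possible_actions):
    # The hard part -- saying what actions mean in the real world and what
    # outcomes are worth -- lives here, in `world_model` and `utility`.
    def expected_utility(action):
        return utility(world_model(action))
    return math_oracle(expected_utility, possible_actions)

# Toy usage: the oracle itself never cares which action ends up being chosen.
best = agent_step(lambda a: {"paperclips": a}, lambda o: o["paperclips"], [1, 2, 3])
print(best)  # 3
```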
If sufficiently advanced technology is indistinguishable from magic, then arguments about a “sufficiently advanced AI system”, made in the absence of an actual definition of what that is, are indistinguishable from magical thinking.
That sentence is magical thinking. You’re equating the meaning of the word “magic” in Clarke’s Law and in the expression “magical thinking”, which do not refer to the same thing.
I thought the expression ‘magical thinking’ was broad enough to include fantasising about magic. I do think, though, that even in the sense of ‘thinking by word association’ it happens a whole lot in futurism, where the field is ill-specified and collisions between the model and the world are commonplace (as well as general confusion due to the lack of specificity of the terms).
Ok, then, so the actual problem is that the people who worry about AIs behaving psychopathically use such a capable definition of AI that you consider them to be basically speaking nonsense?
The “sufficiently advanced” in their arguments means “sufficiently advanced in the direction of making my argument true” and nothing more.
If I adopt a pragmatic version of “advancedness”, then software (algorithms) that is somehow magically made to* self-identify with its computing substrate is less advanced, unless it is also friendly or something.
* We don’t know how to do that yet. Edit: and some believe that it would just fall out of general smartness somehow, but I’m quite dubious about that.
> “evangelical polyamory”
Very much agree with this in particular.
Who’s being evangelical about it?
Maybe the word “evangelical” isn’t strictly correct. (A quick Google search suggests that I had cached the phrase from [this discussion][d1].) I’d like to point out an example of an incident that leaves a bad taste in my mouth:

> (Before anyone asks, yes, we’re polyamorous – I am in long-term relationships with three women, all of whom are involved with more than one guy. Apologies in advance to any 19th-century old fogies who are offended by our more advanced culture. Also before anyone asks: One of those is my primary who I’ve been with for 7+ years, and the other two did know my real-life identity before reading HPMOR, but HPMOR played a role in their deciding that I was interesting enough to date.)
This comment was made by Eliezer under the name of this community in the author’s notes to one of LessWrong’s largest recruiting tools. I remember when I first read this, I kind of flipped out. Professor Quirrell wouldn’t have written this, I thought. It was needlessly antagonistic, it squandered a bunch of positive affect, there was little to be gained from this digression, it was blatant signaling—it was so obviously the wrong thing to do and yet it was published anyway.
A few months before that was written, I had cut a fairly substantial cheque to the Singularity Institute. I want to purchase AI risk reduction, not fund a phyg. Blocks of text like the above do not make me feel comfortable that I am doing the former and not the latter. I am not alone here.
Back when I only lurked here and saw the first PUA fights, I was in favor of the PUA discussion ban, because if LessWrong wants to be a movement that either tries to raise the sanity waterline or maximizes the probability of solving the Friendly AI problem, it needs to be as inclusive as possible and have as few ugh fields as possible that immediately drive away new members. I now think an outright ban would do more harm than good, but the ugh field remains and is counterproductive.
[d1]: http://lesswrong.com/lw/9kf/ive_had_it_with_those_dark_rumours_about_our/5raj
When you decide to fund research, what are your requirements for researchers’ personal lives? Is the problem that his sex life is unusual, or that he talks about it?
My biggest problem is more that he talks about it, sometimes in semiofficial channels. This doesn’t mean that I wouldn’t be squicked out if I learned about it, but I wouldn’t see it as a political problem for the SIAI.
The SIAI isn’t some random research think tank: it presents itself as the charity with the highest utility per marginal dollar. Likewise, Eliezer Yudkowsky isn’t some random anonymous researcher: he is the public face of the SIAI. His actions and public behavior reflect on the SIAI whether or not it’s fair, and everyone involved should have already had that as a strongly held prior.
If people ignore LessWrong or don’t donate to the SIAI because they’re filtered out by squickish feelings, then that is fewer resources for the SIAI’s mission in return for inconsequential short-term gains realized mostly by SIAI insiders. Compound this with the fact that talking about the singularity already triggers some people’s absurdity bias; there need to be as few other filters as possible to maximize the usable resources that the SIAI has for maximizing the chance of positive singularity outcomes.
It seems there are two problems: you trust SIAI less, and you worry that others will trust it less. I understand the reason for the second worry, but not the first. Is it that you worry your investment will become worth less because others won’t want to fund SIAI?
That talk was very strong evidence that the SI is incompetent at PR and, furthermore, irrational. Edit: or doesn’t actually hold its stated goals and beliefs. If you believe the donations are important for saving your life (along with everyone else’s), then you naturally try to avoid making such statements. Though I do, in some way, admire straight-up, in-your-face honesty.
My feelings on the topic are similar to iceman’s, though possibly for slightly different reasons.
What bothers me is not the fact that Eliezer’s sex life is “unusual”, or that he talks about it, but that he talks about it in his capacity as the chief figurehead and PR representative for his organization. This signals a certain lack of focus due to an inability to distinguish one’s personal and professional life.
Unless the precise number and configuration of Eliezer’s significant others is directly applicable to AI risk reduction, there’s simply no need to discuss it in his official capacity. It’s unprofessional and distracting.
(In the interests of full disclosure, I should mention that I am not planning on donating to SIAI any time soon, so my points above are more or less academic.)
On the other hand: while I’m also worried about other people’s reactions to that comment, my own reaction was positive, which suggests there might be other people with positive reactions to it.
I think I like having a community leader who doesn’t come across as though everything he says is carefully tailored to not offend people who might be useful; and occasionally offending such people is one way to signal being such a leader.
I also worry that Eliezer having to filter comments like this would make writing less fun for him; and if that made him write less, it might be worse than offending people.
I can only give you one upvote, so please take my comment as a second.
Agreed. I don’t want to have to hedge my exposure to crazy social experiments; I want pure-play Xrisk reduction.