I don’t, but it is evidence that people disagree with the SIAI and think that there are more effective ways towards a positive Singularity.
Did you think that many LWers weren’t aware of this fact? I would have thought that everyone already knew...
Don’t forget that he once worked for the SIAI. If Michael Vassar were to leave the SIAI and start his own project, wouldn’t that be evidence about the SIAI?
I’m curious if you’ve seen this discussion, which occurred while Ben was still Research Director of SIAI.
ETA: I see that you commented in that thread several months after the initial discussion, so you must have read it. I suppose the problem from your perspective is that you can’t really distinguish between Eliezer and Ben. They each think their own approach to a positive Singularity is the best one, and think the other one is incompetent/harmless. You don’t know enough to judge the arguments on the object level. LW mostly favors Eliezer, but that might just be groupthink. I’m not really sure how to solve this problem, actually… anyone else have ideas?
Here’s an argument that may influence XiXiDu: people like Scott Aaronson and John Baez find Eliezer’s ideas worth discussing, while Ben doesn’t seem to have any ideas to discuss.
...while Ben doesn’t seem to have any ideas to discuss.
That an experimental approach is the way to go. I believe we don’t know enough about the nature of AGI to solely follow a theoretical approach right now. That is one of the most obvious shortcomings of the SIAI in my opinion.
Ben rarely seems short on ideas. For some recent ones, perhaps start with these:
The GOLEM Eats the Chinese Parent
Coherent Aggregated Volition
Or perhaps it could be that Ben is too busy actually developing and researching AI to spend time discussing them ad nauseam? I stopped following many mailing lists or communities like this because I don’t actually have time to argue in circles with people.
(But I make an exception when people start making up untruths about OpenCog.)
Even if they don’t want to discuss their insights “ad nauseam”, I need some indication that they have new insights. Otherwise they won’t be able to build AI. “Busy developing and researching” doesn’t look very promising from the outside, considering how many other groups present themselves the same way.
Ben is publishing several books (well, he’s already published several, but the already written “Building Better Minds” is due in early 2012, with a pop-sci version shortly thereafter; both are more current regarding OpenCog). I’ll be writing a “practical” guide to OpenCog once we reach our 1.0 release at the end of 2012.
Ben actually does quite a lot of writing, theorizing, and conference speaking, whereas I and a number of others are more concerned with the software development side of things.
We also have a wiki: http://wiki.opencog.org
What new insights are there?
Well, “new” is relative… so without knowing how familiar you are with OpenCog, I can’t comment.
New insights relative to the current state of academia. Many of us here are up-to-date with the relevant areas (or trying to be). I’m not sure what my knowledge of OpenCog has to do with anything, as I was asking for the benefit of all onlookers too.
Even if they don’t want to discuss their insights “ad nauseam”, I need some indication that they have new insights. Otherwise they won’t be able to build AI.
Evolution managed to do that without any capacity for having insights. It’s not out of the question that enough hard work without much understanding would suffice, particularly if you use the tools of mainstream AI (machine learning).
Also, just “success” is not something one would wish to support (success at exterminating humanity, say, is distinct from success in exterminating polio), so the query about which institution is more likely to succeed is seriously incomplete.
Unfortunately, there are only a few types of situations where it’s possible to operate successfully without an object level understanding—situations where you have a trustworthy authority to rely on, where you can apply trial and error, where you can use evolved instincts, or where the environment has already been made safe and predictable by someone who did have an object level understanding.
None of those would apply except relying on a trustworthy authority, but since no one has yet been able to demonstrate their ability to bring about a positive Singularity, XiXiDu is again stuck with difficult object level questions in deciding what basis he can use to decide who to trust.
I commented on a specific new comment there and didn’t read the thread. Do you think newbies are able to read thousands of comments?
Indeed, this possibility isn’t discussed enough.
I’m sympathetic to your position here, but precisely what kind of discussion about this possibility do you want more of?
Eliezer believes that building a superhuman intelligence is so dangerous that experimenting with it is irresponsible, and that instead some kind of theoretical/provable approach is necessary before you even get started.
The specific approach he has adopted is one built on a particular kind of decision theory, on the pervasive use of Bayes’ Theorem, and on the presumption that what humans value is so complicated that the best way to express it is by pointing at a bunch of humans and saying “There: that!”
SIAI is primarily populated by people who think Eliezer’s approach is a viable one.
Less Wrong is primarily populated by people who either think Eliezer’s strategy is compelling, or who don’t have building a superhuman intelligence as their primary focus in the first place.
People who have that as their primary focus and think that his strategy is a poor one go put their energy somewhere else that operates on a different strategy.
If he’s wrong, then he’ll fail, and SIAI will fail. If someone else has a different, viable, strategy, then that group will succeed. If nobody does, then nobody will.
This seems like exactly the way it’s supposed to work.
Sure, discussion is an important part of that. But Eliezer has written tens of thousands of words introducing his strategy and his reasons for finding it compelling, and hundreds of readers have written tens of thousands of words in response, arguing pro and con and presenting alternatives and clarifications and pointing out weaknesses.
I accept that you’re (as you say) unable to read thousands of comments, so you can’t know that, but in the nine months or so since I found this site I have read thousands of comments, so I do know it. (Obviously, you don’t have to believe me. But you shouldn’t expect to convince me that it’s false, either, or anyone else who has read the same material.)
I’m not saying it’s a solved problem… it’s not. It is entirely justifiable to read all of that and simply not be convinced. Many people are in that position.
I’m saying it’s unlikely that we will make further progress along those lines by having more of the same kind of discussion. To make further progress in that conversation, you don’t need more discussion, you need a different kind of discussion.
In the meantime: maybe this is a groupthink-ridden cult. If so, it has a remarkable willingness to tolerate folks like me, who are mostly indifferent to its primary tenets. And the conversation is good.
There’s a lot of us around. Maybe we’re the equivalent of agnostics who go to church services because we’re bored on Sunday mornings; I dunno. But if so, I’m actually OK with that.
I feel that people here are way too emotional. If you tell them, they’ll link you up to a sequence post on why being emotional can be a good thing. I feel that people here are not skeptical enough. If you tell them, they’ll link you up to a sequence post on why being skeptical can be a bad thing. I feel that people here take some possibilities too seriously. If you tell them, they’ll link you up... and so on. I might as well be talking to Yudkowsky only. And whenever there is someone else, some expert or otherwise smart guy, who doesn’t agree, he is either accused of not having read the sequences or of being below their standards.
Eliezer believes that building a superhuman intelligence is so dangerous that experimenting with it is irresponsible...
The whole ‘too dangerous’ argument is perfect for everything from not having to prove any coding or engineering skills, to dismissing openness and any kind of transparency, up to things I am not even allowed to talk about here.
If he’s wrong, then he’ll fail, and SIAI will fail. If someone else has a different, viable, strategy, then that group will succeed. If nobody does, then nobody will.
Here we get to the problem. I have no good arguments against all of what I have hinted at above, except that I have a strong gut feeling that something is wrong. So I’m trying to poke holes in it, trying to make the facade crumble. Why? Well, they are causing me distress by telling me all those things about how possible galactic civilizations depend on my money and yours. They are creating ethical dilemmas that make me feel committed to do something even though I’d really rather do something else. But before I do that I’ll first have to see if it holds water.
But Eliezer has written tens of thousands of words introducing his strategy and his reasons for finding it compelling...
Yup, I haven’t read most of the sequences, but I did a lot of spot tests and read what people linked me up to. I have yet to come across something novel. And I feel all that doesn’t really matter anyway. The basic argument is that high risks can outweigh low probabilities, correct? That’s basically the whole fortification for why I am supposed to bother, everything else just being a side note. And that is also where I feel (yes, gut feeling, no excuses here) something is wrong. I can’t judge it yet, maybe in 10 years when I’ve learnt enough math, especially probability. But currently it just sounds wrong. If I thought that there was a low probability that running the LHC was going to open an invasion door for a fleet of aliens interested in torturing mammals, then according to the aforementioned line of reasoning I could justify murdering a bunch of LHC scientists to prevent them from running the LHC. Everything else would be scope-insensitivity! Besides the obvious problems with that, I have a strong feeling that that line of reasoning is somehow bogus. I also don’t know jack shit about high-energy physics. And I feel Yudkowsky doesn’t know jack shit about intelligence (not that anyone else knows more about it). In other words, I feel we need to do more experiments first to understand what ‘intelligence’ is before asking people for their money to save the universe from paperclip maximizers.
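To make the line of reasoning I’m objecting to concrete, here is a minimal sketch of the expected-value comparison it seems to rest on. The numbers are entirely made up for illustration; they are nobody’s actual estimates:

    # Illustrative only: invented numbers for the "tiny probability, astronomical stakes" argument.
    p_catastrophe = 1e-9          # assumed probability of the feared outcome
    loss_if_catastrophe = 1e18    # assumed disutility if it happens (e.g. a lost galactic future)
    cost_of_prevention = 1e6      # assumed cost of acting now to prevent it

    expected_loss_if_ignored = p_catastrophe * loss_if_catastrophe   # = 1e9
    print(expected_loss_if_ignored > cost_of_prevention)             # True: the tiny-probability term still dominates

The shape of that comparison, rather than any particular numbers, is the part that feels wrong to me.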
See, I’m just someone who got dragged into something he thinks is bogus and doesn’t want to be a part of, but who nonetheless feels that he can’t ignore it either. So I’m just hoping it goes away if I try hard enough. How wrong and biased, huh? But I’m neither able to ignore it nor get myself to do something about it.
(nods) I see.
So, if you were simply skeptical about Yudkowsky/SIAI, you could dismiss them and walk away. But since you’re emotionally involved and feel like you have to make it all go away in order to feel better, that’s not an option for you.
The problem is, what you’re doing isn’t going to work for you either. You’re just setting yourself up for a rather pointless and bitter conflict.
Surely this isn’t a unique condition? I mean, there are plenty of groups out there who will tell you that there are various bad things that might happen if you don’t read their book, donate to their organization, etc., etc., and you don’t feel the emotional need to make them go away. You simply ignore them, or at least most of them.
How do you do that? Perhaps you can apply the same techniques here.
I managed to do that with the Jehovah’s Witnesses. I grew up being told that I have to tell people about the Jehovah’s Witnesses so that they will be saved. It is my responsibility. But this here is on a much more sophisticated level. It includes all the elements of organized religion mixed up with science and math. Incidentally, one of the first posts I read was Why Our Kind Can’t Cooperate:
The obvious wrong way to finish this thought is to say, “Let’s do what the Raelians do! Let’s add some nonsense to this meme!” [...]
When reading that I thought, “Wow, they are openly discussing what they are doing while dismissing it at the same time.” That post basically tells the story of how it all started.
So it’s probably not a good idea to cultivate a sense of violated entitlement at the thought that some other group, who you think ought to be inferior to you, has more money and followers.
‘Probably not’ my ass! :-)
Not that you could encounter that here ;-)
The respected leader speaks, and there comes a chorus of pure agreement: if there are any who harbor inward doubts, they keep them to themselves. So all the individual members of the audience see this atmosphere of pure agreement, and they feel more confident in the ideas presented—even if they, personally, harbored inward doubts, why, everyone else seems to agree with it.
But this here is on a much more sophisticated level.
It is astonishing how effective it can be to systematize a skill that I learn on a simple problem, and then apply the systematized skill to more sophisticated problems.
So, OK: your goal is to find a way to disconnect emotionally from Less Wrong and from SIAI, and you already have the experience of disconnecting emotionally from the Jehovah’s Witnesses. How did you disconnect from them? Was there a particular event that triggered the transition, or was it more of a gradual thing? Did it have to do with how they behaved, or with philosophical/intellectual opposition, or discovering a new social context, or something else...?
That sort of thing.
As for Why Our Kind Can’t Cooperate, etc. … (shrug). When I disagree with stuff or have doubts, I say so. Feel free to read through my first few months of comments here, if you want, and you’ll see plenty of that. And I see lots of other people doing the same.
I just don’t expect anyone to find what I say—whether in agreement or disagreement—more than peripherally interesting. It really isn’t about me.
And you have expressed that feeling most passionately.
Less Wrong ought to be about reasoning, as per Common Interest of Many Causes. Like you (I presume), I would like to see more posts about reasoning and fewer, despite my transhumanist sympathies, about boxed AIs, hypothetical torture scenarios, and the optimality of donating to the Friendly AI cause: focusing our efforts that way is more interesting, more broadly appealing, and ultimately more effective for everyone involved including the SIAI.
And I’d hazard a guess that the SIAI representatives here know that. A lot of people benefit from knowing how to think and act more effectively, full stop, but a site about improving reasoning skills that’s also an appendage to the SIAI party line limits its own effectiveness, and therefore its usefulness as a way of sharpening reasoning about AI (and, more cynically, as a source of smart and rational recruits), by being exclusionary. We’re doing a fair-to-middling job in that respect; we could definitely be doing a better one, if the above is a fair description of the intended topic according to the people who actually call the shots around here. That’s fine, and it does deserve further discussion.
But the topic of rationality isn’t at all well served by flogging criticisms of the SIAI viewpoint that have nothing to do with rationality, especially when they’re brought up out of the context of an existing SIAI discussion. Doing so might diminish perceived or actual groupthink re: galactic civilizations and your money, but it still lowers the signal-to-noise ratio, for the simple reason that the appealing qualities of this site are utterly indifferent to the pros and cons of dedicating your money to the Friendly AI cause except insofar as it serves as a case study in rational charity. Granted, there are signaling effects that might counter or overwhelm its usefulness as a case study, but the impression I get from talking to outsiders is that those are far from the most obvious or destructive signaling problems that the community exhibits.
Bottom line, I view the friendly AI topic as something between a historical quirk and a pet example among several of the higher-status people here, and I think you should too.
Like you (I presume), I would like to see more posts about reasoning and fewer, despite my transhumanist sympathies, about boxed AIs, hypothetical torture scenarios, and the optimality of donating to the Friendly AI cause: focusing our efforts that way is more interesting, more broadly appealing, and ultimately more effective for everyone involved including the SIAI.
Disagree on the “fewer” part. I’m not sure about SIAI, but I think at least my personal interests would not be better served by having fewer transhumanist posts. It might be a good idea to move such posts into a subforum though. (I think supporting such subforums was discussed in the past, but I don’t remember if it hasn’t been done due to lack of resources, or if there’s some downside to the idea.)
Fair enough. It ultimately comes down to whether tickling transhumanists’ brains wins us more than we’d gain from appearing that much more approachable to non-transhumanist rationalists, and there are enough unquantified values in that equation to leave room for disagreement. In a world where a magazine as poppy and mainstream as TIME likes to publish articles on the Singularity, I could easily be wrong.
I stand by my statements when it comes to SIAI-specific values, though.
I would like to see more posts about reasoning and fewer, despite my transhumanist sympathies, about boxed AIs, hypothetical torture scenarios, and the optimality of donating to the Friendly AI cause
One of these things is not like the others. One of these things is not about the topic which historically could not be named. One of them is just a building block that can sometimes be useful when discussing reasoning that involves decision making.
My objection to that one is slightly different, yes. But I think it does derive from the same considerations of vast utility/disutility that drive the historically forbidden topic, and is subject to some of the same pitfalls (as well as some others less relevant here).
There are also a few specific torture scenarios which are much more closely linked to the historically forbidden topic, and which come up, however obliquely, with remarkable frequency.
Hmm...
Roko’s Basilisk
Boxed AI trying to extort you
The “People Are Jerks” failure mode of CEV
I can’t think of any other possible examples off the top of my head. Were these the ones you were thinking of?
Also Pascal’s mugging (though I suppose how closely related that is to the historically forbidden topic depends on where you place the emphasis) and a few rarer variations, but you’ve hit the main ones.
This should be a top-level post, if only to maximize the proportion of LessWrongers that will read it.
Upvoted for complete agreement, particularly with the point that Less Wrong ought to be about reasoning, as per Common Interest of Many Causes.
Please do not downvote comments like the parent.