Why has this comment been downvoted so much? It’s well-written and makes some good points. I find it really disheartening every time I come on here to find that a community of “rationalists” is so quick to muffle anyone who disagrees with LW collective opinion.
It’s been downvoted—I guess—because it sits on the wrong side of a very interesting dynamic: what I call the “outside view dismissal” or “outside view attack”. It goes like this:
A: From the outside, far too many groups discover that their supported cause is the best donation avenue. Therefore, be skeptical of any group advocating their preferred cause as the best donation avenue.
B: Ah, but this group tries, to the best of their objective abilities, to determine the best donation avenue, and their cause has independently come out as the best donation avenue. You might say we prefer it because it’s the best, not the other way around.
A: From the outside, far too many groups claim to prefer it because it’s the best and not the other way around. Therefore, be skeptical of any group claiming they prefer a cause because it is the best.
B: Ah, but this group has spent a huge amount of time and effort training themselves to be good at determining what is best, and an equal amount of time training themselves to notice common failure modes like reversing causal flows because it looks better.
A: From the outside, far too many groups claim such training for it to be true. Therefore, be skeptical of any group making that claim.
B: Ah, but this group is well aware of that possibility; we specifically started from the outside view and used evidence to update properly to the level of these claims.
A: From the outside, far too many groups claim to have started skeptical and been convinced by evidence for it to be true. Therefore, be skeptical of any group making that claim.
B: No, we really, truly, did start out skeptical, and we really, truly, did get convinced by the evidence.
A: From the outside, far too many people claim they really did weigh the evidence for it to be true. Therefore, be skeptical of any person claiming to have really weighed the evidence.
B: Fine, you know what? Here’s the evidence, look at it yourself. You already know you’re starting from the position of maximum skepticism.
A: From the outside, there are far too many ‘convince even a skeptic’ collections of evidence for them all to be true. Therefore, I am suspicious that this collection might be indoctrination, not evidence.
And so on.
The problem is that the outside view is used not just to set a good prior, but also to discount any and all evidence presented to support a higher inside view. This is the opposite of an epistemically unreachable position: an epistemically stuck position, a flawed one, because you can’t get anywhere from there. But try explaining that idea to A. Dollars to donuts you’ll get:
A: From the outside, far too many people accuse me of having a flawed or epistemically stuck position. Therefore, be skeptical of anyone making such an accusation.
And I am sure many people on LessWrong have had this discussion (probably in the form of ‘oh yeah? lots of people think they’re right and they’re wrong’ → ‘lots of people claim to work harder at being right too and they’re wrong’ → ‘lots of people resort to statistics and objective measurements that have probably been fudged to support their position’ → ‘lots of people claim they haven’t fudged when they have’ and so on), and I am sure that the downvoted comment pattern-matches the beginning of such a discussion.
Where is the evidence?
All of the evidence that an AI is possible¹, followed by the best method of setting your prior for the behavior of an AI².
¹. Our brains are a proof of concept. That a lump of flesh can be intelligent means AI is possible, even under pessimistic assumptions: even if it means simulating a brain with atomic precision, with enough computing power to run the simulation at faster than one second per second. Your pessimism would have to reach “the human brain is irreducible” before you could disagree with this proof, at which point you’d have neurobiologists pointing out that you’re wrong.
². Which would be a uniform distribution over all possible points in relevant-thing-space, in this case mindspace.
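Spelled out, under the simplifying assumption (mine, purely for illustration) that mindspace can be treated as a finite set M, a uniform prior puts equal weight on every mind, so the prior weight of any region of mindspace, such as a hypothetical set F of minds whose behavior is friendly, is just its relative size:

$$P(m) = \frac{1}{|M|} \;\; \text{for each } m \in M, \qquad P(F) = \frac{|F|}{|M|} \;\; \text{for any } F \subseteq M.$$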
Just to clarify: are you asserting that this comment, and the associated post about the size of mindspace, represent the “convince even a skeptic” collection of evidence you were alluding to in its grandparent (which XiXiDu quotes)?
Or was there a conversational disconnect somewhere along the line?
I didn’t provide all of the evidence that an AI is possible, just one strong piece. All the evidence, plus a good prior for how likely the AI is to turn us into more useful matter, should be enough to convince even a skeptic. However, the brain-as-proof-of-concept idea is really strong: try and formulate an argument against that position.
Unless they’re a skeptic like A above, or a “UFAI-denier” (in the style of climate change deniers) posing as a skeptic, or they privilege what they want to believe over what they ought to believe. There are probably half a dozen more failure modes I haven’t spotted.
Sounds like a conversational disconnect to me, then: at least, going back through the sequence of comments, it seems the sequence began with an expression of skepticism of the claim that “a donation to the Singularity Institute is the most efficient charitable investment,” and ended with a presentation of an argument that UFAI is both possible and more likely than FAI.
Thanks for clarifying.
Just to pre-emptively avoid being misunderstood myself, since I have stepped into what may well be a minefield of overinterpretation, let me state some of my own related beliefs: I consider human-level, human-produced AGI possible (confidence level ~1) within the next century (C ~.85-.99, depending on just what “human-level” means and assuming we continue to work on the problem), likely not within the next 30 years (C<.15-.5, depending as above). I consider self-improving AGI and associated FOOM, given human-level AGI, a great big question mark: I’d say >99% of HLAGIs we develop will be architected in such a way that significant self-improvement is unlikely (much as our own architectures make it unlikely for us), but the important question is whether the actual number of exceptions is 0 or 1, and I have no confidence in my intuitions about that (see my comments elsewhere about expected results based on small probabilities of large magnitudes). I consider UFAI given self-improving AGI practically a certainty: >99% of SIAGIs will be UFAIs, and again the important question is whether the number of exceptions is 0 or 1, and whether the exception comes first. (The same thing is true about non-SI AGIs, but I care about that less.) Whether SIAI can influence that last question at all, and if so by how much and in what direction, I haven’t a clue about; if I wanted to develop an opinion about that I’d have to look into what SIAI actually does day-to-day.
If any of that is symptomatic of fallacy, I’d appreciate having it pointed out, though of course nobody is under any obligation to do so.
There’s an argument chain I didn’t make clear: “If UFAI is both more possible and more likely than FAI, then influencing this in favour of FAI is a critical goal” and “SIAI is the most effective charity working towards this goal”.
The only part I would inquire about is
I’d say >99% of HLAGIs we develop will be architected in such a way that significant self-improvement is unlikely (much as our own architectures make it unlikely for us),
Humans don’t have the ability to self-modify (at least, our neuroscience is too underdeveloped to count for that yet), but AGIs will probably be made from explicit programming code, and will probably have some level of command over programming code (it seems like one of the ways in which they would be expected to interact with the world, creating code that achieves their goals). So their architecture is more conducive to self-modification (and hence self-improvement) than ours is.
Of course, a more developed point is that humans are very likely to build a fixed AGI if they can. If you’re making that point, and not that AGIs simply won’t self-improve, then I see no issues.
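As a trivial, hypothetical illustration of that asymmetry (the file-reading sketch below is mine, not anything an actual AGI design commits to): a program can read its own source as ordinary data, which is the first step toward rewriting it, whereas a brain has no comparable handle on its own wiring.

```python
import pathlib

# Minimal sketch: a program reading its own source code as plain data.
# Writing a modified copy back out (not done here) would be the actual
# self-modification step being discussed above.
source = pathlib.Path(__file__).read_text()
print(f"I am {len(source)} characters of editable code.")
```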
Re: argument chain… I agree that those claims are salient.
Observations that differentially support those claims are also salient, of course, which is what I understood XiXiDu to be asking for, which is why I asked you initially to clarify what you thought you were providing.
Re: self-improvement… I agree that AGIs will be better-suited to modify code than humans are to modify neurons, both in terms of physical access and in terms of a functional understanding of what that code does.
I also think that if humans did have the equivalent ability to mess with their own neurons, >99% of us would either wirehead or accidentally self-lobotomize rather than successfully self-optimize.
I don’t think the reason for that is primarily in how difficult human brains are to optimize, because humans are also pretty dreadful at optimizing systems other than human brains. I think the problem is primarily in how bad human brains are at optimizing. (While still being way better at it than their competition.)
That is, the reasons have to do with our patterns of cognition and behavior, which are as much a part of our architecture as is the fact that our fingers can’t rewire our neural circuits.
Of course, maybe human-level AGIs would be way way better at this than humans would. But if so, it wouldn’t be just because they can write their own cognitive substrate, it would also be because their patterns of cognition and behavior were better suited for self-optimization.
I’m curious as to your estimate of what % of HLAGIs will successfully self-improve?
I’m curious as to your estimate of what % of HLAGIs will successfully self-improve?
I guess all AGIs that aren’t explicitly forbidden to will self-modify (75%); self-modification will mostly start with a backup (code has this option) (95%), and maybe half the methods of backup/compare will approve improvements and throw out undesirable changes.
So 35% will self-improve successfully. I also estimate that humans will keep making AGIs until they get one that self-improves.
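Making the arithmetic explicit, under the simplifying assumption that those three estimates are independent; the propose/evaluate functions in the sketch are hypothetical placeholders, not a claim about how a real AGI would do this:

```python
# The ~35% figure, assuming the three estimates above are independent.
p_not_forbidden = 0.75   # AGIs not explicitly forbidden from self-modifying
p_backup_first  = 0.95   # self-modification starts from a backup/compare step
p_keeps_gains   = 0.50   # the backup/compare step approves net improvements

print(p_not_forbidden * p_backup_first * p_keeps_gains)  # 0.35625, i.e. roughly 35%


def attempt_self_modification(current, propose, evaluate):
    """Toy sketch of the backup/compare pattern described above: keep a
    backup, trial a candidate modification, and keep whichever version
    evaluates better (throwing out undesirable changes)."""
    backup = current
    candidate = propose(current)
    return candidate if evaluate(candidate) >= evaluate(backup) else backup
```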
I guess all AGIs that aren’t explicitly forbidden to will self-modify (75%); self-modification will mostly start with a backup (code has this option) (95%), and maybe half the methods of backup/compare will approve improvements and throw out undesirable changes.
Really? This seems to ignore that certain structures will have a lot of trouble self-modifying. For example, consider an AI that is hard-encoded on a silicon chip with a fixed amount of RAM. Unless it is already very clever, there’s no way it can self-improve.
This actually illustrates nicely some issues with the whole notion of “self-improving.”
Suppose Sally is an AI on a hard-encoded silicon chip with fixed RAM. One day Sally is given the job of establishing a policy to control resource allocation at the Irrelevant Outputs factory, and concludes that the most efficient mechanism for doing so is to implement in software on the IO network the same algorithms that its own silicon chip implements in hardware, so it does so.
The program Sally just wrote can be thought of as a version of Sally that is not constrained to a particular silicon chip. (It probably also runs much slower, though that’s not entirely clear.)
In this scenario, is Sally self-modifying? Is it self-improving? I’m not even sure those are the right questions.
Hard-coding onto chips, or even making specific structures electromechanical in nature, is one way humans could achieve “explicitly forbidden to self-modify” in AIs. I estimated that one in every four AGI projects will want to forbid their project from self-modifying. I thought this was optimistic; I haven’t seen any discussion of fixed AGI, though maybe that’s something military research and development would be interested in.
My point was that even in some cases where people aren’t thinking about self-modification, self-modification won’t happen by default.
This doesn’t address the most controversial aspect, which is that AI would go foom. If extreme fooming doesn’t occur this isn’t nearly as big an issue. Many people have discussed that issue, and not all have come away convinced. Robin Hanson had a long debate with Eliezer over this and Robin was not convinced. Personally, I consider fooming to be unlikely but plausible. But how likely one thinks it is matters a lot.
This doesn’t address the most controversial aspect, which is that [nuclear weapons] would [ignite the atmosphere]. If extreme [atmospheric ignition] doesn’t occur this isn’t nearly as big an issue.
Even without foom, AI is a major existential risk, in my opinion.
Foom is included in that proof of concept. Human intelligence has produced faster and faster computation; a human intelligence sped up could reasonably expect to increase the speed and amount of computation available to it, resulting in further speed-ups, and so on.
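A toy version of that loop, purely illustrative: the comment only claims a positive feedback, not any particular growth rate, so the per-cycle factor below is an arbitrary made-up number.

```python
# Toy feedback loop: more computation -> faster research -> more computation.
speed = 1.0   # computation available to the sped-up intelligence, arbitrary units
g = 1.5       # hypothetical per-cycle improvement factor (illustrative only)

for cycle in range(10):
    speed *= g  # each round of improvement compounds on the previous one

print(round(speed, 1))  # ~57.7x after ten cycles
```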
You are repeating what amounts to a single cached thought. The claim in question is that there’s enough evidence to convince a skeptic. Giving a short line of logic for that isn’t at all the same. Moreover, the claim that such evidence exists is empirically very hard to justify given the Yudkowsky-Hanson debate. Hanson is very smart. Eliezer did his best to present a case for AI going foom. He didn’t convince Hanson.
You are repeating what amounts to a single cached thought.
I’m not allowed to cache thoughts that are right?
You seem to be taking “Hanson disagreed with Eliezer” as proof that all evidence Eliezer presented doesn’t amount to FOOM.
I’d note here that I started out learning from this site very skeptical, treating “I now believe in the Singularity” as a failure mode of my rationality, but something tells me you’d be suspicious of that too.
You are. But when people ask for evidence it is generally more helpful to actually point to the evidence rather than simply repeating a secondary cached thought that is part of the interpretation of the evidence.
You seem to be taking “Hanson disagreed with Eliezer” as proof that all evidence Eliezer presented doesn’t amount to FOOM.
No. I must have been unclear. I’m pointing to the fact that there are people who are clearly quite smart and haven’t become convinced by the claim after looking at it in detail. Which means that when someone like XiXiDu asks where the evidence is, a one-paragraph summary with zero links is probably not going to be sufficient.
I’d note here that I started out learning from this site very skeptical, treating “I now believe in the Singularity” as a failure mode of my rationality, but something tells me you’d be suspicious of that too.
I’m not suspicious of it. My own estimate for fooming has gone up since I’ve spent time here (mainly due to certain arguments made by cousin_it), but I don’t see why you assume I’d be suspicious. Your personal opinion or my personal opinion just isn’t that relevant when someone has asked “where’s the evidence?” Our personal opinions, with all the logic and evidence drawn out in detail, might matter. But that’s a very different sort of thing.
I can’t speak for anyone else, but I downvoted it because of the deadly combination of:
A. Unfriendly snarkiness, i.e. scare-quoting “rationalists” and making very general statements about the flaws of LW without any suggestions for improvements, and without a tone of constructive criticism.
B. Incorrect content, i.e. not referencing this article, which is almost certainly the primary reason there are so many comments saying “I donated”, and the misuse of probability in the first paragraph.
If it were just A, then I could appreciate the comment for making a good point and do my best to ignore the antagonism. If it were just B, then the comment is cool because it creates an opportunity to correct a mistake in a way that benefits both the original commenter and others, and adds to the friendly atmosphere of the site.
The combination, though, results in comments that don’t add anything at all, which is why I downvoted srdiamond’s comment.
Downvoted parent and grandparent. The grandparent because:
It doesn’t deserve the above defence.
States obvious and trivial things as though they were deep, insightful criticisms, while applying them superficially.
Sneaks through extra elements of an agenda via presumption.
I had left it alone until I saw it given unwarranted praise and a meta karma challenge.
I find it really disheartening every time I come on here to find that a community of “rationalists” is so quick to muffle anyone who disagrees with LW collective opinion.
See the replies to all similar complaints.
Initially I wanted to downvote you, but decided to upvote you for providing reasons why you downvoted the above comments.
The reason I believe the comments shouldn’t have been downvoted is that, in this case, something matters more than signaling disapproval of poor style and argumentation. This post and thread are especially off-putting to skeptical outsiders. Downvoting critical comments will just reinforce this perception. Therefore, if you are fond of LW and the SIAI, you should account for public relations and kindly answer any critical or generally skeptical comments rather than simply downvoting them.
Downvoting critical comments will just reinforce this perception. Therefore, if you are fond of LW and the SIAI, you should account for public relations and kindly answer any critical or generally skeptical comments rather than simply downvoting them.
What is there to say in response to a comment like the one that started this thread? It was purely an outside-view argument that doesn’t make any specific claims against the efficacy of SIAI or against any of the reasons that people believe it is an important cause. It wasn’t an argument, it was a dismissal.
Your post right here seems like a good example. You could say something along the lines of “This is a dismissal, not an argument; merely naming a bias isn’t enough to convince me. If you provide some specific examples, I’d be happy to listen and respond as best as I can.” You can even tack on a “But until then, I’m downvoting this because it seems like it’s coming from hostility rather than a desire to find the truth together.”
Heck, you could even copy that and have it saved somewhere as a form response to comments like that.
I’ve noticed a tendency on LW to portray comments as attacks. They may seem that way to trained rationalists and otherwise highly educated folks, but not every negative comment is actually intended as a rhetorical device or a simple dismissal. It won’t help if you just downvote people or call them logically rude. Some people are honestly interested but fail to express themselves adequately. Usually newcomers won’t know about the abnormally high standards on LW; you have to tell them. You also have to take into account those who are linked to this post, or come across it by other means, and who don’t know anything about LW. How does this thread appear to them, and what are they likely to conclude, especially if no critical comment is answered kindly but simply downvoted or snidely rejected?
Agreed that responding to criticism is important, but I think it’s especially beneficial to respond only to non-nasty criticism. Responding nicely to people who are behaving like jerks can create an atmosphere where jerkiness is encouraged.
This is the internet, though; skins are assumed to be tough. There is some benefit to saying “It looks like you wanted to say ‘X’. Please try to be less nasty next time. Here’s why I don’t agree with X” instead of just “wow, you’re nasty.”
I have noted that trying to take that sort of response seems to lead to negative consequences more often than not.
Our experiences disagree, then; I can think of many plausible explanations that leave both of us justified, so I will leave it at this.
I agree that it’s been downvoted too much. (At −6 as of this comment, up from −7 due to my own upvote.)