Excellent news. Considered together with the announcement of AI scientists endorsing a statement in favour of researching how to make AI beneficial, this is one of the best weeks for AI safety that I can remember.
Taken together with the publication of Superintelligence, the founding of FLI and CSER, and the transition of SI into the research organisation MIRI, it’s becoming clear that the last few years have started to usher in a new chapter in AI safety.
I know that machine learning capabilities are also increasing, but let’s celebrate successes like these!
Now we can say that we were into AI risk before it was cool.
If you’d asked me two years ago I would have put today’s situation in the most optimistic 10% of outcomes. It’s nice to be wrong in that direction :)
Damn. Good point. Woo!
Is it excellent news? Ignoring the good that will come from the money, shouldn’t the fact that Musk is donating the funds increase our estimate that AI is indeed an existential threat? Imagine you have a condition that a very few people think will probably kill you, but most think is harmless. Then a really smart doctor examines you, says you should be worried, and pays for part of your treatment. Although this doctor has helped you, he has also lowered your estimate of how long you are going to live.
Musk’s position on AI risk is useful because he is contributing his social status and money to the cause.
However, other than being smart, he has no special qualifications in the subject—he got his ideas from other people.
So, his opinion should not update our beliefs very much.
It should not update our beliefs much. Musk is a smart guy, he has access to roughly the same information as we do, and his interpretation of that information is that the danger is enough to justify spending millions on it. But not, e.g., enough for him to drop everything else and dedicate his life to trying to solve it.
I think most of us should adjust our beliefs about the danger either up or down a little in response to that.
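The size of the update being debated here can be made concrete with a toy Bayesian calculation. All of the numbers below are hypothetical, chosen only to illustrate the shape of the argument: if a well-informed outsider's donation is only somewhat more likely under the "AI is a risk" hypothesis than under the alternative, the posterior barely moves.

```python
# Toy Bayesian update: how much should a smart outsider's donation
# shift our estimate that AI poses an existential risk?
# All numbers are made up purely for illustration.

prior = 0.10  # hypothetical P(AI is an existential risk) before the news

# Likelihoods: probability that such a person donates millions, given
# each hypothesis. Because he reads the same public arguments we do,
# the two likelihoods are assumed to be not far apart.
p_donate_given_risk = 0.30
p_donate_given_no_risk = 0.15

# Bayes' rule: P(risk | donation)
posterior = (p_donate_given_risk * prior) / (
    p_donate_given_risk * prior
    + p_donate_given_no_risk * (1 - prior)
)

print(round(posterior, 3))  # prints 0.182: a modest shift from 0.10
```

Under these invented numbers the probability moves from 10% to about 18%: a real but small update, consistent with "adjust a little up or down" rather than a dramatic revision.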
Disagree. Meet a lot of the Less Wrong-style people in real life and a totally different respectable elite emerges than what you see on the forums, and some people’s standing collapses. Musk is far more trustworthy. Less Wrong people overestimate themselves.
Will you elaborate?
Uh, I don’t know. In person you see many more dimensions, which causes you to harshly devalue a significant number of individuals while discovering good people you had missed. Less Wrong people are incredibly hit or miss, and many are “effective narcissists” with highly acute issues that they use their high verbal intelligence to argue away.
There is also a tendency to speak in extreme declarative statements and to use meta-navigation in conversation as a crutch for a lack of fundamental social skills. Furthermore, I have met many quasi-famous LW people who are unethical in a straightforward way.
A large chunk of the Less Wrong people you meet, including well-known individuals, turn out to be not so great, or great in ways other than intelligence. The great people you do meet, however, more than make up for and surpass the losses.
When people talk about “smart LW people” they are often judging by forum posts, which turns out to be only a moderately useful metric. If you ever meet the extended community, I’m sure you will agree. It’s hard for me to explain.
tl;dr Musk is just more trustworthy and competent overall, unless you are restricting yourself to a strict subset of Less Wrong people. Also, LW people tend to overestimate how advanced they are compared to other epistemic blocs that are as elite, or more elite.
http://lesswrong.com/user/pengvado/ <---- is someone I would trust. Not every other LW regular.
The halo effect is when your brain tricks you into collapsing all of a person’s varied attributes and abilities into a single dimension of how much you respect them. Dan Quayle’s success in politics provides limited evidence that he’s a good speller. Satoshi Nakamoto’s high status as the inventor of Bitcoin provides limited evidence that he is good looking. Justin Bieber’s success as a pop star provides limited evidence that he’s good at math. Etc.
Elon Musk is famous for being extremely accomplished in the high-tech world. This provides strong evidence that Musk is “competent”. “Trustworthy” I’m not as sure about.
Less Wrong users can be highly rational and make accurate predictions worth listening to while lacking “fundamental social skills”.
An individual Less Wronger who has lots of great ideas and writes well online might be too socially anxious to be a good conversationalist.
I was being very generous in my post. Less Wrong has many people with megalomaniac tendencies; this would be almost impossible to argue against. I gave a wide margin and said there are many great people, but to pretend that there aren’t many illegitimate people hiding among them is something else entirely.
Elon Musk is certainly trustworthy. You can estimate trustworthiness from the amount of capital someone has accumulated, because of the effect trust has on that serial accumulation.
You have referenced relatively elementary mistakes that do not apply in this situation. Your examples are extremely off base.
Dan Quayle as a good speller is irrelevant and unbelievable; it is not even in the same ballpark as what I was saying.
Satoshi Nakamoto being good looking because of his invention is ridiculous.
The Justin Bieber example is worse.
Social approximation is far more robust than people’s assumed identities and online personas. I am not foolishly judging people at random. Most of the “effective altruists” you meet are complete pushovers, and if they wish to justify their megalomaniac tendencies by constantly going on about how effective they are, they should be able to put their foot down, stop bad people, and set boundaries, or drop the effective act until they are able to.
They use their “effective altruism” to cover for the huge ethical opportunity costs of what they DO NOT do. They then engage in extremely obscurantist arguments as cover.
As an example, see the lack of ethics that many people complain about in mathematics, e.g. Grothendieck or Perelman.
Mathematicians trend towards passivity and are probably “good people”, but they do not stop their peers from engaging in unethical behavior, and that is the sordid state of mathematics. Stopping bad people is primary; doing good things is second. Effective altruism is incomplete until it admits that the point is not doing good things but stopping bad things, and that you need a robust personality structure to do so.
Your comment is enlightening, thanks for sharing your thoughts.
What about determining how much money investors will spend attempting to develop an AI, how difficult it would be to coordinate the activities of programmers to create an AI, or how much to trust the people who claim that unfriendly AI is a threat?
I was already pretty convinced it was a problem, but I was very pessimistic about the chances of anyone taking it seriously, so the effect on the latter greatly outweighs the effect on the former for me.
Map, territory.
“Sorry, General, the map you have been using is wrong; the correct one shows that the enemy is about to destroy us.” This would be horrible news.
Would it be better to remain ignorant? It’s a false choice if you think the comparison is between being told the enemy is about to destroy us vs the enemy being where we thought they were. The enemy is about to destroy us, whether we know about it or not. The real alternative is remaining ignorant until the very last moment. It is better to be told the truth, no matter how much you hope for reality to be otherwise.
No, and I think Musk is doing a great thing, but the fact that he thinks it needs to be done is not “excellent news”. I think we are talking past each other.