I don’t think it has to be an explosion at all, just smarter-than-human.
I feel there are too many assumptions in what you state to come up with estimations like a 1% probability of uFAI turning everything into paperclips.
I think you overestimate my estimation of the friendliness of friendly AI.
You are right, never mind what I said.
I see all of these threats as being developed simultaneously...
Yeah, and how is their combined probability less worrying than that of AI? That doesn’t speak against the effectiveness of donating everything to the SIAI, of course. Creating your own God to fix the problems the imagined one can’t is indeed a promising and appealing idea, given that it is feasible.
I haven’t seen you name any other organization you’re donating to, or one that might compete with SIAI.
I’m mainly concerned about my own well-being. If I were threatened by something near-term within Germany, that would be my top priority. So the matter is more complicated for me than for the people who are merely concerned about the well-being of all beings.
As I said before, it is not my intention to discredit the SIAI but to steer some critical discussion for us non-expert, uneducated but concerned people.
That there are no others does not mean we shouldn’t be keen to create them, to establish competition. Or do it at all at this point.
Absolutely agreed. Though I’m barely motivated enough to click on a PayPal link, so there isn’t much hope of my contributing to that effort. And I’d hope they’d be created in such a way as to expand total funding, rather than cannibalizing SIAI’s efforts.
I’m not sure about this.
Certainly there are other ways to look at value / utility / whatever and how to measure it. That’s why I mentioned I had a particular theory I was applying. I wouldn’t expect you to come to the same conclusions, since I haven’t fully outlined how it works. Sorry.
I feel there are too many assumptions in what you state to come up with estimations like a 1% probability of uFAI turning everything into paperclips.
I’m not sure what this is saying. I think UFAI is far more likely than FAI, and I also think that donating to SIAI contributes somewhat to UFAI, though I think it contributes more to FAI, such that in the race I was talking about, FAI should come out ahead. At least, that’s the theory. There may be no way to save us.
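To make that race intuition concrete, here is a minimal toy sketch; the numbers and the p_fai_first helper are invented purely for illustration (nothing here is an actual estimate from anyone in this thread), and the only assumption is that a donation speeds FAI work more than it speeds AI research in general.

```python
# Toy model of the FAI-vs-UFAI "race" described above.
# All numbers are hypothetical, chosen only to make the argument concrete;
# p_fai_first is an invented helper, not anything from SIAI's material.

def p_fai_first(fai_progress: float, ufai_progress: float) -> float:
    """Crude stand-in: chance that FAI is finished before UFAI,
    given the relative rates of progress on each."""
    return fai_progress / (fai_progress + ufai_progress)

# Baseline: UFAI is assumed to be far more likely than FAI by default.
baseline = p_fai_first(fai_progress=1.0, ufai_progress=9.0)

# A donation that speeds FAI work a lot and AI research in general a little
# (the "contributes somewhat to UFAI, though more to FAI" claim).
with_donation = p_fai_first(fai_progress=1.5, ufai_progress=9.2)

print(f"P(FAI first), baseline:      {baseline:.3f}")      # ~0.100
print(f"P(FAI first), with donation: {with_donation:.3f}")  # ~0.140

# The donation helps on net so long as it raises this probability,
# even though both lines of research get some benefit from it.
```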
Yeah, and how is their combined probability less worrying than that of AI?
AI is one of the things on the list racing against FAI. I think AI is actually the most dangerous of them, and from what I’ve read, so does Eliezer, which is why he’s working on that problem instead of, say, nanotech.
I’m mainly concerned about my own well-being.
I’ve mentioned before that I’m somewhat depressed, so I consider my philanthropy to be, in good part, a ‘lack of caring about self’ rather than ‘being concerned about the well-being of all beings’. Again, a subtractive process.
As I said before, it is [...] my intention [...] to steer some critical discussion for us non-expert, uneducated but concerned people.
Thanks! I think that’s probably a good idea, though I would also appreciate more critical discussion from experts and educated people, a sort of technically minded anti-Summit, without all the useless politics of the IEET and the like.
It’s more likely that the Klingon warbird can overpower the USS Enterprise.
I think AI is actually the most dangerous of them...
Why? Because EY told you? I’m not trying to make snide remarks here, but how people arrived at this conclusion is what I have been inquiring about in the first place.
...though I would also appreciate more critical discussion from experts and educated people...
Me too, but I was the only one around willing to start one at this point. That’s the sorry state of critical examination.
It’s more likely that the Klingon warbird can overpower the USS Enterprise.
To pick my own metaphor, it’s more likely that randomly chosen matter will form clumps of useless crap than a shiny new laptop. As defined, UFAI is likely the default state for AGI, which is one reason I hold out so little hope for our future. I call myself an optimistic pessimist: I think we’re going to create wonderful, cunning, incredibly powerful technology, and I think we’re going to misuse it to destroy ourselves.
Why [is AI the most dangerous threat]?
Because intelligent beings are the most awesome and scary things I’ve ever seen. The History Channel is a far better guide than Eliezer in that respect. And with all our intelligence and technology, I can’t see us holding back from trying to tweak intelligence itself. I view it as inevitable.
Me too [I also would appreciate more critical discussion from experts]
I’m hoping that the Visiting Fellows program and the papers written with the money from the latest Challenge will provide peer review in other respected venues.
What I was trying to show you with the Star Trek metaphor is that you are making estimates within a framework of ideas that I’m not convinced is based on firm ground.
I’m not a very good convincer. I’d suggest reading the original material.
Can we get some links up in here? I’m not putting the burden on you in particular, but I think more linkage would be helpful in this discussion.
This thread has Eliezer’s request for specific links, which appear in replies.