I accept this. Although I’m not sure if the big picture should be a top priority right now. And as I wrote, I’m unable to survey the utility calculations at this point.
It’s the simplest of the ‘good outcome’ possibilities.
So you replace a simple view that is evidence-based with one that might or might not be based on really shaky ideas such as an intelligence explosion.
I don’t see anything else to give me hope.
I think you overestimate the friendliness of friendly AI. Too bad Roko’s posts have been censored.
It’s cheap and easy to do so on a meaningful scale.
I want to believe.
They’re thinking about the same things I am.
Beware of those who agree with you?
I don’t think we have much time.
Maybe we do have enough time regarding AI and the kind of threats depicted on this site. Maybe we don’t have enough time regarding other kinds of threats.
I think it’s very likely SIAI will fail in their mission in every way. They’re just what’s left after a long process of elimination. Give me a better path and I’ll switch my donations. But I don’t see any other group that comes close.
I can accept that. But I’m unable to follow the process of elimination yet.
Who else is working directly on creating smarter-than-human intelligence with non-commercial goals? And if there are any, are they self-reflective enough to recognize its potential failure modes?
No one else cares about the big picture.
I accept this. Although I’m not sure if the big picture should be a top priority right now. And as I wrote, I’m unable to survey the utility calculations at this point.
I used something I developed which I call Point-In-Time Utility to guide my thinking on this matter. It basically boils down to ‘the longest view wins’, and I don’t see anyone else talking about potentially real pangalactic empires.
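A minimal toy sketch of what ‘the longest view wins’ could mean in practice (the scoring function and all the numbers here are made up for illustration, not the actual theory):

```python
# A rough, hypothetical reading of 'the longest view wins', with made-up numbers.
# Each option gets a value rate and a horizon over which that value persists;
# the option whose value extends furthest into the future dominates.

def point_in_time_utility(value_per_year, horizon_years):
    """Toy model: total value accumulated over the whole horizon."""
    return value_per_year * horizon_years

options = {
    "near-term intervention": (1.0, 10**2),       # large value rate, short horizon
    "pangalactic future via FAI": (1e-3, 10**9),  # tiny value rate, enormous horizon
}

for name, (rate, horizon) in options.items():
    print(name, point_in_time_utility(rate, horizon))
# With these numbers the longest view wins by several orders of magnitude.
```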
It’s the simplest of the ‘good outcome’ possibilities.
So you replace a simple view that is evidence-based with one that might or might not be based on really shaky ideas such as an intelligence explosion.
I don’t think it has to be an explosion at all, just smarter-than-human. I’m willing to take things one step at a time, if necessary. Though it seems unlikely we could build a smarter-than-human intelligence without understanding what intelligence is, and thus knowing where to tweak, even if only retroactively. That said, I consider intelligence tweaking itself to be a shaky idea, though I view the alternatives as failure modes.
I don’t see anything else to give me hope.
I think you overestimate the friendliness of friendly AI. Too bad Roko’s posts have been censored.
I think you overestimate my estimation of the friendliness of friendly AI. Note that at the end of my post I said it is very likely SIAI will fail. My hope total is fairly small. Roko deleted his own posts, and I was able to read the article Eliezer deleted since it was still in my RSS feed. It didn’t change my thinking on the matter; I’d heard arguments like it before.
They’re thinking about the same things I am.
Beware of those who agree with you?
Hi. I’m human. At least, last I checked. I didn’t say all my reasons were purely rational. This one is dangerous (reinforcement), but I do a lot of reading of opposing opinions as well, and there’s still a lot I disagree with regarding SIAI’s positions.
I don’t think we have much time.
Maybe we do have enough time regarding AI and the kind of threats depicted on this site. Maybe we don’t have enough time regarding other kinds of threats.
The latter is what I’m worried about. I see all of these threats as being developed simultaneously, in a race to see which one passes the threshold into reality first. I’m hoping that Friendly AI beats them.
I think it’s very likely SIAI will fail in their mission in every way. They’re just what’s left after a long process of elimination. Give me a better path and I’ll switch my donations. But I don’t see any other group that comes close.
I can accept that. But I’m unable to follow the process of elimination yet.
I haven’t seen you name any other organization you’re donating to or who might compete with SIAI. Aside from the Future of Humanity Institute or the Lifeboat Foundation, both of which seem more like theoretical study groups than action-takers, people just don’t seem to be working on these problems. Even the Methuselah Foundation is working on a very narrow portion which, although very useful and awesome if it succeeds, doesn’t guard against the threats we’re facing.
I don’t think it has to be an explosion at all, just smarter-than-human.
I feel there are too many assumptions in what you state to come up with estimations like a 1% probability of uFAI turning everything into paperclips.
I think you overestimate my estimation of the friendliness of friendly AI.
You are right, never mind what I said.
I see all of these threats as being developed simultaneously...
Yeah, and how is their combined probability less worrying than that of AI? That doesn’t speak against the effectiveness of donating all to the SIAI, of course. Creating your own God to fix the problems the imagined one can’t is indeed a promising and appealing idea, given that it is feasible.
I haven’t seen you name any other organization you’re donating to or who might compete with SIAI.
I’m mainly concerned about my own well-being. If I were threatened by something near-term within Germany, that would be my top priority. So the matter is more complicated for me than for the people who are merely concerned about the well-being of all beings.
As I said before, it is not my intention to discredit the SIAI but to steer some critical discussion for us non-expert, uneducated but concerned people.
That there are no others does not mean we shouldn’t be keen to create them, to establish competition. Or do it at all at this point.
Absolutely agreed. Though I’m barely motivated enough to click on a PayPal link, so there isn’t much hope of my contributing to that effort. And I’d hope they’d be created in such a way as to expand total funding, rather than cannibalizing SIAI’s efforts.
I’m not sure about this.
Certainly there are other ways to look at value / utility / whatever and how to measure it. That’s why I mentioned I had a particular theory I was applying. I wouldn’t expect you to come to the same conclusions, since I haven’t fully outlined how it works. Sorry.
I feel there are too many assumptions in what you state to come up with estimations like a 1% probability of uFAI turning everything into paperclips.
I’m not sure what this is saying. I think UFAI is far more likely than FAI, and I also think that donating to SIAI contributes somewhat to UFAI, though I think it contributes more to FAI, such that in the race I was talking about, FAI should come out ahead. At least, that’s the theory. There may be no way to save us.
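As a toy illustration of that race (every probability and delta below is invented for the example, not anyone’s actual estimate), a donation can nudge both outcomes upward and still be worth making if it improves the odds that FAI arrives first:

```python
# Invented numbers only: the donation nudges both FAI and UFAI, but FAI more.
p_fai_first = 0.05    # assumed baseline chance FAI wins the race
p_ufai_first = 0.60   # assumed baseline chance UFAI wins the race

boost_fai = 0.010     # assumed effect of the donation on FAI arriving first
boost_ufai = 0.002    # assumed (smaller) side effect on UFAI arriving first

odds_before = p_fai_first / p_ufai_first
odds_after = (p_fai_first + boost_fai) / (p_ufai_first + boost_ufai)

# On this toy model the donation helps iff it improves FAI's odds in the race.
print(odds_after > odds_before)  # True with these made-up numbers
```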
Yeah and how is their combined probability less worrying than that of AI?
AI is one of the things on the list racing against FAI. I think AI is actually the most dangerous of them, and from what I’ve read, so does Eliezer, which is why he’s working on that problem instead of, say, nanotech.
I’m mainly concerned about my own well-being.
I’ve mentioned before that I’m somewhat depressed, so I consider my philanthropy to be a good portion ‘lack of caring about self’ more than ‘being concerned about the well-being of all beings’. Again, a subtractive process.
As I said before, it is [...] my intention [...] to steer some critical discussion for us non-expert, uneducated but concerned people.
Thanks! I think that’s probably a good idea, though I would also appreciate more critical discussion from experts and educated people, a sort of technically minded anti-Summit, without all the useless politics of the IEET and the like.
It’s more likely that the Klingon warbird can overpower the USS Enterprise.
I think AI is actually the most dangerous of them...
Why? Because EY told you? I’m not trying to make snide remarks here, but how people arrived at this conclusion is what I have been inquiring about in the first place.
...though I would also appreciate more critical discussion from experts and educated people...
Me too, but I was the only one around willing to start one at this point. That’s the sorry state of critical examination.
It’s more likely that the Klingon warbird can overpower the USS Enterprise.
To pick my own metaphor, it’s more likely that randomly chosen matter will form clumps of useless crap than a shiny new laptop. As defined, UFAI is likely the default state for AGI, which is one reason I put such low hope on our future. I call myself an optimistic pessimist: I think we’re going to create wonderful, cunning, incredibly powerful technology, and I think we’re going to misuse it to destroy ourselves.
Why [is AI the most dangerous threat]?
Because intelligent beings are the most awesome and scary things I’ve ever seen. The History Channel is a far better guide than Eliezer in that respect. And with all our intelligence and technology, I can’t see us holding back from trying to tweak intelligence itself. I view it as inevitable.
Me too [I also would appreciate more critical discussion from experts]
I’m hoping that the Visiting Fellows program and the papers written with the money from the latest Challenge will provide peer review in other respected venues.
What I was trying to show you with the Star Trek metaphor is that you are making estimations within a framework of ideas that I’m not convinced rests on firm ground.
Yeah, that’s why I’m donating as well.
Sure, but why the SIAI?
I’m not a very good convincer. I’d suggest reading the original material.
Can we get some links up in here? I’m not putting the burden on you in particular, but I think more linkage would be helpful in this discussion.
This thread has Eliezer’s request for specific links, which appear in replies.