It’s more likely that the Klingon warbird can overpower the USS Enterprise.
I think AI is actually the most dangerous of them...
Why? Because EY told you? I’m not trying to make snide remarks here, but how people arrived at this conclusion is what I have been inquiring about in the first place.
...though I would also appreciate more critical discussion from experts and educated people...
Me too, but I was the only one around willing to start one at this point. That’s the sorry state of critical examination.
It’s more likely that the Klingon warbird can overpower the USS Enterprise.
To pick my own metaphor, it’s more likely that randomly chosen matter will form clumps of useless crap than a shiny new laptop. In the same way, human-friendly values are a tiny target in the space of possible goal systems, so UFAI, as the term is defined, is likely the default state for AGI, which is one reason I put such low hope on our future. I call myself an optimistic pessimist: I think we’re going to create wonderful, cunning, incredibly powerful technology, and I think we’re going to misuse it to destroy ourselves.
Why [is AI the most dangerous threat]?
Because intelligent beings are the most awesome and scary things I’ve ever seen. The History Channel is a far better guide than Eliezer in that respect. And with all our intelligence and technology, I can’t see us holding back from trying to tweak intelligence itself. I view it as inevitable.
Me too [I also would appreciate more critical discussion from experts]
I’m hoping that the Visiting Fellows program and the papers funded by the latest Challenge will lead to peer review in other respected venues.
What I was trying to show you with the Star Trek metaphor is that you are making estimates within a framework of ideas that I’m not convinced rests on firm ground.
I’m not a very good convincer. I’d suggest reading the original material.
Can we get some links up in here? I’m not putting the burden on you in particular, but I think more linkage would be helpful in this discussion.
This thread has Eliezer’s request for specific links, which appear in replies.