It really isn’t. One of the reasons for the founding of this forum, yes. But what this forum is meant to be for is advancing the art of human rationality. If compelling evidence comes along that AI safety research is useless and AI research is vanishingly unlikely to have the sort of terrible consequences feared by the likes of MIRI, then “this forum” should be very much in the business of advocating against AI safety research.
In support of your point, MIRI itself changed (in the opposite direction) from its former stance on AI research.
You’ve been around long enough to know this, but for others: The former ambition of MIRI in the early 2000s—back when it was called the SIAI—was to create artificial superintelligence, but that ambition changed to ensuring AI friendliness after considering the “terrible consequences [now] feared by the likes of MIRI”.
In the words of Zack_M_Davis 6 years ago:
(Disclaimer: I don’t speak for SingInst, nor am I presently affiliated with them.)
But recall that the old name was “Singularity Institute for Artificial Intelligence,” chosen before the inherent dangers of AI were understood. The unambiguous “for” is no longer appropriate, and “Singularity Institute about Artificial Intelligence” might seem awkward.
I seem to remember someone saying back in 2008 that the organization should rebrand as the “Singularity Institute For or Against Artificial Intelligence Depending on Which Seems to Be a Better Idea Upon Due Consideration,” but obviously that was only a joke.
I’ve always thought it’s a shame they picked the name MIRI over SIFAAIDWSBBIUDC.
Or maybe because SIAI realized their ability to actually create an AI is non-existent.
Ha! It’s wonderful news that you can take it off!
For me you’re the closest human (?) correlate to the man with the hat from XKCD, and I mean that as a compliment.
I take it as such :-)
You do mean the black hat guy, right? (There is also a white hat guy who doesn’t pop up as frequently.)
Yes, the black hatter. I totally forgot about the white hat guy...
You’re right, but.
The whole story goes like this: Eliezer founded this forum to advance the art of human rationality, so that people would stop making silly objections to AI safety like “intelligence would surely bring about morality” and things like that.
The focus of LW is human rationality and that of MIRI is AI safety, but as far as I can tell, we still haven’t found any valid objection to the orthogonality thesis. On the contrary, the issue of autonomous-agent safety is gaining traction and recognition.
I do agree that if we found a strong objection we should change perspective, but we still haven’t, and indeed we are seeing more and more worrisome examples.