Thanks for this topic! Stupid questions are my specialty, for better or worse.
1) Isn’t cryonics extremely selfish? I mean, couldn’t the money spent on cryopreserving oneself be better spent on, say, AI safety research?
2) Would the human race be eradicated if there were a worst-possible-scenario nuclear incident, or merely a lot of people?
3) Is the study linking nut consumption to longevity found in the link below convincing?
http://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2173094
And if so, is it worth putting a lot of effort into promoting nut consumption in moderation?
Here comes another “stupid question” from this one.
Couldn’t the money spent on AI safety research be better spent on, say, AI research?
There’s something like 100 times as much funding for AI research as there is for AI safety research. In general, it seems like it would be weird to have only 1% of the effort in a project spent on making sure the project is doing the thing that it should be doing.
For this specific question, I like Stuart Russell’s approach:
My proposal is that we should stop doing AI in its simple definition of just improving the decision-making capabilities of systems. […] With civil engineering, we don’t call it “building bridges that don’t fall down” — we just call it “building bridges.” Of course we don’t want them to fall down. And we should think the same way about AI: of course AI systems should be designed so that their actions are well-aligned with what human beings want. But it’s a difficult unsolved problem that hasn’t been part of the research agenda up to now.
Well, the whole point of this forum is to convince someone that the answer is most definitely not.
It really isn’t. One of the reasons for the founding of this forum, yes. But what this forum is meant to be for is advancing the art of human rationality. If compelling evidence comes along that AI safety research is useless and AI research is vanishingly unlikely to have the sort of terrible consequences feared by the likes of MIRI, then “this forum” should be very much in the business of advocating against AI safety research.
In support of your point, MIRI itself changed (in the opposite direction) from its former stance on AI research.
You’ve been around long enough to know this, but for others: The former ambition of MIRI in the early 2000s—back when it was called the SIAI—was to create artificial superintelligence, but that ambition changed to ensuring AI friendliness after considering the “terrible consequences [now] feared by the likes of MIRI”.
In the words of Zack_M_Davis 6 years ago:
(Disclaimer: I don’t speak for SingInst, nor am I presently affiliated with them.)
But recall that the old name was “Singularity Institute for Artificial Intelligence,” chosen before the inherent dangers of AI were understood. The unambiguous “for” is no longer appropriate, and “Singularity Institute about Artificial Intelligence” might seem awkward.
I seem to remember someone saying back in 2008 that the organization should rebrand as the “Singularity Institute For or Against Artificial Intelligence Depending on Which Seems to Be a Better Idea Upon Due Consideration,” but obviously that was only a joke.
I’ve always thought it’s a shame they picked the name MIRI over SIFAAIDWSBBIUDC.
Or maybe because SIAI realized that their ability to actually create an AI was non-existent.
Ha! It’s wonderful news that you can take it off!
For me you’re the closest human (?) correlate to the man with the hat from XKCD, and I mean that as a compliment.
I take it as such :-)
You do mean the black hat guy, right? (there is also a white hat guy who doesn’t pop up as frequently).
Yes, the black hatter. I totally forgot about the white hat guy...
You’re right, but.
The whole story goes like this: Eliezer founded this forum to advance the art of human rationality, so that people would stop making silly objections to the issue of AI safety like “intelligence would surely bring about morality” and things like that.
The focus of LW is human rationality and that of MIRI is AI safety, but as far as I can tell, we still haven’t found any valid objections to the orthogonality thesis. On the contrary, the issue of autonomous agent safety is gaining traction and recognition.
I do agree that if we found a strong objection we should change perspective, but we still haven’t, and indeed we are seeing more and more worrisome examples.
I know that. But the whole point of this thread is to ask stupid questions, isn’t it?
And sometimes, apparently, the stupidest question isn’t stupid after all.
Yes.
If we are talking about “extremes”, what is the base set here: people’s usual spending habits? Because I don’t think cryonics is more selfish than e.g. buying an expensive car.
Well, ‘better’ here does all the work. It depends on your model and ethics: for example, if you think that resuscitation is probably nearer than full AGI, then it’s better to be frozen.
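To make “it depends on your model” concrete, here is a minimal toy sketch in Python; every probability, value, and cost in it is an invented placeholder rather than an estimate of anything real, and the only point is that changing those placeholders changes which option comes out ahead.

```python
# A toy model only: every number below is a made-up placeholder, not an estimate.

def expected_value(p_success, value_if_success, cost):
    """Crude expected value: chance of the good outcome times its value, minus the cost."""
    return p_success * value_if_success - cost

# Hypothetical inputs, all invented for illustration (same cost for both options):
cryonics  = expected_value(p_success=0.05,  value_if_success=1_000,     cost=100)
ai_safety = expected_value(p_success=0.001, value_if_success=1_000_000, cost=100)

print(f"cryonics:  {cryonics:.1f}")   # -50.0 with these placeholders
print(f"ai safety: {ai_safety:.1f}")  # 900.0 with these placeholders
# Change the placeholders (e.g. assume revival is far more likely than a marginal
# donation mattering) and the ranking flips, which is the point of
# "it depends on your model and ethics".
```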
I couldn’t quite parse this question. A nuclear war is unlikely to wipe out humanity in its entirety, whereas “merely a lot of people” dying is the exact opposite of extinction, so...?
This is far from a stupid question. The sample sizes are at least large, but the study has the usual problem of relying on p-values, which are notoriously fragile. It would take someone better acquainted with statistics to judge it properly, if that can be done at all.
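As a rough illustration of what “fragile” means here, the following is a sketch of my own (a plain z-test on made-up data; it has nothing to do with the study’s actual data or methods): the same small true effect, re-sampled twenty times, yields p-values scattered on both sides of 0.05.

```python
# My own sketch, unrelated to the study's data or methods: re-run the "same"
# experiment many times and watch the p-value bounce around.
import math
import random
import statistics

def two_sample_p(a, b):
    """Two-sided p-value from a normal (z) approximation; fine for large samples."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return math.erfc(abs(z) / math.sqrt(2))

p_values = []
for _ in range(20):                                          # 20 replications
    control = [random.gauss(0.0, 1.0) for _ in range(500)]
    treated = [random.gauss(0.1, 1.0) for _ in range(500)]   # small, fixed true effect
    p_values.append(two_sample_p(treated, control))

print(sorted(round(p, 3) for p in p_values))
# Typically a mix of values below and above 0.05, even though the underlying
# effect never changed; that is the sense in which any single p-value is fragile.
```

A z-test is used instead of a t-test only to keep the sketch dependency-free; with groups of 500 the difference is negligible.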