The reasoning seems straightforward to me: If you’re wrong, why talk? If you’re right, you’re accelerating the end.
I can’t in general endorse “first do no harm”, but it becomes better and better in any specific case the less one is able to help. If you can’t save your family, at least don’t personally help kill them; it lacks dignity.
I think that is an example of the huge potential damage of “security mindset” gone wrong. If you can’t save your family, as in “bring them to safety”, at least make them marginally safer.
(Sorry for the tone of the following; it is not aimed at you personally, you who have done much more than your fair share)
Create a closed community that you mostly trust, and let that community speak freely about how to win. Invent another damn safety patch that will make it marginally harder for the monster to eat them, in the hope that it chooses to eat the moon first. I heard you say that most of your probability of survival comes from the possibility that you are wrong; trying to protect your family means trying to at least optimize for such a miracle.
There is no safe way out of a war zone. That does not make hiding behind a rock the answer.
This is not a closed community; it is a world-readable Internet forum.
It is readable; however, it is generally not read by academics and engineers.
I disagree with their reasons for that: I do think solutions can be found by thinking outside the box, outside of immediate applications, and without an academic degree, and I very much value the rational and creative discourse here.
But many here specifically advocate against getting a university degree or working in academia, thus shitting on things academics have sweated blood for. They also tend not to follow the formats and metrics one needs in order to be heard in academia, such as publications, mathematical precision, and usable code. There is also surprisingly little attempt to engage with academics and engineers on their terms, providing things they can actually use and act upon.
So I doubt they will check this forum for inspiration on which problems need to be cracked. That is irrational of them, so I understand why you do not respect it, but that is how it is.
On the other hand, understanding the existing obstacles may give us a better idea of how much time we still have, and which limitations emerging AGI will have, which is useful information.
I meant to criticize moving too far toward a “do no harm” policy in general, due to our inability to achieve a solution that would satisfy us if we had the choice. I agree specifically that if anyone knows of a bottleneck unnoticed by people like Bengio and LeCun, LW is not the right forum to discuss it.
Is there a place like that, though? I may be vastly misinformed, but last time I checked, MIRI gave the impression of aiming in very different directions (a “bringing to safety” mindset), though I admit that I haven’t watched it closely, and it may not be obvious from the outside what kind of work is done but not published.
[Edit: “moving toward ‘do no harm’”: “moving to” was a grammar mistake that made it contrary to the position you stated above; sorry]
I think there are a number of ways in which talking might be good given that one is right about there being obstacles—one that appeals to me in particular is the increased tractability of misuse arising from the relevant obstacles.
[Edit: *relevant obstacles I have in mind. (I’m trying to be vague here)]