Forget about what the social consensus is. If you have technical understanding of current AIs, do you truly believe there are any major obstacles left? The kind of problems that AGI companies could reliably not tear down with their resources? If you do, state so in the comments, but please do not state what those obstacles are.
I think this request, absent a really strong, clearly spelled-out argument, creates an unhealthy epistemic environment. It is possible that you think this is false, or that it’s worth the cost, but you don’t really argue for either in this post. Elsewhere in the post you encourage people to question others and not to trust blindly, yet this portion asks people not to elaborate on their opinions, without explaining why. You do it again when you say “So our message is: things are worse than what is described in the post!” without justifying yourselves or, imo, properly conveying how much caution people should apply to such an unsubstantiated claim.
I’m tempted to write a post in reply explaining why I think there are obstacles to AGI, broadly what they are with a few examples, and why it’s important to discuss them. (I’m not going to do so at the moment, because it’s late and I know better than to publicly share something that people have implied to me is infohazardous without carefully thinking it over, and without first discussing it with friends as well.)
(I’m also happy to post it as a comment here instead, but I assume you would prefer I didn’t, and this is your post to moderate.)
The reasoning seems straightforward to me: If you’re wrong, why talk? If you’re right, you’re accelerating the end.
I can’t endorse “first do no harm” in general, but it becomes better and better in any specific case the less you are able to help. If you can’t save your family, at least don’t personally help kill them; it lacks dignity.
I think that is an example of the huge potential damage of “security mindset” gone wrong. If you can’t save your family, as in “bring them to safety”, at least make them marginally safer.
(Sorry for the tone of what follows; it is not aimed at you personally, as you did far more than your fair share.)
Create a closed community that you mostly trust, and let that community speak freely about how to win. Invent another damn safety patch that will make it marginally harder for the monster to eat them, in the hope that it chooses to eat the moon first. I heard you say that most of your probability of survival comes from the possibility that you are wrong; trying to protect your family means trying to at least optimize for such a miracle.
There is no safe way out of a war zone. Hiding behind a rock is therefore not the answer.
This is not a closed community, it is a world-readable Internet forum.
It is readable; it is, however, generally not read by academics and engineers.
I disagree with them about why: I do think solutions can be found by thinking outside the box and outside of immediate applications, and without an academic degree, and I very much value the rational and creative discourse here.
But many here specifically advocate against getting a university degree or working in academia, thus shitting on things academics have sweated blood for. They also tend not to follow the formats and metrics that count in academia and that one needs in order to be heard there, such as publications, mathematical precision, and usable code. There is also surprisingly little attempt to engage with academics and engineers on their terms and to provide things they can actually use and act upon.
So I doubt they will check this forum for inspiration on which problems need to be cracked. That is irrational of them, so I understand why you do not respect it, but that is how it is.
On the other hand, understanding the existing obstacles may give us a better idea of how much time we still have and of what limitations emerging AGI will have, which is useful information.
I meant to criticize moving too far toward a “do no harm” policy in general, out of an inability to achieve a solution that would satisfy us if we had the choice. I agree specifically that if anyone knows of a bottleneck unnoticed by people like Bengio and LeCun, LW is not the right forum to discuss it.
Is there a place like that, though? I may be vastly misinformed, but last time I checked, MIRI gave the impression of aiming in very different directions (a “bringing to safety” mindset), though I admit that I haven’t watched it closely, and it may not be obvious from the outside what kind of work is done but not published.
[Edit: that should read “moving toward ‘do no harm’”; “moving to” was a grammar mistake that made it read as contrary to the position you stated above. Sorry!]
I think there are a number of ways in which talking might be good given that one is right about there being obstacles—one that appeals to me in particular is the increased tractability of misuse arising from the relevant obstacles.
[Edit: *relevant obstacles I have in mind. (I’m trying to be vague here)]
No idea about the original reasons, but I can imagine a projected chain of reasoning (a toy sketch of it follows the list below):
there is a finite number of conjunctive obstacles
if a single person can only think of a subset of obstacles, they will try to solve those obstacles first, making slow(-ish) progress as they discover more obstacles over time
if a group shares their lists, each individual will become aware of more obstacles and will be able to solve more of them at once, potentially making faster progress
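To make that chain of reasoning concrete, here is a toy model of my own (not from the comment above); the obstacle count, discovery rate, and solve rate are arbitrary assumptions chosen only to make the comparison visible. It simulates one person working from a partial obstacle list versus a group working from the union of everyone’s lists.

```python
# Toy illustration (my own sketch, not from the thread): if progress requires
# clearing every obstacle in a finite conjunctive set, then pooling each
# person's partial list lets work start on more obstacles at once than any
# single person's view allows. All numbers and rates are arbitrary assumptions.
import random

N_OBSTACLES = 20              # assumed size of the finite obstacle set
ALL = set(range(N_OBSTACLES))


def steps_to_clear(initially_known, discover_per_step=1, solve_per_step=2):
    """Steps until every obstacle is solved, given that only known obstacles
    can be worked on and unknown ones are discovered gradually."""
    known = set(initially_known)
    solved = set()
    steps = 0
    while solved != ALL:
        # discover a few previously unknown obstacles this step
        unknown = list(ALL - known)
        random.shuffle(unknown)
        known.update(unknown[:discover_per_step])
        # work on some of the known-but-unsolved obstacles
        backlog = list(known - solved)
        random.shuffle(backlog)
        solved.update(backlog[:solve_per_step])
        steps += 1
    return steps


random.seed(0)
# five people, each initially aware of only 6 of the 20 obstacles
views = [set(random.sample(sorted(ALL), 6)) for _ in range(5)]

solo = steps_to_clear(views[0])               # one person's partial list
pooled = steps_to_clear(set().union(*views))  # the merged, shared list
print(f"solo: {solo} steps, pooled: {pooled} steps")  # pooled is typically lower
```

Under these made-up rates the pooled run typically finishes in fewer steps, which is all the argument needs; the specific numbers carry no meaning.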
I’m someone with 4-year timelines who would love to be wrong. If you send me a message sketching what obstacles you think there are, or even just naming them, I’d be grateful. I’m not working on capabilities and am happy to promise never to use whatever I learn from you for that purpose, etc.
Imo we should have a norm of respecting requests not to act, if we wouldn’t have acted absent their post. Else they won’t post in the first place.
I think I agree with this in many cases, but I’m skeptical of such a norm when the request concerns criticism of the post or arguments for why a claim it makes is wrong. I agree that the specific request not to respond shouldn’t, ideally, make someone more likely to respond to the rest of the post, but neither should it make someone less likely to respond.