I am with him on this. The level of AI alarmism being put forward, especially in this community, is uncalled for. I was just reading Yudkowsky and Scott’s chat exchange, and all the doom arguments I captured were of the form “what if?”. How about we just return to the way we do engineering: keep building and innovating, and deal with negative side effects along the way?
To borrow Thiel’s analogies, the same could also be said by proponents of further development of nuclear weapons or of ‘gain-of-function’ research on viruses… which raises the interesting question of whether he intended his speech to be partially self-negating one level further in.
AI risk is still at another level of concern. If you ask me to list what can go wrong with gain-of-function research, I can probably cite a lot of things. But if you ask me what dangers LLMs pose to humanity, my list will be much more innocuous.
Current* large language models are not general intelligences. This community is mostly concerned with existential risk from future AIs, not the extremely minor risks from misuse of current AIs.
That’s exactly my point. We don’t even know what these future technologies will look like. Gain-of-function research has potential major negative effects right now, so I think it’s reasonable to be cautious. AI is not currently at that point. It may be in the future, but by then we will be better equipped to deal with it and to assess the risk-benefit profile we are willing to put up with.
but by then we will be better equipped to deal with it
This is precisely the point with which others disagree, especially the implicit assertion that we will be sufficiently equipped to handle the problem rather than just “better” equipped.
That’s still a theoretical problem; something we should consider but not update on too heavily, in my opinion. Besides, can you think of any technology where people foresaw its development and specialists managed to plan a successful framework before implementation? That wasn’t the case even with nuclear bombs.
Besides, can you think of any technology where people foresaw its development and specialists managed to plan a successful framework before implementation?
That’s part of the reason why Eliezer Yudkowsky thinks we’re doomed and Robin Hanson thinks we shouldn’t try to do much now. The difference between the two is take-off speed: for EY, we either solve alignment before the arrival of superintelligence (which he thinks is unlikely) or we are doomed; RH thinks we will have time to make alignment work as superintelligence arrives.
Well, Eliezer is the one making extraordinary claims, so I think I am justified in applying a high dose of skepticism until evidence of AI acting severely against humanity’s best interests turns up.
Hm, I think this is way too confident a take. It is possible that LLMs simply can’t scale, but you should avoid making such a (rightly) controversial claim so flatly in a response to someone.
Are you able to steelman the argument that AI is an existential risk to humanity?
Well… Eliezer does think we’re doomed, so that doesn’t necessarily contradict his worldview.
Added a word then.