If we don’t understand the forward pass of an LLM, then by this use of “understanding” there are lots of other things we don’t understand that we are nevertheless deeply comfortable with.
Solid points; I think my response to Steven broadly covers them too, though. In essence, the reasons we’re comfortable with some phenomenon/technology usually aren’t based on just one factor. And I think in the case of AIs, the assumption they’re legible and totally comprehended is one of the load-bearing reasons a lot of people are comfortable with them to begin with. “Just software”.
So explaining how very unlike normal software they are – that they’re as uncontrollable and chaotic as weather, as moody and incomprehensible as human brains – would… not actually sound irrelevant, let alone reassuring, to them.
I think this is more of a disagreement on messaging than a disagreement on facts.
I don’t see anyone disputing the “the AI is about as unpredictable as weather” claim, but it’s quite a stretch to summarize that as “we have no idea how the AI works.”
I understand that abbreviation and exaggeration can be optimal in public messaging, but I don’t think this post distinguishes clearly enough between direct in-group claims and examples of public messaging.
I would break this into three parts, to avoid misunderstandings from poorly contextualized language:
1. What is our level of understanding of AIs?
2. What is the general public’s expectation of our level of understanding?
3. What’s the best messaging to resolve this probable overestimation?