Part of my point is that there is a difference between the fact of the matter and what we know. Some things are safe despite our ignorance, and some are unsafe despite our ignorance.
Sure, I agree with that, and so perhaps the title should have been “Systems that cannot be reasonably claimed to be unsafe in specific ways cannot be claimed to be safe in those ways, because what does that even mean?”
If you say something is “qwrgz,” I can’t agree or disagree; I can only ask what you mean. If you say something is “safe,” I generally assume you are making a claim about something you know. My problem is that people claim that something is safe despite not having stated any idea of what they would call unsafe. But again, that seems fundamentally confused about what safety means for such systems.
I would agree more with your rephrased title.
People do actually have a somewhat-shared set of criteria in mind when they talk about whether a thing is safe, though, in a way that they (or at least I) don’t when talking about its qwrgzness. For example, if it kills 99% of life on earth over a ten-year period, I’m pretty sure almost everyone would agree that it’s unsafe. No further specification work is required. It doesn’t seem fundamentally confused to refer to a thing as “unsafe” if you think it might do that.
I do think that some people are clearly talking about meanings of the word “safe” that aren’t so clear-cut (e.g. Sam Altman saying GPT-4 is the safest model yet™️), and in those cases I agree that these statements are much closer to “meaningless”.
The people in the world who actually build these models are doing the thing that I pointed out. That’s the issue I was addressing.
I don’t understand this distinction. If “I’m pretty sure almost everyone would agree that it’s unsafe,” that’s an informal but concrete criterion for the system being unsafe, and it would not be confused to say something is unsafe if you think it could do that, nor to claim that it is safe if you have clear reason to believe it will not.
My problem is, as you mentioned, that people in the world of ML are not making that class of claim. They don’t seem to ground their claims about safety in any conceptual model whatsoever of what the risks or possible failures are, and that does seem fundamentally confused.