I think arguing against Platonism is a good segue into arguing for pragmatism. We often use the word “knowledge” in different ways in different contexts, and I think that’s fine.
When the context is about math we can “know” statements that are theorems of some axioms (given either explicitly or implicitly), but we can also use “know” in other ways, as in “we know P!=NP but we can’t prove it.”
And when the context is about the world, we can have “knowledge” that’s about correspondence between our beliefs and reality. We can even use “knowledge” in a way that lets us know false things, as in “I knew he was dead, until he showed up on my doorstep.”
I don’t think this directly helps with ELK, but if anything it highlights a way the problem can be extra tricky—you have to somehow understand what “knowledge” the human is asking for.
To be pedantic, “pragmatism” in the context of theories of knowledge means “knowledge is whatever the scientific community eventually agrees on” (or something along those lines—I have not read deeply on it). [A pragmatist approach to ELK would, then, rule out “the predictor’s knowledge goes beyond human science” type counterexamples on principle.]
What you’re arguing for is more commonly called contextualism. (The standards for “knowledge” depend on context.)
I totally agree with contextualism as a description of linguistic practice, but I think the ELK-relevant question is: what notion of knowledge is relevant to reducing AI risk? (TBC, I don’t think the answer to this is immediately obvious; I’m unsure which types of knowledge are most risk-relevant.)
Pragmatism’s a great word, everyone wants to use it :P But to be specific, I mean more like Rorty (after some Yudkowskian fixes) than Peirce.
Fair enough!