To be pedantic, “pragmatism” in the context of theories of knowledge means “knowledge is whatever the scientific community eventually agrees on” (or something along those lines—I have not read deeply on it). [A pragmatist approach to ELK would, then, rule out “the predictor’s knowledge goes beyond human science” type counterexamples on principle.]
What you’re arguing for is more commonly called contextualism. (The standards for “knowledge” depend on context.)
I totally agree with contextualism as a description of linguistic practice, but I think the ELK-relevant question is: what notion of knowledge is relevant to reducing AI risk? (To be clear, I don’t think the answer to this is immediately obvious; I’m unsure which types of knowledge are most risk-relevant.)
Pragmatism’s a great word; everyone wants to use it :P But to be specific, I mean more like Rorty (after some Yudkowskian fixes) than Peirce.
Fair enough!