Let's say you want to seed an AGI with knowledge, for example the statement that human blood is red. Some humans then get gene therapy that turns their blood blue. The AGI reasons that those entities aren't humans anymore and therefore don't need to be treated according to the protocol for treating humans. Or it gets confused by other issues around 'red'.
The only way around this is to specifically tell the FAI the context in which the statement is true. If you look at Eliezer's post about truth, you won't find any mention that context is important and needs to be passed along.
It’s just not there. It’s not a central part of his idea of truth. Instead he talks about probabilities of things being true.
None of my Anki cards says "this is true with 0.994% probability". Instead, each has a reference to a context. That's because, for most knowledge, context is more important than probability. In the framework that Eliezer propagates, probability is of core importance.
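To make the contrast concrete, here is a rough sketch in Python; the structures and names are made up for illustration, not taken from anyone's actual system:

```python
# Hypothetical sketch: a statement that only carries a probability next to
# one that also carries the context it is supposed to hold in.
from dataclasses import dataclass

@dataclass
class ProbabilisticFact:
    statement: str
    probability: float  # "true with probability p"; the context stays implicit

@dataclass
class ContextualFact:
    statement: str
    context: str  # the frame in which the statement is meant to hold

fact_without_context = ProbabilisticFact("Human blood is red", probability=0.994)
fact_with_context = ContextualFact(
    "Human blood is red",
    context="everyday observation of freshly spilled, unmodified human blood",
)
```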
That isn't an example where a truth value depends on context; it's an example where making the correct deductions depends on the correct theoretical background.
However, I agree that quantifying what you haven't first understood is pointless.
The thing is that having blood that turns out to be red when freshly spilled is neither a sufficient nor a necessary condition for being human.
For the AGI it's necessary to distinguish sentences that give it sufficient or necessary conditions from sentences that report observations. Those two kinds of statements belong to different contexts.
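A minimal sketch of that distinction, with labels I made up for illustration:

```python
# Made-up labels; the point is only that the two kinds of statement
# must not be collapsed into one.
from enum import Enum

class StatementKind(Enum):
    CONDITION = "necessary or sufficient condition for membership in a category"
    OBSERVATION = "report of what has been observed so far"

# The blue-blood failure above comes from storing the sentence under the
# wrong kind:
intended_reading = ("Human blood is red", StatementKind.OBSERVATION)
mistaken_reading = ("Human blood is red", StatementKind.CONDITION)
```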
We have plenty of smart people on LW and plenty of people who use Anki, but we don't have a good Anki deck that teaches knowledge about rationality. I think errors in thinking about knowledge and about the importance of context prevent people from expressing their knowledge about rationality explicitly via a medium such as Anki cards.
I think that has a lot to do with people searching for knowledge that's objectively true in all contexts instead of focusing on knowledge that's true in one context. If you switch to your statement just being true in one context that you explicitly define, you can suddenly start to say stuff.
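As a made-up illustration (the field names are my own), the kind of context-scoped card I mean looks roughly like this:

```python
# A hypothetical card whose truth is scoped to an explicitly named context.
card = {
    "context": "W3C/CSS named colors in the sRGB color space",
    "front": "Which named color is #228B22?",
    "back": "forestgreen",
}
```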
When it comes to learning color terms, I did have LW people saying roughly: "You can't do this; objective truth is different. Your monitor doesn't show true colors."
That was probably me. You can define colors objectively; it's not hard. That includes colors in a digital image, given a color space. However, what you see on your computer monitor may (and does) differ considerably from the reference standard for a given color.
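For instance, assuming the sRGB color space, a hex triplet like #228B22 pins down a specific point in CIE XYZ; a rough sketch of the conversion:

```python
# Sketch: what "#228B22" denotes once you fix the color space to sRGB.
# Uses the standard sRGB transfer function and the sRGB-to-XYZ matrix (D65).

def srgb_hex_to_xyz(hex_code: str):
    # Parse "#RRGGBB" into three channel values in [0, 1].
    r, g, b = (int(hex_code.lstrip("#")[i:i + 2], 16) / 255.0 for i in (0, 2, 4))

    def to_linear(c):
        # Undo the sRGB transfer function ("gamma") to get linear light.
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = (to_linear(c) for c in (r, g, b))
    # Linear sRGB -> CIE XYZ for the D65 white point.
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z

print(srgb_hex_to_xyz("#228B22"))  # forestgreen as a point in CIE XYZ
```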
You can settle on some definition, but that's not the core of the issue. The core of the issue is that my context-based notion of truth lets me write decent Anki cards that I couldn't write if I didn't think in a context-based frame.
As I said above, settling on a reference standard isn't a trivial decision. The W3C standard, for example, is pretty stupid in how it defines the distance between colors. I also can't easily find a defined reference standard for what #228B22 (forestgreen) is. Could you point me to a standard document that defines "the reference standard"?
The core issue is whether your argument amounts to context-based truth or context-based meaning.
Sure.
If you are crediting an AI with the ability to understand plain English, you are crediting it with a certain amount of ability to detect context in the first place.
I haven't said anything about plain English. If you look at Cyc's idea of red, I don't think you'll find the notion that red means different things depending on context.