It still seems to me that correct answers to the big problems do require a rigid logical map, and the fact that our brains do not operate on strict logic is beside the point. It may be completely impossible for humans to create, learn, and use such a language in practice, and if so perhaps we are actually doomed, but I’d like to fork that into a separate discussion. And as I posted in a response to Vladimir, if it helps clarify my question, I’m not proposing a widely used language, only a highly specialized one created for working on FAI and/or dissolving “philosophical” issues.
I’d love to see a more detailed analysis of your position; as I implied earlier, your bullet points don’t seem to address my central question, unless I’m just not making the right connections. It sounds like you’ve discussed this with others in the past; are there any conversations you could link me to, perhaps?
I may have read too much into the first and second sentences of your post—I felt that you were suggesting that the only way for us to achieve sufficient rationality to work on FAI or solve important problems would be to start using Lojban (or similar) all the time.
So my response to using a language purely for working on FAI is much the same as Vladimir’s—sounds like you’re talking more about a set of conventions like predicate logic or maths notation than a language per se. Saddling it with the ‘language’ label is going to lead to lots of excess baggage, because languages as a whole need to do a lot of work.
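To make the contrast concrete, here is the standard textbook illustration (my own example, nothing FAI-specific) of what a formal notation buys you: the English sentence “everyone loves someone” is ambiguous between two readings, and predicate logic forces you to pick one:

$$\forall x\,\exists y\,\mathrm{Loves}(x,y) \quad \text{(each person loves at least one person, possibly different ones)}$$

$$\exists y\,\forall x\,\mathrm{Loves}(x,y) \quad \text{(there is a single person whom everyone loves)}$$

The notation does that one job well precisely because it isn’t also being asked to do all the other work a natural language has to do.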
It sounds like you’ve discussed this with others in the past
It’s the argument nearly anyone with any linguistic knowledge will have with countless people who think that language would be so much better if it were less ambiguous and we could just say exactly what we meant all the time. No convenient links, though, sad to say.
It still seems to me that correct answers to the big problems do require a rigid logical map
Such as decision theories?
Apologies, I can see how you would have assumed that; my OP wasn’t as clearly formed as I thought.
I think one of my main confusions may be ignorance of how dependent DT, things like CEV, and metaethics are on actual language, rather than being expressed in a mathematical notation that is uninfluenced by potentially critical ambiguities inherent in evolved language. My OP actually stemmed from jimrandomh’s comment here, specifically jim’s concerns about fuzzy language in DT. I have to confess I’m (hopefully understandably) not up to the challenge of fully understanding the level of work jim and Eliezer and others are operating on, so this question of language dependence is very hard for me to judge.
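For what it’s worth, the kind of ambiguity-free expression I have in mind is what decision theory already looks like in its simplest textbook form (just the standard expected-utility formula, not anything specific to the work jim or Eliezer are doing):

$$EU(a) = \sum_{o} P(o \mid a)\, U(o)$$

That is, the value of an action a is the probability-weighted sum of the utilities of its possible outcomes o; whether CEV or metaethics can be pinned down that cleanly is exactly the part I can’t judge.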