There’s a confusing and wide-ranging literature on embodied cognition and on how the human mind uses metaphors to conceive of many of the objects and events in the world around us.
Brain emulations, presumably, would have a similar capacity for metaphorical thinking and, more generally, symbolic processing. It may be worth keeping in mind, though, that some metaphors depend on the physical constitution of the being using them, and they may either fail to refer in an artificial virtual system, or bear no significant relation to the way its sensors and effectors engage its environment.
So metaphors such as thinking of goals as destinations, of containers as bounded in particular ways, and of ‘through’ as a series of action steps seen from a first-person perspective may not be equally understandable or processable by these emulations (especially as they continue to interact with the world in ways inconsistent with our understanding of those metaphors).
I will comment on this more fully when we discuss indirect normativity and CEV-like approaches. There I’ll try to make salient that, despite our best efforts to “de-naturalize” ethics and to ask the AGI to perform in a fail-safe manner, we haven’t even been able to avoid some of the most deeply ingrained metaphors in our description of CEV as an empirical moral question.
As an aside, when you’re speaking of these embodied metaphors, I assume you have in mind the work of Lakoff and Johnson (and/or Lakoff and Núñez)?
I’m sympathetic to your expectation that a lack of embodiment might create a metaphor “mistranslation”. But I would expect that any deficits could be remedied either through virtual worlds or through technological replacements of sensory equipment. Traversing a virtual path in a virtual world should be just as good a source of metaphor/analogy as traversing a physical path in the physical world, no? Or, if it weren’t, then inhabiting a robotic body equipped with a camera and wheels could stand in for the body as well, for the purposes of learning and developing embodied metaphors.
What might be more interesting to me, albeit perhaps more speculative, is to wonder about the kinds of new metaphors that digital minds might develop that we corporeal beings might be fundamentally unable to grasp in quite the same way. (I’m reminded here of an XKCD comic about “what a seg-fault feels like”.)
Yes, I did mean Lakoff and similar work (such as conceptual blending by Fauconnier, and “Analogy as the Fuel and Fire of Thinking” by Hofstadter).
Yes, virtual equivalents would (almost certainly) perform the same operations as their real equivalents.
No, this would not solve, or get anywhere near solving, the problem, since what matters is how the metaphors work: how source and target domains are acquired, to what extent the machine would be able to perform the equivalent transformations, and so on.
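To make the source/target-domain point concrete, here is a toy sketch of my own (purely illustrative; the names and the mapping are made up, not drawn from Lakoff or from this discussion). On the structure-mapping view, a conceptual metaphor such as GOALS ARE DESTINATIONS pairs a source domain (space and motion) with a target domain (purposeful activity), and inferences drawn in the source domain are carried over to the target through the mapping:

```python
# Toy illustration of a conceptual metaphor as a cross-domain mapping.
# GOALS ARE DESTINATIONS: source domain = space/motion (acquired through
# bodily experience), target domain = purposeful activity (abstract).
SOURCE_TO_TARGET = {
    "traveler": "agent",
    "destination": "goal",
    "path": "plan",
    "obstacle_on_path": "difficulty",
    "reaching_destination": "achieving_goal",
}

def transfer_inference(source_fact):
    """Re-describe a relational fact from the source domain in the target
    domain, by mapping each argument through the metaphor's pairing."""
    relation, *args = source_fact
    return (relation, *[SOURCE_TO_TARGET.get(a, a) for a in args])

# An inference grounded in embodied experience of motion...
source_inference = ("blocks", "obstacle_on_path", "reaching_destination")
# ...carries over to the abstract domain of goal pursuit:
print(transfer_inference(source_inference))
# ('blocks', 'difficulty', 'achieving_goal')
```

The worry raised above is precisely that for an emulation without a human-like body, the *source* column of such a mapping would be acquired differently (or not at all), so the transferred inferences need not line up with ours, regardless of whether the virtual world faithfully mimics paths and obstacles.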
So this is not something that (a) is already solved, nor (b) is on the agenda of what is expected to be solved in due time. Furthermore, it is (c) not currently being tackled by researchers at FHI, CSER, FLI, or MIRI (so it is neglected). Someone might be inclined to think that indirect normativity will render this problem moot, but this is not the case either, which we can talk about more when indirect normativity appears in the book.
Which means it has high expected value to work on. If anyone wants to work on this with me, please contact me; I’ll help you check whether you also think it is neglected enough to be worth your time.
Note that neither determining whether solving this is relevant, nor taking many of the initial steps towards actually solving it, requires advanced mathematical knowledge.