This might be relevant: I may have some quite rare experiences, since for some time I was (at least) a sighted human who echolocated. That edge case might be a problem for some of the options.
If there is a general processing algorithm, it faces some challenges that are more naturally solved in a “biologically hardwired” view. If sensory processing responds to experienced data, and that data differs between individuals, why does a species have a relatively stable brain-region map? Shouldn’t individuals have processing units in pretty random places? (Stop here if you want to think through your own stance; mine follows.) The sense apparatus (ears, tongue) is biologically hardwired, the lowest-level sensory processing is likely directly adjacent to it, and higher-level processing directly adjacent to that level. Just as it is hard to learn advanced mathematical concepts before learning the fundamentals, the data dependencies of perception project into spatial adjacency constraints on wiring. The naivest view of this sort would say that the visual processing system should be in the front of the brain rather than the back. One could also mix the views: some wires are more or less hardcoded, and different kinds of basic substrate are more easily employable for some kinds of tasks. For example, vision probably benefits from massive parallelization while audio doesn’t really, but time resolution is important for audio and not so much for vision.
I happen to believe that humans can learn to echolocate (despite knowing there is at least one philosophy paper arguing that humans can’t know what it is like to be a bat), but bats learn it consistently. If humans do have ears with the frequency and time resolution necessary for the operations, why don’t they naturally pick up the skill? Blind people are in a special position here, in that they have an incentive to echolocate because it is their only access to remote 3D perception. As a sighted echolocator, I found it comparatively much more burdensome to hear the environment than to see it. There are some edge-case benefits: sound can bend and reflect around corners, which allows perception of objects out of line of sight, and different objects are opaque to light versus sound (it was very funny to realise that windows are very transparent to light but very “bright”, well, hard, in audio). But given the option, vision starves audio of the development of audio skills.
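To put rough numbers on the “time resolution” point, here is a back-of-the-envelope sketch (my own illustration, not from any source; the distances are arbitrary example values): echoes from everyday objects come back after milliseconds, which is coarse compared to the timing differences hearing already resolves for direction finding.

```python
# Back-of-the-envelope: round-trip echo delays at everyday distances.
# (Illustration only; the distances are arbitrary example values.)

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def echo_delay_ms(distance_m: float) -> float:
    """Delay before a click's echo returns from an object at distance_m."""
    return 2.0 * distance_m / SPEED_OF_SOUND * 1000.0

for d in (0.5, 1.0, 2.0, 5.0):
    print(f"object at {d:.1f} m -> echo after ~{echo_delay_ms(d):.1f} ms")
# object at 0.5 m -> echo after ~2.9 ms
# object at 1.0 m -> echo after ~5.8 ms
# object at 2.0 m -> echo after ~11.7 ms
# object at 5.0 m -> echo after ~29.2 ms
```

So the raw timing signal is well within what human hearing can resolve; the missing piece seems to be processing and practice rather than the hardware, which is where the next paragraph goes.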
In general there might be attention and software limitations too, not just data. Part of why I picked up the skill was that I watched a TV program featuring a blind echolocator and was super interested in whether it could be learned. Learning took time that was sustained by my interest, and possibly by the knowledge that it might be possible. Being interested in random obscure things might be part of my neurotype. But as the skill progressed there was a shift, and I felt that the new kind of processing shared similarities with vision. I have encountered a joke about drugs, that “smell blue” is an impossible experience, but part of getting echolocation to work was to “see audio” rather than “hear” it. It felt like a subskill that, once I realised I could apply it to hearing, suddenly made me much more capable of constructing richer experiences. I could locate in a 3D field where the echo bounces were happening. And it had sufficient automaticity that when I was not actively trying to make anything sensory happen, just walking down the street, I was suddenly surprised/fearful of something to my left. It turned out to be a car ramp suddenly opening into the building where previously there had been a straight, close wall along the street. Reflecting on the experience, I figured that the sound source was my own footsteps, and that I noticed a perception change and placed it in 3D before being aware of which channels I had perceived it through. But someone who doesn’t have the lower-level sensory experiences can’t use them to learn about their environment’s audio structure. It was a typical public street, but others walking on it probably don’t have experiences that help develop their ears.

Deeper rumination on the issue also led me to believe that there are earlier “ear-skill levels”. While humans are typically thought of as stereo-hearers, i.e. able to sense which direction a sound is coming from, it is not uncommon for people to fail to be stereo-hearers and only hear that a sound is happening, not where, i.e. to be “mono-hearers” (there is a mouse here, is it in the floor, the roof or the ceiling?). Yet common language often refers to a single binary: “can hear? yes/no”.
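As a side sketch of what the “stereo-hearer” level amounts to computationally, here is a minimal toy example under my own simplified assumptions (two ears as two microphones, one noise burst, no echoes, a guessed head width; an illustration, not a claim about how the brain does it): estimate the interaural time difference by cross-correlating the two channels and convert it into a rough direction.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
EAR_DISTANCE = 0.2       # m, rough assumed head width
SAMPLE_RATE = 96_000     # Hz, arbitrary simulation rate

def simulate_ears(angle_deg: float, n: int = 4096) -> tuple[np.ndarray, np.ndarray]:
    """Make a noise burst arrive at the two ears with the delay the source angle implies."""
    rng = np.random.default_rng(0)
    source = rng.standard_normal(n)
    itd_s = EAR_DISTANCE * np.sin(np.radians(angle_deg)) / SPEED_OF_SOUND
    shift = int(round(itd_s * SAMPLE_RATE))
    # Circular shift is a simplification that is fine for this toy example.
    left = np.roll(source, max(shift, 0))     # left ear hears later for a source on the right
    right = np.roll(source, max(-shift, 0))
    return left, right

def estimate_angle(left: np.ndarray, right: np.ndarray) -> float:
    """Recover the direction from the lag that best aligns the two channels."""
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)   # samples the left channel lags behind
    itd_s = lag / SAMPLE_RATE
    sin_val = np.clip(itd_s * SPEED_OF_SOUND / EAR_DISTANCE, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_val)))

left, right = simulate_ears(30.0)
print(f"estimated direction: ~{estimate_angle(left, right):.0f} degrees to the right")
```

A “mono-hearer” in this picture would be someone whose processing effectively collapses the two channels before any such comparison: the sound is detected, but the lag that carries the direction never gets extracted.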