I once saw a blind kid on TV who had developed a way of clicking with his mouth that he could use to navigate sidewalks. This was pretty cool, and it made me pay attention to my own sense of hearing and wonder what it must be like to use that kind of ability. I paid close attention to situations where it might be possible to hear the position of walls and so on. Doing this for some time changed my relationship to my hearing.
I became aware of when a sound is louder because additional bounces of wave energy hit my ear rather than only the direct line-of-sight propagation. I picked up the threshold at which I hear the primary sound and its echo as one simultaneous sound versus as two separate sounds. After paying attention to things I theoretically knew the reasons for, I could tap into various “feels” in the sound. My mind somehow clicked and connected geometric forms to the echo timing profile. I consciously understand only discrete sounds, but the prolonged, direction-changing, continuous echo that a sloped wall makes I could sense intrinsically. And I found out that, for example, claps are very directional: you can cast different claps at a wall much like you would shine a flashlight.
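To put rough numbers on that threshold, here is a back-of-the-envelope sketch. The ~343 m/s speed of sound and the few-tens-of-milliseconds fusion window are textbook ballpark figures I am assuming here, not something I measured:

```python
# Rough sketch: at what wall distance does the echo separate from the
# original click? Assumes sound travels ~343 m/s and that echoes arriving
# within a few tens of milliseconds tend to fuse with the direct sound
# (ballpark assumptions, not measurements).
SPEED_OF_SOUND_M_S = 343.0
FUSION_WINDOW_S = 0.040  # assumed ~40 ms; the real threshold varies with the sound and listener

for wall_distance_m in (1, 2, 5, 10, 20):
    echo_delay_s = 2 * wall_distance_m / SPEED_OF_SOUND_M_S  # out to the wall and back
    verdict = "fuses with the click" if echo_delay_s < FUSION_WINDOW_S else "heard as a separate echo"
    print(f"wall at {wall_distance_m:>2} m: echo after {echo_delay_s * 1000:5.1f} ms -> {verdict}")
```

Under these assumptions the crossover lands at roughly 7 m to the wall; the exact figure shifts with the kind of sound.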
All in all, my sense of hearing became much more like my sense of sight, with good 3D structure. Experiencing this new way of hearing was very interesting and cool. However, once I had settled into hearing like an echolocator, I had trouble conceptualising what it is like not to hear that way. My guess is that if you don’t pay that much attention, a lot of information goes unextracted. But it was a big surprise that it wasn’t “obvious” how much information a given act of hearing includes. I didn’t gain a better ear. The amount of information I was receiving must have stayed the same; I guess I previously just couldn’t structure it properly.
And I realised that I had at least two hearing modes even before this new “3D” mode. There is a mono mode where you can decipher what kind of sound it is and recognise what is causing it, but you only know that it is “nearby, within hearing distance”; you can’t face the sound and have to look for visual clues about where it is coming from. Then there is a kind of “arrow” mode where you know which direction to look in. But it is pretty cool that in “3D” mode I can hear around a corner what kind of space is there, which I can’t do in “arrow” mode.
Thinking about how sound waves work, it makes sense how perception changes between “mono” and “arrow” mode. If you are in an empty room and make a big enough noise, there is significant echo arriving from every direction. Without being able to read the timing fine structure, it feels like the sound is coming from everywhere. However, if you don’t make quite as much noise in the same kind of room, the component travelling directly towards you will dominate the echoes. There is also an explanation for why the “arrow” isn’t a pinpointer but a fuzzy approximation: when you try to read texture/shape information as location information, it gives a slightly contradictory result.
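For the loudness side there is a similarly rough sketch below, assuming simple inverse-square spreading and a few dB of made-up loss at the wall; it shows how far below the direct sound even a single clean echo sits:

```python
# Rough sketch: how much quieter a single wall echo is than the direct
# sound of a self-made click, assuming inverse-square spreading and a small
# loss at the reflection. All numbers are illustrative assumptions.
import math

WALL_LOSS_DB = 3.0  # assumed energy lost at the reflection

def level_db(path_m, extra_loss_db=0.0):
    """Level relative to the same source heard from 1 m away, in dB."""
    return -20.0 * math.log10(path_m) - extra_loss_db

direct_path_m = 0.2                      # roughly mouth to ear for a self-made click
echo_path_m = 2 * 4.0 + direct_path_m    # out to a wall 4 m away and back

direct_db = level_db(direct_path_m)
echo_db = level_db(echo_path_m, WALL_LOSS_DB)
print(f"direct: {direct_db:+.1f} dB, wall echo: {echo_db:+.1f} dB "
      f"({direct_db - echo_db:.0f} dB quieter)")
```

The ratio stays the same whatever the source volume; what changes with a quieter sound is whether the echoes clear the background noise at all, which would explain why quiet sounds collapse back toward the “mono” impression.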
I am using language here as if I first feel a certain way, then get puzzled about why it would feel that way, and then start theorising. It’s worth noting that having more theory won’t by itself give you insight into what your experience is. It was kind of mind-opening to be able to target those feelings relatively theory-free and then have the joy of finding the explanation. For example, sound propagation first felt “waterlike”, and only afterwards did I confirm that this makes perfect sense, as the waves are not equal in strength in all directions and do dampen as they propagate.
At first I couldn’t confirm that I wasn’t just reading too much into what I was supposedly experiencing, that I wasn’t just pretending to experience things because I wanted to experience them that way. But after I acquired the skill, I would passively pick up sounds and have 3D impressions of them when not actively trying to hear anything (and usually be startled by it), and only then turn to look. The expectations formed by hearing were confirmed by sight, so this was a legitimate change in perception. For example, I would ride my bike past a post and suddenly be very aware of something square on my right, the wheel sounds giving enough echo basis that the post popped out against the background a lot more than it does visually. Or alleys would make a sudden echo chamber on an otherwise echoless street. I also found out that glass sticks out a lot more than other materials (“oh, there is a large object to my right; oh, it’s just a window”).
As far as I am concerned, I have discovered what it is like to be an echolocator, which I guess is supposed to be the main alien part of the bat metaphor. There is also a joke about how drugs make you “taste blue”, but I have come to experience how it makes sense to “see sound”. However, the behavioural effects of this different kind of experiencing are not that telling or direct. I would not pass the vampire Turing test, but that isn’t really to the point; the test would need to be refined to capture this, and it is not trivial how that would be done.
The operation that made me undergo this change seems to be paying attention. It doesn’t seem to be that I learned a new fact, although I can clearly see that having a theory of why I am feeling what I am feeling did have a guiding effect. Maybe call it an imagination aid? I would say it might be a deficiency in understanding, not in knowledge, that keeps people from being able to experience what bats experience. And it is possible for humans to understand what it is to be an echolocator. I would guess that if I had sufficiently clear descriptions of what kinds of “facets” my perceptions include, I should be able to play out how I would experience a situation if I had that kind of sense. So I think it might be possible to imagine seeing four primary colors, but it takes skill in this “pay attention to your qualia structures” thing that people are not in general very good at.
How long did it take to build this skill, and how did you do it?
Around 3-4 weekends, although being actively interested in the sounds around you is a big part of it, and that happened between the more intense sessions. I found that seeking out and considering edge cases, ones right at the limit of your perception, is what develops the skill most. I used a walk-in closet to familiarise myself with the direct sound in contrast to the echo; empty rooms are actually noisy by comparison. The drop in volume in the closet is significant enough that there is a clear difference in the effort needed to produce an equally loud sound, even in mono mode. I also tried to have a reference sound I could produce uniformly in a variety of places without disturbing other people. One was clicking the base of my tongue against my palate. However, this is a little confusing, as the acoustics inside the head are not the most straightforward and interfere with the external acoustics. I also had a button I would click in and out of place. The trouble with that was that it often had too little volume to get a proper feel for the environment.
One must not forget simply being curious about sounds that happen to be in the environment. Emergency vehicles are a great source of Doppler shift, and their volume output is really high. In urban areas there are plenty of clear surfaces and gaps between surfaces, giving the moment a nice variable microstructure. In more wide-open spaces the scale of things makes it easier to pick up on the echo components. Cars in general provide a pretty monotone moving sound source. Riding a bike also provides a constant mechanical noise whose position relative to you is fixed and doesn’t tire you out to generate (plus it is a socially acceptable way of being noisy (you can even get away with devices explicitly designed to generate noise (at least if you are young enough))). I didn’t really use them myself, but cell phone button/UI noises should be pretty standard, narrow, and somewhat acceptable.
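For a rough sense of how big the Doppler effect from a passing vehicle is, here is a small sketch; the 700 Hz siren pitch and ~54 km/h speed are assumed example values, not measurements:

```python
# Rough sketch of the Doppler shift from a passing siren.
# The 700 Hz pitch and 15 m/s (~54 km/h) speed are assumed example values.
SPEED_OF_SOUND_M_S = 343.0
siren_hz = 700.0
vehicle_speed_m_s = 15.0

approaching_hz = siren_hz * SPEED_OF_SOUND_M_S / (SPEED_OF_SOUND_M_S - vehicle_speed_m_s)
receding_hz = siren_hz * SPEED_OF_SOUND_M_S / (SPEED_OF_SOUND_M_S + vehicle_speed_m_s)

# Roughly 732 Hz on approach dropping to about 671 Hz after it passes,
# i.e. around a semitone and a half of pitch drop.
print(f"approaching: {approaching_hz:.0f} Hz, receding: {receding_hz:.0f} Hz")
```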
In private areas, clapping has a pretty narrow sound profile, although it is quite directional, which can make the volume non-standard when you haven’t mastered it yet. Listening to the wall of a detached house by clapping could be done from within its yard. The smaller the scale you are working at, the higher you want the pitch to be (or you can only latch onto the higher components).
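The pitch-versus-scale point matches the usual wavelength rule of thumb: a surface or gap much smaller than the wavelength reflects poorly. A quick sketch with assumed frequencies:

```python
# Rough sketch: wavelength at different pitches, assuming sound at ~343 m/s.
# Features much smaller than the wavelength return little echo, which is the
# rule-of-thumb reason small-scale detail needs higher pitches.
SPEED_OF_SOUND_M_S = 343.0

for freq_hz in (200, 1_000, 4_000, 10_000):
    wavelength_m = SPEED_OF_SOUND_M_S / freq_hz
    print(f"{freq_hz:>6} Hz -> wavelength ~{wavelength_m * 100:.0f} cm")
```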
The main thing is to be aware and ready to perceive. It’s clearly a very learnable skill, the main obstacle being paying that much attention. I didn’t use any reference or ready-made learning materials. Having a goal was plenty to provide steps and structure (i.e. thinking that there are probably harder and easier sounds, focusing on what could determine how easy or hard a sound is, collecting a bunch of hearing experiences while focusing on what categories you can place them in, then using those categories to anticipate how to categorise novel experiences, etc.). You have ears: use them, play with them. Shockingly, most people really don’t. I have yet to work out what else could be achieved with this kind of “serious playing”. Being able to take findings into account “mid-flight” might be a critical thing that most learning alternatives lack. You won’t need that many repetitions, but they need to be level-appropriate (even as that level shifts).
https://en.wikipedia.org/wiki/Human_echolocation mentions some training courses, and checking their pages, they don’t talk in units of years or months but of short ‘workshops’, which usually means they won’t last more than 3-4 days. So with intense training, it may be learnable quickly.