I missed September’s and got ambushed by Akratic Goblins halfway through, but I think it still counts as “in the last month” (or did yesterday):
I went back to college and immediately did what I should have done years ago: made a program for viewing images as sound or braille. I never managed to compile it into a distributable .exe, though, so no one cared. But it’s still a huge step forward, one I’m frustrated didn’t happen at least a decade ago.
I went back to college and immediately did what I should have done years ago: made a program for viewing images as sound or braille. [...] But it’s still a huge step forward, one I’m frustrated didn’t happen at least a decade ago.
I’m pretty sure I saw a segment about this technology on Tomorrow’s World over a decade ago, but I don’t remember hearing a single thing about it since. I’d have thought it’d be an easy sell!
Maybe you saw The vOICe? It has a steep learning curve and doesn’t mesh well with what I want to use it for (and it works only in grayscale). Also, it doesn’t do braille output, so far as I know.
The vOICe uses sine waves to represent visual data: pitch for vertical position, volume for brightness, and stereo pan for horizontal position. My program treats regions of color as sound sources and positions them in 3D (I did want to use pitch for vertical, but this turned out less effective than I’d hoped), and lets the user map specific colors to wave files.
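To make the contrast concrete, that vOICe-style mapping fits in a few lines of Python. (This is my own rough sketch of the idea, not The vOICe’s actual code, and the frequency range and timing parameters are guesses.)

    import numpy as np

    def sonify(image, sample_rate=44100, col_duration=0.02,
               f_min=200.0, f_max=4000.0):
        # image: 2D array of brightness values in [0, 1], row 0 at the top.
        rows, cols = image.shape
        # Higher rows map to higher pitches, log-spaced like musical intervals.
        freqs = np.geomspace(f_max, f_min, rows)
        n = int(sample_rate * col_duration)
        t = np.arange(n) / sample_rate
        chunks = []
        for c in range(cols):
            # One sine per row; the pixel's brightness sets its volume.
            waves = image[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)
            mono = waves.sum(axis=0) / rows
            # Scan left to right, panning across the stereo field as we go.
            pan = c / max(cols - 1, 1)  # 0 = hard left, 1 = hard right
            chunks.append(np.column_stack([(1 - pan) * mono, pan * mono]))
        return np.concatenate(chunks)   # stereo samples, shape (N, 2)

Mine replaces that single sweeping bank of sines with per-color sound sources placed in 3D space, which is harder to sketch this briefly.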
I wouldn’t mind a head-to-head comparison of the two once mine has more features. The vOICe has been used in research on how vision works neurologically; I’d be all for putting mine through the same hoops once it’s stronger.
(You’d think the concept would be an easy sell, much like a free screen reader and electrostatic haptic displays. Of these three, only the free screen reader is catching on, and it’s still more or less the Linux to JAWS’s Windows.)
Thanks for pointing me to that website. Looking at one of the pages, I think what I saw was this:
On September 16, 1998, the BBC science program Tomorrow’s World featured a musical image to sound mapping devised by John Cronly-Dillon, a neurobiologist at the Department of Optometry and Vision Sciences at the University of Manchester (UMIST), UK. The broadcast showed examples in which the basic characteristics of transforming shapes into music for hearing images appeared identical to those employed by The vOICe [...] Cronly-Dillon’s implementation was a computer program without live visual input, but he mentioned plans for a future portable system with a camera.
Apparently it’s pretty similar to The vOICe. Both systems sound less flexible than yours could be, though, so you’re probably onto a good thing here!
(You’d think the concept would be an easy sell, much like a free screen reader and electrostatic haptic displays. Of these three, only the free screen reader is catching on, and it’s still more or less the Linux to JAWS’s Windows.)
Yikes.