Maybe you’ve seen The vOICe? It has a steep learning curve and doesn’t mesh well with what I want to use it for (and it works only in grayscale). It also doesn’t do braille output, so far as I know.
The vOICe uses sine waves to represent visual data: pitch for vertical position, volume for brightness, and stereo pan for horizontal position. My program instead treats regions of color as sound sources, positions them in 3D audio (I did want to use pitch for vertical, but that turned out less effective than I’d hoped), and lets the user map specific colors to wave files.
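To make the comparison concrete, here’s a rough Python sketch of that vOICe-style scheme (pitch for vertical, volume for brightness, stereo pan for horizontal). It’s only an illustration of the mapping as described above, not the actual vOICe code; the function name, frequency range, and scan duration are placeholders I made up.

```python
import numpy as np

SAMPLE_RATE = 44100

def image_to_audio(image, duration=1.0, f_low=200.0, f_high=2000.0):
    """Sweep an image left to right, turning each column into a stereo chunk.

    image: 2D array of brightness values in [0, 1]; row 0 is the top.
    Each row gets its own sine frequency (higher rows -> higher pitch),
    pixel brightness scales the amplitude, and the column's horizontal
    position sets the stereo pan.
    """
    rows, cols = image.shape
    samples_per_col = int(SAMPLE_RATE * duration / cols)
    t = np.arange(samples_per_col) / SAMPLE_RATE

    # One frequency per row, spaced logarithmically, top row highest.
    freqs = np.geomspace(f_high, f_low, rows)

    left, right = [], []
    for c in range(cols):
        # Sum the row sines for this column, weighted by pixel brightness.
        weights = image[:, c][:, None]                      # (rows, 1)
        chunk = (weights * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        pan = c / max(cols - 1, 1)                          # 0 = hard left, 1 = hard right
        left.append(chunk * (1.0 - pan))
        right.append(chunk * pan)

    stereo = np.stack([np.concatenate(left), np.concatenate(right)], axis=1)
    peak = max(np.abs(stereo).max(), 1e-9)                  # avoid dividing by zero
    return (stereo / peak).astype(np.float32)
```

Writing the returned buffer out with something like scipy.io.wavfile.write and feeding it a simple test image (a diagonal line, say) makes the left-to-right sweep and the rising pitch easy to hear.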
I wouldn’t mind a head-to-head comparison of the two once mine has more features. The vOICe has been used in research on how vision works neurologically, and I’d be all for putting mine through the same hoops once it’s stronger.
(You’d think the concept would be an easy sell, much like a free screen reader and electrostatic haptic displays. Of these three, only the free screen reader is catching on, and it’s still more or less the Linux to Jaws’ Windows.)
Thanks for pointing me to that website. Looking at one of the pages, I think what I saw was this:
On September 16, 1998, the BBC science program Tomorrow’s World featured a musical image to sound mapping devised by John Cronly-Dillon, a neurobiologist at the Department of Optometry and Vision Sciences at the University of Manchester (UMIST), UK. The broadcast showed examples in which the basic characteristics of transforming shapes into music for hearing images appeared identical to those employed by The vOICe [...] Cronly-Dillon’s implementation was a computer program without live visual input, but he mentioned plans for a future portable system with a camera.
Apparently it’s pretty similar to The vOICe. Both systems sound less flexible than yours could be, though, so you’re probably onto a good thing here!
Yikes.