Ugh. Deepest sympathies.
Your situation, and your reaction to it, highlight a great advantage of working in a knowledge profession—of identifying as what the LW community calls “rationalists”. On learning something like this, you can plan to be not just a passive sufferer of the disease, but a researcher of it from the inside, actively helping in the fight against it.
You can plan to learn all you can about the causes and progression of the disease, and be prepared for your losses as they happen. You can plan to investigate related areas—you mentioned voice synthesis, and Brain-Computer Interfaces also come to mind as a field that’s been moving along lately: still quite slow from what I’ve seen, but improving. If you can use BCI to play a video game, it’s not such a big stretch to imagine it controlling, say, a virtual avatar—the name “Second Life” takes on an altogether different meaning there. Being a software developer would, at any rate, definitely be handy in that situation.
I didn’t know about voice banking; that’s a fascinating idea, with all sorts of interesting implications (would one want to record non-verbal things like laughter; is there some way to program voice synthesis for singing; etc.). Can you maybe post a link to whoever provides the free service you mentioned? Especially if they can use financial support.
Best wishes.
The voice banking software I’m using is from the Speech Research Lab at the University of Delaware. They say they are in the process of commercializing it; hopefully it will still be free for the disabled. They’re probably not looking for donations, though.
Another interesting communications-assistance project is Dasher. They have a Java applet demo as well as programs for PC and smart phones. It does predictive input designed to maximize effective bandwidth. It’s a little confusing at first, but supposedly after some practice you can type quickly with only minimal use of the controls. I say supposedly because I haven’t used it much; it’s not yet clear what I’d be controlling it with. I should practice with it some more; it sounds likely to be part of an overall solution. It would be cool to control it with BCI: sit back and just think to type your messages.
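(For the curious: as I understand it, Dasher’s central trick is to give each candidate next character an amount of screen proportional to its predicted probability, so likely continuations become large, easy targets you steer into. Here’s a minimal sketch of that allocation step in Python; the probabilities are invented for illustration, since the real program uses an adaptive PPM-style language model.)

```python
# Minimal sketch of Dasher's central idea: give each candidate next
# character a slice of the screen proportional to its predicted
# probability, so likely continuations become big, easy targets.
# These probabilities are made up for illustration; they stand in for
# whatever the language model actually predicts.

def allocate_slices(probs, height=1000):
    """Map {char: probability} to {char: (top, bottom)} screen intervals."""
    total = sum(probs.values())
    slices = {}
    top = 0.0
    for ch, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        share = p / total * height
        slices[ch] = (top, top + share)
        top += share
    return slices

# After typing "th", 'e' is far more likely than the alternatives,
# so it occupies most of the screen and is easiest to steer into:
print(allocate_slices({'e': 0.60, 'a': 0.15, 'i': 0.10, 'o': 0.10, ' ': 0.05}))
```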
Everybody with ALS talks about how terrible it is, all the things you can’t do any more. But nobody seems to notice that there are all these things you get to do that you’ve never done before. I’ve never used a power wheelchair. I’ve never controlled a computer with my eyes. I’ve never had a voice synthesizer trained to mimic my natural voice. If I told people on the ALS forums that I was looking forward to some of this, they’d think I was crazy. Maybe people here will understand.
Hi Hal. I’m sorry to hear of your diagnosis.
I spent two years as the maintainer of Dasher, and would be happy to answer questions about it. It can use any single analog muscle for control as a worst case (and a two-axis precision device like a mouse as a best case). There’s a video of Dasher being used with one axis here—breath control, as measured by diaphragm circumference:
http://www.inference.phy.cam.ac.uk/dasher/movies/BreathDasher.mpg
and there are videos using other muscles (head tracking, eye tracking) here:
http://www.inference.phy.cam.ac.uk/dasher/Demonstrations.html
Head-mice (you put an infra-red dot on some glasses or your forehead, then just move your head to move the pointer) are a common and cheap input method, costing less than $100. Dasher is also very accepting of noisy input: if you oversteer in one direction, you can simply compensate later.
You’re not the first person to consider Dasher with BCI—here’s a slightly outdated summary:
http://www.inference.phy.cam.ac.uk/saw27/dasher/bci/
All the best,
Chris.
How confident are you of this? I’d be surprised if there weren’t some people there who understood.
I was listening to an interesting podcast recently with the fellows who founded patientslikeme.com. I have no idea what the community is like, but it may be an interesting resource you haven’t encountered yet.
http://itc.conversationsnetwork.org/shows/detail4118.html
I understand—it reminds me of the Max Barry story “Machine Man”, where the protagonist, a robotics researcher, loses a leg, so he designs an artificial one to replace it. Of course, it’s a lot better than his old leg… so he “loses” the other one. And of course, two out of four artificial limbs is just a good start (and so forth). I wouldn’t wish your condition on anyone, but you might just have been lucky enough to live in a time when the meat we were born with isn’t relevant to a happy life. Best wishes regardless.
I’ve played around briefly with Dasher, and like many of these alternate text-input methods, it isn’t designed with coding in mind. I can’t remember which forms of punctuation it uses, but the frequencies will be all wrong for code to start with.
What you really want is a cross between Dasher and Visual Studio-style auto-complete, so the words/letters it renders largest are the variables in scope, identifiers from included libraries, or the member functions of the object you’re accessing. You’ll probably need to specialize your tools to a single language to start with, which is a shame. Pick wisely!
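A hypothetical sketch of that combination, in Python: take whatever base probabilities the predictor assigns and boost completions that match identifiers currently in scope. Everything here (the function, the probabilities, the scope list) is invented for illustration; real scope information would have to come from a parser.

```python
# Hypothetical sketch: bias a predictive text model toward identifiers
# that are currently in scope, the way an IDE's auto-complete would.
# 'base_probs' stands in for whatever the predictor outputs; the scope
# list would come from parsing the surrounding code.

def bias_toward_scope(base_probs, scope_identifiers, prefix, boost=5.0):
    """Multiply the probability of any in-scope identifier matching the
    typed prefix, then renormalize so everything still sums to 1."""
    biased = dict(base_probs)
    for name in scope_identifiers:
        if name.startswith(prefix):
            biased[name] = biased.get(name, 0.01) * boost
    total = sum(biased.values())
    return {word: p / total for word, p in biased.items()}

# With 'print' and 'price_total' in scope, typing "pr" makes both loom
# large relative to ordinary English words:
print(bias_toward_scope({'print': 0.30, 'pride': 0.20, 'pretty': 0.10},
                        ['print', 'price_total'], 'pr'))
```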
I’d love to play around with controlling a computer with my eyes.
Dasher, at least the Mac version and presumably the other desktop versions, can be given a custom character set (including how they’re ordered and grouped), and you can feed it an arbitrary text file to learn frequencies from. If you feed it plenty of program text it should learn the common phrases just fine, though without context-specific completion as in an IDE.
(Update: The current Mac version seems to have an entirely nonfunctional preferences dialog, and has thus lost the character-set functionality (there is a blank list box where it should be). It feels like the app got released in the middle of development work; hopefully it’ll be fixed sometime. The basic functionality still works: I typed about a third of this paragraph using it before I got tired of the lack of uppercase and punctuation.)
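To get a feel for the frequency-learning step mentioned above: even a plain bigram count over a pile of source code shows how different code statistics are from English prose. A rough sketch in Python; this is just an illustration, not Dasher’s actual predictor, and the file path is a placeholder.

```python
# Rough illustration of learning character frequencies from a training
# file, in the spirit of feeding Dasher a corpus. This is a plain
# bigram count, not Dasher's real model, and 'my_project.py' is a
# placeholder path.
from collections import Counter, defaultdict

def train_bigrams(path):
    """For each character, count which characters tend to follow it."""
    follows = defaultdict(Counter)
    with open(path, encoding='utf-8', errors='ignore') as f:
        text = f.read()
    for a, b in zip(text, text[1:]):
        follows[a][b] += 1
    return follows

# Trained on source code rather than prose, '(' is followed by ')' and
# identifier characters far more often than it ever is in English:
model = train_bigrams('my_project.py')
print(model['('].most_common(5))
```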
The link for the voice banking and synthesis is: http://www.modeltalker.com/
If you want to make a donation, the link is: http://www.medsci.udel.edu/cpass/support.html
Best, -jim
BCI is intriguing, but from what I understand, it’s not the best option at the moment—it’s slower, more difficult, and less reliable than a gaze detector, which I assume would work in any case where there’s still enough muscle control to move the eyes. At any rate, I know that those kinds of systems can be used in Second Life. I’m peripherally involved with Virtual Ability Island, which has members who use that kind of technology. And yes, virtual worlds (and Second Life in particular at the moment, though there are other options—OpenSim might eventually be a slightly better choice for someone interested in open source) are uniquely useful for people with limited access to the real world.