I had a student doing an M.Sc. thesis (he recently passed the viva, with a paper in press and an invitation from the external examiner to give a presentation) on a system he built for combining visual and textual programming. For example, if a variable happens to be holding an image, in the debugger you see a thumbnail of the image instead of an opaque piece of textual information. One of the examples he used was a visual display of regular expressions.
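Something like the following captures the core mechanism, which is just dispatching on the runtime type of the value. This is a minimal sketch of the idea rather than his actual system; the render_for_debugger function and the use of Pillow are my own assumptions.

    # Hypothetical sketch: pick a visual or textual rendering
    # depending on the runtime type of a variable's value.
    from PIL import Image  # assumption: Pillow is available

    def render_for_debugger(value):
        """Return something a debugger pane could display inline."""
        if isinstance(value, Image.Image):
            # Image-valued variable: show a small thumbnail
            # instead of an opaque textual representation.
            thumb = value.copy()
            thumb.thumbnail((64, 64))  # shrink in place, keep aspect ratio
            return thumb
        # Everything else falls back to ordinary text.
        return repr(value)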
But there are several standard problems that come up with every attempt to improve on plain text.
Text is universal; improving on it is a high bar to clear.
A lot of work has been done on visual programming, but a problem that crops up over and over is that almost every idea works for toy examples and then fails to scale. You just get a huge, tangled mess on your screen. Thinking up visual representations is the easy part; scaling is the real problem.
What makes intuitive visual sense to a human is not necessarily easily made sense of by the software that is supposed to handle it. Even plain text can be opaque if it wasn’t designed with computer parsing in mind. I speak here from experience of implementing a pre-existing notation for recording sign languages. The first thing we did was convert it from its own custom font into ASCII, and the second was to write a horribly complicated parser to transform it into more sensible syntax trees—which were then represented as XML text. Only then was it possible to do the real work, that of generating animation data. And even then, I kept coming across cases where it was obvious to a human being (me) what the original notation meant, but not obvious to the implementer (me) how to express that understanding in program code.
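The shape of that pipeline is simple even though the real parser was not. A minimal sketch, with an invented toy notation (the token names and grammar here are mine, purely for illustration):

    # Toy pipeline: ASCII notation in, XML syntax tree out.
    import xml.etree.ElementTree as ET

    def parse_sign(notation):
        """Parse a made-up sign notation such as 'B^ mv:up mv:left':
        a handshape token followed by movement modifiers."""
        tokens = notation.split()
        sign = ET.Element("sign")
        ET.SubElement(sign, "handshape").text = tokens[0]
        for tok in tokens[1:]:
            if tok.startswith("mv:"):
                ET.SubElement(sign, "movement", direction=tok[3:])
        return sign

    tree = parse_sign("B^ mv:up mv:left")
    print(ET.tostring(tree, encoding="unicode"))
    # <sign><handshape>B^</handshape><movement direction="up" />
    #       <movement direction="left" /></sign>

The point of the XML stage was to make the structure explicit: once the tree exists, the code that generates animation data never has to look at the original notation again.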
I am not a programming language expert, just a hobbyist/amateur, so generally I defer to people who do this stuff for a living. My only points are:
(a) The space of possible languages is large.
(b) It would be curious indeed if ASCII lines were the optimum for a species with such a strong visual subsystem.
(c) The computer science community has a terrible institutional memory for its own advances (e.g. LISP code is its own syntax tree, needing hardly any parsing; Perl's reference-counting garbage collector for the longest time failed on circular references; a sketch of that failure mode follows below). So progress is slow.
These I take as evidence that there is much more progress to be made just on notation and representation.
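The Perl example in point (c) is easy to reproduce. A minimal sketch, in Python rather than Perl as a matter of convenience: CPython also frees objects by reference counting, but unlike Perl 5 it has a backup cycle collector, which we can switch off here to expose the same failure mode.

    import gc
    import weakref

    class Node:
        pass

    gc.disable()                 # leave only reference counting
    a, b = Node(), Node()
    a.partner, b.partner = b, a  # a reference cycle
    probe = weakref.ref(a)       # lets us observe whether `a` is freed

    del a, b                     # drop the only outside references
    print(probe() is None)       # False: the cycle keeps both alive
    gc.collect()                 # the cycle collector breaks the loop
    print(probe() is None)       # True: now both have been reclaimed
    gc.enable()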