A visual representation of objects in a programming-language context must necessarily be a complete description of those objects; either you’re switching rapidly between multiple levels of abstraction in order to perform your work, or the sheer volume of screen space needed to encapsulate even relatively simple programming instructions makes it difficult to comprehend program flow as a single process.
Or, to put it another way—open up a simple paint program, and try drawing out a visual-programming approach to a very simple programming problem—say, reading the first twenty entries from a database and selecting a tuple based on the highest-valued column. Make the fake program logically complete; it should encapsulate every operation and data structure necessary (although you can presuppose libraries to interact with, it should represent those libraries, as well).
This is actually a problem I’ve been working on on-and-off for my company; we’re trying to implement a comprehensive visual editor for our program, instead of the multiple-layers-of-abstraction visual editor that exists today. As it transpires, given the vast amount of information that needs to be available, symbol encoding—memorizing the meanings of large numbers of symbols in the context of the UI—is the only effective means we’ve found. And at that point you’re just writing a scripting language using different letters than the English ones, and what’s the point?
We could definitely do better than a one dimensional string of ASCII characters for programming. See, for example, the following glorious mutant: http://en.wikipedia.org/wiki/APL_(programming_language%29 Check out Conway’s game of life implementation further down on that page.
My pet issue is how difficult it is to have native support (both UI and data structure wise) for graphs in a programming language, given how ubiquitous graphs are in mathematics and computer science.
Your link is broken.
Thanks. Markdown is silly.
Conway’s Life in Matlab:
function L = life(L)
L1 = imfilter( L, [1 1 1;1 0 1;1 1 1], 'circular' );
L = int32( (L1==3) | ((L1==2) & L) );
end
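For comparison, here is a rough Python/NumPy translation of the same idea (a sketch, not from the thread; it uses np.roll to mimic Matlab’s 'circular' boundary instead of imfilter):

```python
import numpy as np

def life(L):
    # Count each cell's eight neighbours, wrapping at the edges
    # (np.roll plays the role of Matlab's 'circular' boundary option).
    n = sum(np.roll(np.roll(L, i, axis=0), j, axis=1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    # A cell lives next step if it has exactly 3 neighbours,
    # or exactly 2 neighbours and is currently alive.
    return ((n == 3) | ((n == 2) & (L == 1))).astype(np.int32)
```

Feeding it a horizontal blinker flips it to a vertical one and back, as expected.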
That’s about 50 lexical tokens against APL’s 30, but it does not require advanced knowledge of Matlab to understand. Not that I want to get into a language war here; there are any number of things I dislike about Matlab.
Here’s the equivalent of the primes program from the APL Wiki page:
function P = prms( R )
P = 2:R; % Make an array of the numbers from 2 to R.
PP = P' * P; % Make a 2D array of all pairwise products.
PP = PP(PP<=R); % Make a 1D array of the products no more than R.
P(PP-1) = [ ]; % Remove those products from P.
end
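A hedged NumPy rendering of the same sieve-by-products idea; note the index shift from PP-1 to PP-2, since P starts at 2 and Python indexes from 0 where Matlab indexes from 1:

```python
import numpy as np

def prms(R):
    P = np.arange(2, R + 1)      # the numbers from 2 to R
    PP = np.outer(P, P)          # 2D array of all pairwise products
    PP = PP[PP <= R]             # keep only the products no more than R
    # Each product value v sits at index v - 2 in P (P[0] == 2),
    # so mask those composite positions out.
    keep = np.ones(P.size, dtype=bool)
    keep[PP - 2] = False
    return P[keep]
```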
Language support for array operations is the major advantage of APL, Matlab, Q, and K, and I wish every language had it.
My point wasn’t actually that it’s useful to pursue the shortest way of writing a given algorithm. In fact I am not an APL expert, and find it hard to read. My point is that there is no particular reason other than inertia that we happen to formalize mathematical/algorithmic ideas via a linear string of ASCII characters. In fact, this representation makes it unnatural to reason about, and to write algorithms on, many common types of structures. The fact that many attempts to do something better turn out poorly (as the great-grand-parent poster experienced) does not mean improvements do not exist; the space is very large.
For example, a regular expression is a graph. Why on earth do we insist on encoding it as a hard-to-read string of ASCII? (I am sure one could become a very efficient regexp jockey with practice, but in some sense the representation works against us and our powerful vision subsystem.) There are all these theorems in graphical models whose proofs are much easier for humans to follow because they use graph theory and not algebra, etc.
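To make “a regular expression is a graph” concrete, here is a toy sketch (my own illustration, not from the thread): the expression ab*c written directly as the state graph it denotes, and matched by walking edges rather than by re-reading the ASCII form:

```python
# The regex "ab*c" as an explicit state graph: nodes are states,
# edges are labelled with the character they consume.
AB_STAR_C = {
    0: {'a': 1},            # start: must first see 'a'
    1: {'b': 1, 'c': 2},    # loop on any number of 'b's, or end on 'c'
    2: {},                  # accepting state, no outgoing edges
}

def matches(graph, start, accept, s):
    state = start
    for ch in s:
        if ch not in graph[state]:
            return False
        state = graph[state][ch]
    return state == accept
```

So matches(AB_STAR_C, 0, 2, "abbc") walks start, consumes the 'a', loops twice on 'b', and ends in the accepting state.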
I had a student doing an M.Sc. thesis (recently passed the viva, with a paper in press and an invitation from the external examiner to give a presentation) on a system he built for combining visual and textual programming. For example, if a variable happens to be holding an image, in the debugger you see a thumbnail of the image instead of a piece of textual information. One of the examples he used was a visual display of regular expressions.
But there are several standard problems that come up with every attempt to improve on plain text.
Text is universal—to improve on it is a high bar to pass.
A lot of work has been done on visual programming, but a problem that crops up over and over is that every idea of how to do it will work for toy examples, but most of them won’t scale. You just get a huge, tangled mess on your screen. Thinking up visual representations is the easy part, scaling is the real problem.
What makes intuitive visual sense to a human is not necessarily easily made sense of by the software that is supposed to handle it. Even plain text can be opaque if it wasn’t designed with computer parsing in mind. I speak here from experience of implementing a pre-existing notation for recording sign languages. The first thing we did was convert it from its own custom font into ASCII, and the second was to write a horribly complicated parser to transform it into more sensible syntax trees—which were then represented as XML text. Only then was it possible to do the real work, that of generating animation data. And even then, I kept coming across cases where it was obvious to a human being (me) what the original notation meant, but not obvious to the implementer (me) how to express that understanding in program code.
I am not a programming language expert, just a hobbyist/amateur, so generally I defer to people who do this stuff for a living. My only points are:
(a) The space of possible languages is large.
(b) It would be curious indeed if ASCII lines were the optimum for a species with such a strong visual subsystem.
(c) The computer science community has a terrible institutional memory for its own advances (e.g. LISP code is its own syntax tree, needing hardly any parsing; Perl’s garbage collector for the longest time failed on circular references; etc.), so progress is slow.
These I take as evidence that there is much more progress to be made just on notation and representation.
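As a footnote to point (c): the “LISP code is its own syntax tree” claim shows in how little machinery a reader needs. A minimal sketch in Python (my own hypothetical helper names):

```python
def tokenize(src):
    # Pad parentheses with spaces and split: that's the entire lexer.
    return src.replace('(', ' ( ').replace(')', ' ) ').split()

def read_sexpr(tokens):
    # The nesting of the text is already the nesting of the tree,
    # so "parsing" is just recursing on parentheses.
    tok = tokens.pop(0)
    if tok == '(':
        lst = []
        while tokens[0] != ')':
            lst.append(read_sexpr(tokens))
        tokens.pop(0)  # discard the closing ')'
        return lst
    return tok
```

read_sexpr(tokenize("(+ 1 (* 2 3))")) hands back the nested list ['+', '1', ['*', '2', '3']] with no grammar in sight.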
If you don’t already know about it, you’ll enjoy reading about Olin Shivers’s SRE regex notation.
Yes, I am aware of this (and lispy things in general), but thanks! s-expressions are great if you like metaprogramming, but they share the same fundamental problem as ordinary regular expressions—they encode non-linear structures as a line of ASCII.
Actually, there is no reason macro-based metaprogramming couldn’t work in a language that uses graphs as a primitive UI element, rather than lists as LISP does. “Graph rewriting” is practically a cottage industry.
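As a toy illustration of that graph-rewriting idea (my own sketch, with made-up node labels): a “macro” here is a rule that rewrites every matching node of a dataflow graph, the way a Lisp macro expands a matching list form:

```python
def expand_square(graph):
    # Rewrite rule: every ('square', [x]) node becomes ('mul', [x, x]).
    # Nodes map an id to an (operation, input-ids) pair.
    return {node: (('mul', [args[0], args[0]]) if op == 'square'
                   else (op, args))
            for node, (op, args) in graph.items()}
```

Applied to {'n0': ('input', []), 'n1': ('square', ['n0'])}, the rule leaves n0 alone and expands n1 into ('mul', ['n0', 'n0']).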
Where you wrote “UI element”, did you mean “data structure”? I don’t know what it would mean to talk about graphs as a primitive user interface element.
With a language with sufficiently expressive metaprogramming facilities (LISP enthusiasts will recommend LISP for this role) you can extend it with whatever data structures you want.
I guess I meant both a data structure and a visual representation of a data structure (in LISP they are almost the same, which is what makes metaprogramming in LISP so natural).
Makes sense, thank you for the elaboration.
At this point I would like to make the comparison to flowcharts and their interpreters (us), but even in this case, when we look at a flowchart (with the purpose of implementing something) we mentally substitute the boxes and flows with the code/libraries/interfaces behind them. Following this thought, if we had a compiler that could do the same when fed a diagram, i.e., parse it to generate the appropriate code, we’d be getting somewhere, I suppose. But as it stands I see why a diagram might not be enough to formally encapsulate all the data and state needed for execution.