It’s a very old observation that Homo sapiens was made to hunt and gather on the savanna rather than work in an office. Civilization and its discontents…
I’m wondering if you don’t necessarily need to modify the human brain in order to make this problem at least a bit better. There are already some jobs that are much closer to “savanna” than “office.” I chose to go into nursing because, among other things, I knew about my father’s experience working in a cubicle and I never wanted that. Nursing is both intellectually stimulating (very much so in critical care/ICU, which is where I’m currently doing my final clinical rotation), and also “sensual” in the way you describe. I get a huge amount of satisfaction from manipulating physical materials and supplies: mixing drugs, priming IV tubing, changing dressings, etc. It’s fun. And then there’s the direct human contact, which is kind of exhausting for an introvert like me, but also really, really rewarding.
I’m guessing that jobs like electrician, plumber, etc., are probably similar in having both intellectually and sensorially stimulating aspects. Manual labor or construction leans more towards the sensory, engineering/science towards the intellectual (although some scientists get to play with cool equipment, samples, etc.), math and programming are almost solely intellectual, and a great deal of office work seems to be neither.
There are a couple of questions this brings up for me. 1) Can “boring” jobs be made more sensual? I wonder how much of a difference it would make if offices were more colourful, contained obstacle courses, involved walking around more, etc.? It sounds silly and even like a waste of time, but if it keeps employees engaged, it might save time overall. 2) Do boring jobs really need to be done by humans? I’m not talking about jobs like math and programming, which aren’t ‘boring’, just purely intellectual. 3) Can strongly intellectual jobs be reformatted in a more physical way? For example, in the future, could programmers and mathematicians manipulate symbols in the air, like Tony Stark does in Iron Man? This would at least activate significantly more visual cortex than symbols on a screen. And all of these options seem significantly more achievable, with current technology, than trying to change the human brain.
There is an aspect of coding (and I write code for a living), the very act of it, that I do find sensual. (I don’t know if others perceive this the same way, or if my calling it sensual is just a convenient metaphor for my experience.) As my fingers dance across the keyboard and I see my thoughts take shape on screen, there is a certain poetry in the combined sounds of my typing, the tactile feedback of the keys, and a well-executed subroutine staring back at you. Writing that routine was not just a purely mental activity: it involved fine motor skills and long hours of tapping away to get to the stage where you don’t even have to look down at the keys; you think the words and your fingers move, of their own accord, to put those words on screen! (This is even better if you use a good tool, such as Vim, to maximize the efficiency of your keystrokes. It’s also the reason I find it supremely satisfying to use mechanical keyboards.)
3) Can strongly intellectual jobs be reformatted in a more physical way? For example, in the future, could programmers and mathematicians manipulate symbols in the air, like Tony Stark does in Iron Man? This would at least activate significantly more visual cortex than symbols on a screen.
I was going to make just such a point about programming. If one were to look at coding as a means of controlling data flow, or controlling state machines or decision paths, then ‘coding’ by drawing up an active flow chart and manipulating it spatially, much like what Stark did, would be awesome fun. It would let me stack the scaffolding of ideas, blocks, functions and subroutines visually and in space, ‘see’ the connections between blocks, watch the flow of control and data, yank things around ‘literally’, and so on.
This has been tried countless times: visual programming languages. They have never worked outside of a few specific application domains.
Keep in mind that text is a visual representation. It is a visual representation optimized to express our thoughts, trivial or complex, in a precise, efficient, and succinct way. We went from making artistic cave paintings and wood carvings to writing simple, standardized characters.
Programming is about expressing how to do something in an extremely precise way, so precise that it can be understood by a machine with little or no intelligence.
Non-textual media might provide an aid when communicating between intelligent humans, and even there it is often used for superficial communication (e.g. advertising). When you need precision, text is probably the most effective choice.
I agree with you that text is a visual representation of ‘units’ of ideas (to be not very precise about it) that we string together to convey more complex ideas. And I agree that for the kind of complex scaffolding of little ideas into big ones, ad infinitum, that happens in computer science, the kind of ‘coding’ medium I was suggesting would be inefficient. But the idea still has appeal for us humans, who are more at home spatially manipulating objects and stringing them together than doing all of this in abstract space.
OTOH, flowcharts and such are still in widespread use.
Flowcharts and other types of diagrams are indeed in widespread use, but as design documents to be understood by humans, not as executable specifications. Being made to describe the high-level design of a system to humans, these diagrams are highly abstract and omit most of the details that would be required for an executable specification.
You can define executable graphical languages, as listed in the link I provided, but once you try to use them for anything but a toy example, your diagrams become exceedingly large and complicated, essentially unusable.
And a graph (in the mathematical sense) can be as precise as you like.
You can define executable graphical languages, as listed in the link I provided, but once you try to use them for anything but a toy example, your diagrams become exceedingly large and complicated, essentially unusable.
There are entire management chains of my acquaintance on whose eyelids I could wish that sentence engraved.
There is research being done on improving abstractions for graphical languages. For instance, this applies to graphical representations of monoidal categories (so-called “string diagrams”), which can be used to represent functional programming, monad-based programs (at least to some extent), data flow, control flow and the like.
It is still the case that textual syntax has a higher information density, though.
By the way, natural language generation could also be used to make programming closer to the cognitive style of humans, and thus more stimulating. I’m not talking about primitive efforts like COBOL here: we could take inspiration from linguistically-inspired formalisms such as Montague grammar to map commonly used calculi and programming languages to natural language in a fairly straightforward way.
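As a toy illustration of the direction (this is nowhere near a real Montague-style mapping; the rendering scheme below is entirely invented), one can mechanically render a small expression AST into English:

```python
# Hypothetical sketch: turn a tiny arithmetic AST into an English phrase.
# Tuples are operator nodes; integers are leaves. The word choices are
# invented for illustration, not drawn from any real formalism.
def render(e):
    if isinstance(e, int):
        return str(e)
    op, a, b = e
    words = {'+': 'the sum of', '*': 'the product of'}
    return f"{words[op]} {render(a)} and {render(b)}"

# render(('+', 1, ('*', 2, 3)))
# → "the sum of 1 and the product of 2 and 3"
```

A genuine Montague-style treatment would assign compositional semantics rather than just surface strings, but even this toy shows the mapping can be mechanical.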
Visual real estate is more limited than cognitive real estate.

No. Is it not more that there is our cognitive landscape/real estate, and on it (among other things) are visual representations of objects/ideas/whatevers? So all I was saying was that if you look at the space within which we code, then a visual representation of objects, the functions that operate on them, and the state/data flows between them might be a more compelling medium of work.
A visual representation of objects in a programming-language context must necessarily be a complete description of those objects; either you’re switching rapidly between multiple levels of abstraction in order to perform your work, or the sheer volume of screen space necessary to encapsulate even relatively simple programming instructions makes it difficult to comprehend program flow as a singular process.
Or, to put it another way—open up a simple paint program, and try drawing out a visual-programming approach to a very simple programming problem—say, reading the first twenty entries from a database and selecting a tuple based on the highest-valued column. Make the fake program logically complete; it should encapsulate every operation and data structure necessary (although you can presuppose libraries to interact with, it should represent those libraries, as well).
This is actually a problem I’ve been working on on-and-off for my company; we’re trying to implement a comprehensive visual editor for our program, instead of the multiple-layers-of-abstraction visual editor that exists today. As it transpires, given the vast amount of information that needs to be available, symbol encoding—memorizing the meanings of large numbers of symbols in the context of the UI—is the only effective means we’ve found. And at that point you’re just writing a scripting language using different letters than the English ones, and what’s the point?
We could definitely do better than a one dimensional string of ASCII characters for programming.
See, for example, the following glorious mutant: http://en.wikipedia.org/wiki/APL_(programming_language)
Check out Conway’s game of life implementation further down on that page.
My pet issue is how difficult it is to have native support (both UI and data structure wise) for graphs in a programming language, given how ubiquitous graphs are in mathematics and computer science.
Conway’s Life in Matlab:

function L = life(L)
  L1 = imfilter( L, [1 1 1; 1 0 1; 1 1 1], 'circular' );
  L = int32( (L1==3) | ((L1==2) & L) );
end
That’s about 50 lexical tokens against APL’s 30, but does not require advanced knowledge of Matlab to understand. Not that I want to get into a language war here, there are any number of things I dislike about Matlab.
Here’s the equivalent of the primes program from the APL Wiki page:
function P = prms( R )
  P = 2:R;        % Make an array of the numbers from 2 to R.
  PP = P' * P;    % Make a 2D array of all pairwise products.
  PP = PP(PP<=R); % Make a 1D array of the products no more than R.
  P(PP-1) = [];   % Remove those products from P.
end
Language support for array operations is the major advantage of APL, Matlab, Q, and K, and I wish every language had it.
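For comparison, the Life step above can be sketched in the same array-operation style in Python with NumPy (my own addition: np.roll stands in for imfilter’s 'circular' boundary option, and the function name is arbitrary):

```python
import numpy as np

def life(L):
    # Count the 8 neighbours of each cell, with toroidal (circular)
    # wrap-around, mirroring imfilter(..., 'circular') above.
    n = sum(np.roll(np.roll(L, i, axis=0), j, axis=1)
            for i in (-1, 0, 1) for j in (-1, 0, 1)
            if (i, j) != (0, 0))
    # Birth on exactly 3 neighbours, survival on exactly 2.
    return ((n == 3) | ((n == 2) & (L == 1))).astype(np.int32)
```

As in the Matlab and APL versions, the array operations carry the whole algorithm, with no explicit loops.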
My point wasn’t actually that it’s a useful thing to pursue the shortest way of writing a given algorithm. In fact I am not an APL expert, and find it hard to read. My point is that there is no particular reason other than inertia that we formalize mathematical/algorithmic ideas via a linear string of ASCII characters. In fact, this representation makes it unnatural to reason about, and write algorithms on, many common types of structures. The fact that many attempts to do something better have done poorly (as the great-grand-parent poster experienced) does not mean improvements do not exist: the space is very large.
For example, a regular expression is a graph. Why on earth do we insist on encoding it as a very hard-to-read string of ASCII? (I am sure one could become a very efficient regexp jockey with practice, but in some sense the representation is working against us and our powerful vision subsystem.) There are all these theorems in graphical models whose proofs are much easier for humans to follow because they use graph theory and not algebra, etc.
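To make the “a regular expression is a graph” point concrete, here is a sketch (representation and names invented; this particular graph happens to be deterministic) of the state graph for the regex ab*c, written out explicitly and run directly:

```python
# State graph for the regex 'ab*c': each state maps an input character
# to the next state. The b* loop is literally a self-edge on state 1.
nfa = {
    0: {'a': 1},
    1: {'b': 1, 'c': 2},
    2: {},  # accepting state
}

def matches(graph, accept, s):
    state = 0
    for ch in s:
        if ch not in graph[state]:
            return False
        state = graph[state][ch]
    return state == accept

# matches(nfa, 2, 'abbbc') → True; matches(nfa, 2, 'ab') → False
```

Drawn on paper, the same three nodes and four edges are arguably easier to take in at a glance than the ASCII form.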
I had a student doing an M.Sc. thesis (he recently passed the viva, with a paper in press and an invitation from the external examiner to give a presentation) on a system he built for combining visual and textual programming. For example, if a variable happens to be holding an image, in a debugger you see a thumbnail of the image instead of an opaque piece of textual information. One of the examples he used was a visual display of regular expressions.
But there are several standard problems that come up with every attempt to improve on plain text.
Text is universal—to improve on it is a high bar to pass.
A lot of work has been done on visual programming, but a problem that crops up over and over is that every idea of how to do it will work for toy examples, but most of them won’t scale. You just get a huge, tangled mess on your screen. Thinking up visual representations is the easy part, scaling is the real problem.
What makes intuitive visual sense to a human is not necessarily easily made sense of by the software that is supposed to handle it. Even plain text can be opaque if it wasn’t designed with computer parsing in mind. I speak here from experience of implementing a pre-existing notation for recording sign languages. The first thing we did was convert it from its own custom font into ASCII, and the second was to write a horribly complicated parser to transform it into more sensible syntax trees—which were then represented as XML text. Only then was it possible to do the real work, that of generating animation data. And even then, I kept coming across cases where it was obvious to a human being (me) what the original notation meant, but not obvious to the implementer (me) how to express that understanding in program code.
I am not a programming language expert, just a hobbyist/amateur, so generally I defer to people who do this stuff for a living. My only points are:
(a) The space of possible languages is large.
(b) It would be curious indeed if lines of ASCII were the optimum for a species with such a strong visual subsystem.
(c) The computer science community has a terrible institutional memory for its own advances (e.g., LISP code is its own syntax tree with hardly any parsing; Perl’s garbage collector for the longest time failed on circular references). So progress is slow.
These I take as evidence that there is much more progress to be made just on notation and representation.
Yes, I am aware of this (and lispy things in general), but thanks! S-expressions are great if you like metaprogramming, but they share the same fundamental problem as ordinary regular expressions: they encode non-linear structures as a line of ASCII.
Actually, there is no reason macro-based metaprogramming couldn’t work in a language that uses graphs as a primitive UI element, rather than a list as LISP does. “Graph rewriting” is practically a cottage industry.
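A minimal sketch of what one such rewrite step might look like (the adjacency-list representation and the rule are invented for illustration): splice out any “pass-through” node with exactly one predecessor and one successor, the way an optimizer might simplify a flow graph:

```python
def splice_passthrough(g):
    """One graph-rewrite rule on a digraph stored as {node: [successors]}."""
    g = {k: list(v) for k, v in g.items()}
    for n in list(g):
        preds = [p for p in g if n in g[p]]
        # Splice n out if it merely forwards control from one node to another.
        if len(preds) == 1 and len(g[n]) == 1 and n not in (preds[0], g[n][0]):
            p, s = preds[0], g[n][0]
            g[p] = [s if x == n else x for x in g[p]]
            del g[n]
    return g

# splice_passthrough({'a': ['b'], 'b': ['c'], 'c': []}) → {'a': ['c'], 'c': []}
```

A macro in such a language would just be a rewrite rule like this one, applied to the program graph instead of to a list.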
Actually, there is no reason macro-based metaprogramming couldn’t work in a language that uses graphs as a primitive UI element, rather than a list as LISP does. “Graph rewriting” is practically a cottage industry.
Where you wrote “UI element”, did you mean “data structure”? I don’t know what it would mean to talk about graphs as a primitive user interface element.
With a language with sufficiently expressive metaprogramming facilities (LISP enthusiasts will recommend LISP for this role) you can extend it with whatever data structures you want.
I guess I meant both a data structure and a visual representation of a data structure (in LISP they are almost the same, which is what makes metaprogramming in LISP so natural).
At this point I would like to make the comparison to flow charts and their interpreters (us), but even in this case, when we look at a flowchart (with the purpose of implementing something) we mentally substitute the boxes and flows with the code/libraries/interfaces for them. Following this thought, if we had a compiler that could do the same when fed a diagram, i.e., parse it to generate the appropriate code, we’d be getting somewhere, I suppose. But as it stands I see why a diagram might not be enough to formally encapsulate all the data and state needed for execution.
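A minimal sketch of that “diagram as executable specification” idea (the node format and field names are invented): the flowchart is plain data, and a tiny interpreter walks it, here computing a factorial:

```python
# Each node is a box in the diagram; edges are the 'next'/'yes'/'no' fields.
flowchart = {
    'start': {'kind': 'action',
              'do': lambda env: env.update(n=env['x'], acc=1),
              'next': 'test'},
    'test': {'kind': 'decision',
             'cond': lambda env: env['n'] > 1,
             'yes': 'multiply', 'no': 'done'},
    'multiply': {'kind': 'action',
                 'do': lambda env: env.update(acc=env['acc'] * env['n'],
                                              n=env['n'] - 1),
                 'next': 'test'},
    'done': {'kind': 'end'},
}

def run(chart, env):
    node = chart['start']
    while node['kind'] != 'end':
        if node['kind'] == 'action':
            node['do'](env)
            node = chart[node['next']]
        else:  # decision box
            node = chart[node['yes'] if node['cond'](env) else node['no']]
    return env

# run(flowchart, {'x': 5})['acc'] → 120
```

This is the executable flowchart in miniature; the difficulty the thread identifies is that for real programs such diagrams explode in size.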
In case of a down vote on something that seems reasonable and/or is non-inflammatory, it’d be informative if someone left a note as to why it was being down voted.
In case of a down vote on something that seems reasonable
Perhaps it merely indicates that the voter doesn’t agree about the reasonableness. I neither voted on nor—until now—read your comment, but I do note that in general many people write things that they (evidently) consider reasonable but which I consider utter nonsense, and others have vehemently disagreed with what seem to me to be reasonable statements by myself and others. Neither objective reasonableness (to the extent that such a thing exists) nor the belief of the author will force another to perceive it as reasonable.
and/or is non-inflammatory
Not especially inflammatory, true. I note however that you opened with a contradiction, “No.”. That has a clear meaning of asserting the falsity of its parent. If the parent is (perceived to be) correct then a negation may be considered sufficient to downvote the comment with or without reading the remainder.
it’d be informative if someone left a note as to why it was being down voted.
This is a true statement. Another true statement is: “Writing declarations of downvotes can be perceived as a nuisance by third parties and promote enmity, or at the very least retaliatory downvoting by the downvoted author. This is a negative consequence for the downvoter, who is not obliged to abandon his or her anonymity if they don’t choose to.”
(I downvoted both comments. OrphanWilde’s assertion was mostly meaningless and given without substantiation/clarification, and your reply engaged it on the object level instead of pointing that out (or silently downvoting), sustaining a flawed mode of discussion. Being “non-inflammatory” is an insufficiently strict standard; a conversation should be sane.)
OrphanWilde’s assertion was mostly meaningless and given without substantiation/clarification
I agree.
your reply engaged it on object level instead of pointing that out (or silently downvoting), sustaining a flawed mode of discussion.
Can you elaborate what you mean by ‘object level’?
Also, I am kind of perplexed here—you don’t approve of my deciding to react to a seemingly vague statement, which was made with the intent of getting OrphanWilde to perhaps clarify himself? I realize that I phrased my reply badly; starting with a negation was counterproductive. But still.
Let me clarify here, I do not care so much about the down vote, as much as I do about being engaged in a conversation.
Can you elaborate what you mean by ‘object level’?
Someone asserts a confused statement whose meaning is unclear. An example of an “object level” response is to make up an interpretation of that statement with a particular meaning, and immediately engage that interpretation (for example, by giving an argument for modifying some of its details).
This has two immediate problems. First, the interpretation that you’ve made up isn’t necessarily the intended one, and in fact no clear intended interpretation may exist, in the sense that the original statement wasn’t constructed to communicate a clear idea, but was to a significant extent a confabulation. This may result in talking past each other, thinking of different things, and in simple cases may lead to an argument about definitions. Second, the fact that the original statement was confusing is itself significant and worthy of attention. It may mean that you lack knowledge of context or training necessary to interpret it, or that the person making the statement needs to improve their communication effort or skills, or that they need to think more carefully to make sure that there is an actual idea that is being described. These problems have little to do with the topic of discussion, hence “not object level”.
(Even worse, an “object level response” may itself fail to reflect on any particular idea.)
On the other hand, asking for clarification or accusing the other party of talking confused nonsense bring their own problems.
I generally reply on the object level, but note that I’m unsure if I parsed their statement correctly, so they can clarify in their next comment if I misinterpreted.
For example, in the future, could programmers and mathematicians manipulate symbols in the air, like Tony Stark does in Iron Man?
Well, doing it in the air probably isn’t going to happen until we have augmented reality systems that go far beyond Google Goggles, but I’ve been hearing speculation about more visually oriented symbol-based programming since at least 1995 or thereabouts: basically, compilable versions of the block diagrams programmers already use for design work. It seems to be one of those technologies that’s always ten years off, though.
It’s already common for hardware engineers to directly manipulate chip layout through a visual interface, though for all but the simplest circuits there’s also a syntactic element in the form of Verilog or VHDL code.
I thought Shop Class as Soulcraft by Matthew B. Crawford was a pretty good book on pretty much this topic.
Conway’s Life in Matlab:
function L = life(L)
L1 = imfilter( L, [1 1 1;1 0 1;1 1 1], ‘circular’ );
L = int32( (L1==3) | ((L1==2) & L) );
end
That’s about 50 lexical tokens against APL’s 30, but does not require advanced knowledge of Matlab to understand. Not that I want to get into a language war here, there are any number of things I dislike about Matlab.
Here’s the equivalent of the primes program from the APL Wiki page:
function P = prms( R )
P = 2:R; % Make an array of the numbers from 2 to R.
PP = P’ * P; % Make a 2D array of all pairwise products.
PP = PP(PP<=R); % Make a 1D array of the products no more than R.
P(PP-1) = [ ]; % Remove those products from P.
end
Language support for array operations is the major advantage of APL, Matlab, Q, and K, and I wish every language had it.
My point wasn’t actually that it’s a useful thing to pursue shortest ways of writing a given algorithm. In fact I am not an APL expert, and find it hard to read. My point is that there is no particular reason other than inertia that we happen to formalize mathematical/algorithmic ideas via a linear string of ascii characters. In fact, this representation makes it unnatural to {reason about|write algorithms on} many common types of structures. The fact that many attempts to do something better do poorly (as the great-grand-parent poster experienced) does not mean improvements do not exist—the space is very large.
For example, a regular expression is a graph. Why on earth do we insist on encoding it as a very hard to read string of ASCII? (I am sure one could be a very efficient regexp jockey with practice, but in some sense the representation is working against us and our powerful vision subsystem). There are all these theorems in graphical models that have proofs much easier for humans to follow because they use graph theory and not algebra, etc.
I had an student doing an M.Sc. thesis (recently passed the viva, with a paper in press and an invitation from the external examiner to give a presentation) on a system he built for combining visual and textual programming. For example, if a variable happens to be holding an image, in a debugger you see a thumbnail of the image instead of a piece of textual information like . One of the examples he used was a visual display of regular expressions.
But there are several standard problems that come up with every attempt to improve on plain text.
Text is universal—to improve on it is a high bar to pass.
A lot of work has been done on visual programming, but a problem that crops up over and over is that every idea of how to do it will work for toy examples, but most of them won’t scale. You just get a huge, tangled mess on your screen. Thinking up visual representations is the easy part, scaling is the real problem.
What makes intuitive visual sense to a human is not necessarily easily made sense of by the software that is supposed to handle it. Even plain text can be opaque if it wasn’t designed with computer parsing in mind. I speak here from experience of implementing a pre-existing notation for recording sign languages. The first thing we did was convert it from its own custom font into ASCII, and the second was to write a horribly complicated parser to transform it into more sensible syntax trees—which were then represented as XML text. Only then was it possible to do the real work, that of generating animation data. And even then, I kept coming across cases where it was obvious to a human being (me) what the original notation meant, but not obvious to the implementer (me) how to express that understanding in program code.
I am not a programming language expert, but hobbyist/amateur, so generally I defer to people who do this stuff for a living. My only points are:
(a) The space of possible languages is large.
(b) It would be curious indeed if lines of ASCII were the optimum for a species with such a strong visual subsystem.
(c) The computer science community has a terrible institutional memory for its own advances (e.g. LISP code is its own syntax tree with hardly any parsing; Perl’s garbage collector for the longest time failed on circular references; etc.), so progress is slow.
These I take as evidence that there is much more progress to be made just on notation and representation.
If you don’t already know about it, you’ll enjoy reading about Olin Shivers’s SRE regex notation.
Yes, I am aware of this (and lispy things in general), but thanks! s-expressions are great if you like metaprogramming, but they share the same fundamental problem as ordinary regular expressions—they encode non-linear structures as a line of ASCII.
Actually, there is no reason macro-based metaprogramming couldn’t work in a language that uses graphs as a primitive UI element, rather than a list like LISP does. “Graph rewriting” is practically a cottage industry.
Where you wrote “UI element”, did you mean “data structure”? I don’t know what it would mean to talk about graphs as a primitive user interface element.
With a language with sufficiently expressive metaprogramming facilities (LISP enthusiasts will recommend LISP for this role) you can extend it with whatever data structures you want.
I guess I meant both a data structure and a visual representation of a data structure (in LISP they are almost the same, which is what makes metaprogramming in LISP so natural).
Makes sense, thank you for the elaboration.
At this point I would like to make a comparison to flow charts and their interpreters (us). But even in this case, when we look at a flowchart (with the purpose of implementing something) we mentally substitute the boxes and flows with the code/libraries/interfaces for them. Following this thought, if we had a compiler that could do the same when fed a diagram, i.e. parse it to generate the appropriate code, we’d be getting somewhere, I suppose. But as it stands I see why a diagram might not be enough to formally encapsulate all the data and state needed for execution.
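As a hypothetical sketch of that thought (all names here are made up for illustration): once a flowchart is encoded as a data structure, an interpreter can “follow the boxes” mechanically, much as a human reader does:

```python
# A flowchart for "count n down to zero", encoded as a graph of boxes.
# Action boxes carry a 'do' and a 'next'; decision boxes carry an 'if'
# with 'yes'/'no' branches.
flowchart = {
    'start': {'do': lambda env: env, 'next': 'test'},
    'test':  {'if': lambda env: env['n'] > 0, 'yes': 'decr', 'no': 'done'},
    'decr':  {'do': lambda env: {**env, 'n': env['n'] - 1,
                                 'steps': env['steps'] + 1},
              'next': 'test'},
}

def run(chart, env, node='start'):
    # Walk the chart, executing action boxes and taking decision
    # branches, until reaching the terminal 'done' node.
    while node != 'done':
        box = chart[node]
        if 'if' in box:
            node = box['yes'] if box['if'](env) else box['no']
        else:
            env, node = box['do'](env), box['next']
    return env

print(run(flowchart, {'n': 3, 'steps': 0}))  # {'n': 0, 'steps': 3}
```

Of course, the hard part the parent comments point at is exactly the step this sketch skips: getting from a drawn diagram to such a structure, with all the data and state made explicit.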
In case of a down vote on something that seems reasonable and/or is non-inflammatory, it’d be informative if someone left a note as to why it was being down voted.
Perhaps it merely indicates that the voter doesn’t agree about the reasonableness. I neither voted on nor—until now—read your comment, but I note that in general many people write things that they evidently consider reasonable but which I consider utter nonsense, and others have vehemently disagreed with what seem to me to be reasonable statements by myself and others. Neither objective reasonableness (to the extent that such a thing exists) nor the belief of the author will force another to perceive it as reasonable.
Not especially inflammatory, true. I note however that you opened with a contradiction, “No.”. That has a clear meaning of asserting the falsity of its parent. If the parent is (perceived to be) correct then a negation may be considered sufficient to downvote the comment with or without reading the remainder.
This is a true statement. Another true statement is “Writing declarations of downvotes can be perceived as a nuisance by third parties and promote enmity, or at the very least downvoting by the downvoted author. This is a negative consequence for the downvoter, who is not obliged to abandon his or her anonymity if they don’t choose to.”
Fair enough.
(I downvoted both comments. OrphanWilde’s assertion was mostly meaningless and given without substantiation/clarification, and your reply engaged it on the object level instead of pointing that out (or silently downvoting), sustaining a flawed mode of discussion. Being “non-inflammatory” is an insufficiently strict standard; a conversation should be sane.)
I agree.
Can you elaborate what you mean by ‘object level’?
Also, I am kind of perplexed here—you don’t approve of my deciding to react to a seemingly vague statement, which I did with the intent of getting OrphanWilde to clarify himself? I realize that I phrased my reply badly, and that starting with a negation was counterproductive, but still.
Let me clarify here: I do not care so much about the downvote as I do about being engaged in a conversation.
Someone asserts a confused statement whose meaning is unclear. An example of an “object level” response is to make up an interpretation for that statement with a particular meaning, and immediately engage that interpretation (for example, by giving an argument for modifying some of its details).
This has two immediate problems. First, the interpretation that you’ve made up isn’t necessarily the intended one, and in fact no clear intended interpretation may exist, in the sense that the original statement wasn’t constructed to communicate a clear idea, but was to a significant extent a confabulation. This may result in talking past each other, thinking of different things, and in simple cases may lead to an argument about definitions. Second, the fact that the original statement was confusing is itself significant and worthy of attention. It may mean that you lack knowledge of context or training necessary to interpret it, or that the person making the statement needs to improve their communication effort or skills, or that they need to think more carefully to make sure that there is an actual idea that is being described. These problems have little to do with the topic of discussion, hence “not object level”.
(Even worse, an “object level response” may itself fail to reflect on any particular idea.)
On the other hand, asking for clarification or accusing the other party of talking confused nonsense bring their own problems.
I generally reply on the object level, but note that I’m unsure if I parsed their statement correctly, so they can clarify in their next comment if I misinterpreted.
Well, doing it in the air probably isn’t going to happen until we have augmented reality systems that go far beyond Google Goggles, but I’ve been hearing speculation about more visually oriented symbol-based programming since at least 1995 or thereabouts: basically, compilable versions of the block diagrams programmers already use for design work. It seems to be one of those technologies that’s always ten years off, though.
It’s already common for hardware engineers to directly manipulate chip layout through a visual interface, though for all but the simplest circuits there’s also a syntactic element in the form of Verilog or VHDL code.