If you haven’t heard of the STEPS project from the Viewpoints Research Institute already, it may interest you. (Their last report is here.)

Thank you for the reference to STEPS; I am now evaluating this material in some detail.
I would like to discuss the differences and similarities I see between their work and my perspective; are you familiar enough with STEPS to discuss it from their point of view?
In reply to this:

This use of a general purpose language also shows up in the current generation of language workbenches (and here). For example, JetBrains’ Meta Programming System uses a Java-like base language, and Intentional Software uses a C# (like?) base language.

My claim is that this use of a base general purpose language is not necessary, and possibly not generally desirable. With an ecosystem of DSLs general purpose languages can be generated when needed, and DSLs can be generated using only other DSLs.
I would like to discuss the differences and similarities I see between their work and my perspective; are you familiar enough with STEPS to discuss it from their point of view?
I think I am (though I’m but an outsider). However, I can’t really see any significant difference between their approach and yours, except maybe that their DSLs tend to be much closer to Turing-complete general purpose languages than what you would like. It matters little, however, because the cost of implementing a DSL is so low that there is little danger of being trapped in a Turing tar-pit. (To give you an idea, implementing JavaScript on top of their stack takes 200 lines, and I believe the whole language stack implements itself in about 1000 lines.)
In the unlikely case you haven’t already, you may want to check out their other papers, which include the other progress reports and other specific findings. You should be most interested in Ian Piumarta’s work on maru and Alessandro Warth’s on OMeta, both of which can be examined separately.
My claim is that this use of a base general purpose language is not necessary, and possibly not generally desirable. With an ecosystem of DSLs general purpose languages can be generated when needed, and DSLs can be generated using only other DSLs.
This seems like a bad idea. There is a high cognitive cost to learning a language. There is a high engineering cost to making different languages play nice together—you need to figure out precisely what happens to types, synchronization, etc. at the boundaries.
I suspect that breaking programs into pieces that are defined in terms of separate languages is lousy engineering. Among other things, traditional unix shell programming has very much this flavor—a little awk, a little sed, a little perl, all glued together with some shell. And the outcome is usually pretty gross.
These are well-targeted critiques, and they are points my proposal must address. I will address them here, while not claiming that the approach I propose is immune to “bad design”.
There is a high cognitive cost to learning a language.
Yes, traditional general purpose languages (GPLs) and many domain specific languages (DSLs) are hard to learn. There are a few reasons I believe the approach I propose can ease this cost. The DSLs I propose are (generally) small, composable, heavily reused, and interface-oriented, which is probably very different from the GPLs (and perhaps the DSLs) of your experience. I will also describe what I call the encoding problem and compare how it plays out for DSLs and GPLs, to show why well-chosen DSLs should fare better.
In my model there will be heavy reuse of small (or even tiny) DSLs. The DSLs can be small because they can be composed to create new DSLs (via transparent implementations, heavy use of generics, transformation, and partial specialization). Composition lets each DSL deal with a distinct and simple concern, yet still be combined with others. Reuse is enhanced because many problem domains, regardless of their abstraction level, can be effectively modeled using common concerns: consider functions, Boolean logic, control structures, trees, lists, and sets. Cross-cutting concerns can be handled using the approaches of aspect-oriented programming.
The small size of these commonly used DSLs and their focused concerns make them individually easy to learn. The heavy reuse lets that knowledge carry over across projects and across scales and types of abstraction. Learning to program with a large number of these DSLs will probably be roughly equivalent to learning a new GPL.
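To make the composition just described less abstract, here is a rough sketch (in Python, purely as an illustration; every name in it is hypothetical) of two tiny single-concern DSLs, one for predicates over records and one for list transformations, represented as plain data and fused into one runnable pipeline by a small composition step.

```python
# Illustrative sketch only: two tiny single-concern "DSLs" represented as plain
# data, plus a composition step that fuses them into one runnable pipeline.
# All names here (eval_pred, compile_pipeline, ...) are hypothetical.

# DSL 1: predicates over records, written as nested tuples.
#   ("and", p, q) | ("gt", field, value) | ("eq", field, value)
def eval_pred(pred, record):
    op = pred[0]
    if op == "and":
        return eval_pred(pred[1], record) and eval_pred(pred[2], record)
    if op == "gt":
        return record[pred[1]] > pred[2]
    if op == "eq":
        return record[pred[1]] == pred[2]
    raise ValueError(f"unknown predicate: {op!r}")

# DSL 2: list transformations, written as a sequence of steps.
#   ("where", <predicate program>) | ("pick", field) | ("sort",)
def compile_pipeline(steps):
    """Fuse a pipeline program into a single Python function."""
    def run(records):
        out = list(records)
        for step in steps:
            if step[0] == "where":        # composition point: DSL 2 embeds DSL 1
                out = [r for r in out if eval_pred(step[1], r)]
            elif step[0] == "pick":
                out = [r[step[1]] for r in out]
            elif step[0] == "sort":
                out = sorted(out)
            else:
                raise ValueError(f"unknown step: {step[0]!r}")
        return out
    return run

# Usage: the combined program stays small and close to the problem domain.
adults_by_name = compile_pipeline([
    ("where", ("and", ("gt", "age", 17), ("eq", "active", True))),
    ("pick", "name"),
    ("sort",),
])
people = [{"name": "Ada", "age": 36, "active": True},
          {"name": "Bo", "age": 12, "active": True}]
print(adults_by_name(people))   # -> ['Ada']
```

The point of the sketch is only the shape: each piece knows one concern, and the composition step is itself just another (trivial) transformation.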
In my model DSLs are best thought of as interfaces, where the interface is customized to provide an efficient and easily understood way of manipulating solutions within the problem domain. In some cases this might be a text-based interface such as we commonly program in now, but it could also be graphs, interactive graphics, sound, touch, or EM signals; really, any form of communication. The method and structure of communication is constrained by the interface and is chosen to provide a useful (and low-noise) perspective into the problem domain. Text-based languages often come with a large amount of syntactic noise. (Ever tried template-based metaprogramming in C++? Ack!)
Different interfaces (DSLs) may provide different perspectives into the same solution space of a problem domain. For example, a graph and the data being graphed: the underlying data could be modified by interacting with either interface. The choice of interface will depend on the programmer’s intention. This is also related to the concept of projectional editors, and can be enhanced with concepts like Example Centric Programming.
The encoding problem is the problem of transforming an abstract model (the solution) into code that represents it properly. If the solution is coded in a high-level DSL, then the description of the model that we create while thinking about the problem and talking to our customers might actually be the final top-level code. In that case the cognitive cost of learning the DSL is the same as that of understanding the problem domain, and the cost of understanding the program is that of understanding the solution model. For well-chosen DSLs the encoding problem is easy to solve.

In the case of general purpose languages the encoding problem can add arbitrary levels of complexity. In addition to understanding the problem domain and the abstract solution model, we also have to know how these are encoded into the general purpose language. That adds a great deal of learning effort even if we already know the language, and even if we find a library that lets us code the solution relatively directly. Perhaps worse than the learning cost is the ongoing mental effort of encoding and decoding between the abstract models and the general purpose implementation: we have to understand and modify the solution through an additional layer of syntactic noise. The extra complexity, the larger code size, and the added cognitive load imposed by general purpose languages multiply the likelihood of bugs.
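To make the encoding problem concrete, here is a hedged sketch in Python (all names, and the discount rule itself, are hypothetical). The first form is roughly what the top-level code could look like in a well-chosen DSL; the second is the same rule hand-encoded in a general purpose style, with the extra machinery a reader has to decode before seeing the rule.

```python
# Sketch of the "encoding problem" (illustrative only; all names are hypothetical).

# (1) The solution stated at the level of the problem domain -- roughly what the
#     top-level code could look like in a well-chosen DSL:
discount_rule = ("give_discount", 0.10,
                 ("and", ("gt", "order_total", 100),
                         ("eq", "customer_tier", "gold")))

# (2) The same rule hand-encoded in a general purpose style: the domain statement
#     is now interleaved with iteration, copying, and bookkeeping that have
#     nothing to do with the rule itself.
def apply_discounts(orders):
    results = []
    for order in orders:
        total = order.get("order_total", 0)
        tier = order.get("customer_tier")
        if total > 100 and tier == "gold":
            order = dict(order)                  # avoid mutating the input
            order["discount"] = round(total * 0.10, 2)
        results.append(order)
    return results

orders = [{"order_total": 250, "customer_tier": "gold"},
          {"order_total": 80,  "customer_tier": "gold"}]
print(apply_discounts(orders))
```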
There is a high engineering cost to making different languages play nice together—you need to figure out precisely what happens to types, synchronization, etc. at the boundaries.
Boundary costs can be common and high even if you are lucky enough to program exclusively in a single general purpose language. Ever tried to use functions from two different libraries on the same data? Image processing libraries and math libraries are notorious for custom memory representations, none of which ever seem to match my preferred representation of the same data. Two GUI libraries or stream I/O libraries will clobber each other’s output. The cost (both development-time and run-time) of conforming disparate interfaces in general purpose languages is outrageous. My proposal just moves these boundary costs to new (and perhaps unexpected) places, while providing tools (DSLs for composition and transformation) that ease the effort of connecting the disparate interfaces.
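As one sketch of such a boundary (the two “libraries” here are hypothetical stand-ins, not real packages): an imaging library that wants interleaved 8-bit RGB rows and a math library that wants per-channel float planes force a conversion in both directions on every call across the boundary.

```python
# Sketch of a boundary cost inside a single GPL (hypothetical representations):
# the "imaging" side wants interleaved 8-bit RGB rows, the "math" side wants one
# float plane per channel, so every call across the boundary pays a conversion.

def to_planar_floats(interleaved_rows):
    """Imaging-style rows [[r,g,b, r,g,b, ...], ...] -> three float planes."""
    r, g, b = [], [], []
    for row in interleaved_rows:
        r.append([row[i]     / 255.0 for i in range(0, len(row), 3)])
        g.append([row[i + 1] / 255.0 for i in range(0, len(row), 3)])
        b.append([row[i + 2] / 255.0 for i in range(0, len(row), 3)])
    return r, g, b

def to_interleaved_bytes(r, g, b):
    """Three float planes -> imaging-style interleaved 8-bit rows."""
    rows = []
    for r_row, g_row, b_row in zip(r, g, b):
        row = []
        for rv, gv, bv in zip(r_row, g_row, b_row):
            row.extend([int(rv * 255), int(gv * 255), int(bv * 255)])
        rows.append(row)
    return rows

image = [[255, 0, 0, 0, 255, 0]]           # one row, two pixels
planes = to_planar_floats(image)           # pay the conversion going in...
print(to_interleaved_bytes(*planes))       # ...and again coming back out
```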
I suspect that breaking programs into pieces that are defined in terms of separate languages is lousy engineering.
I’ve described my proposal as a perspective shift, and “interface” might be a better term than “language”. To shift your perspective, consider the interfaces you have to your file system. You may have a command-line interface to it, a GUI, and a programmatic interface (in your favorite language). You choose the appropriate interface based on the task at hand. The same is true for the interfaces I propose. You could use the file system in a complex way to perform perfectly good source code control, or you could rely on the simpler interface of a source control system. The source control system itself might simply rely on a complex structuring of the file system, but you don’t really care how it works as long as it is easy to use and meets your needs. Similarly, you could use CSV text files to store your data, but if you need to perform complex queries a database engine is probably a better choice.
We already break programs (the stuff we do) into pieces that are defined in terms of separate languages (interfaces), and we consider this good engineering. My proposal is about how to successfully extend this type of separation of concerns to its granular and interconnected end point.
Among other things, traditional unix shell programming has very much this flavor—a little awk, a little sed, a little perl, all glued together with some shell. And the outcome is usually pretty gross.
Your UNIX shell programming example is well placed. It is roughly a model that matches my proposal of connected DSLs, but it is not a panacea (perhaps far from it). I will point out that the languages you mention (awk, sed, and perl) are all general purpose (Turing-complete) text-based languages, which is far from the type of DSL I am proposing. Also, the shell limits interaction between DSLs to character streams via pipes. This representation of communication rarely maps cleanly to the problem being solved, forcing the implementations to compensate. This generates a great deal of overhead in terms of cognitive effort, complexity, cost ($, development time, run-time), and in some sense a reduction of beauty in the Universe.
To highlight the difference between shell programming and the system I’m proposing, start with the shell programming model, but in addition to character streams add support for communicating structured data, and in addition to pipes add new communication models, like a directed-graph communication model. Add DSLs that perform transformations on structured data, and DSLs for interactive interfaces. Now you can create sophisticated applications such as syntax-sensitive editors while programming at a level that feels like scripting, or perhaps like painting; and given the composability of my DSLs, the parts of such a program could be optimized and specialized (to the hardware) together, to run like a single, purpose-built program.
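For concreteness, here is a rough Python sketch of that extended model (every name in it is hypothetical): stages pass structured records rather than character streams, and the wiring is a small directed graph (one source fanned out to two sinks) rather than a single linear pipe.

```python
# Rough sketch of the extended "shell" model: stages exchange structured records
# (not character streams), and the wiring is an explicit little directed graph
# rather than one linear pipe. All names here are hypothetical.

def parse(lines):
    for line in lines:
        name, score = line.split(",")
        yield {"name": name, "score": int(score)}   # structured data, not text

def passing(records):
    for r in records:
        if r["score"] >= 60:
            yield r

def failing(records):
    for r in records:
        if r["score"] < 60:
            yield r

def run_graph(source, stages):
    """Fan one source out to several downstream stages (a tiny directed graph)."""
    records = list(source)
    return {name: list(stage(records)) for name, stage in stages.items()}

result = run_graph(parse(["ada,91", "bo,45", "cy,77"]),
                   {"pass": passing, "fail": failing})
print(result["pass"])   # [{'name': 'ada', 'score': 91}, {'name': 'cy', 'score': 77}]
print(result["fail"])   # [{'name': 'bo', 'score': 45}]
```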
I’m only going to respond to the last few paragraphs you wrote. I did read the rest. But I think most of the relevant issues are easier to talk about in a concrete context which the shell analogy supplies.
Your UNIX shell programming example is well placed. It is roughly a model that matches my proposal of connected DSLs, but it is not a panacea (perhaps far from it). I will point out that the languages you mention (awk, sed, and perl) are all general purpose (Turing-complete) text-based languages, which is far from the type of DSL I am proposing. Also, the shell limits interaction between DSLs to character streams via pipes. This representation of communication rarely maps cleanly to the problem being solved, forcing the implementations to compensate. This generates a great deal of overhead in terms of cognitive effort, complexity, cost ($, development time, run-time), and in some sense a reduction of beauty in the Universe.
Yes. It’s clunky. But it’s not clunky by happenstance. It’s clunky because standardized IPC is really hard.
To highlight the difference between shell programming and the system I’m proposing, start with the shell programming model, but in addition to character streams add support for communicating structured data, and in addition to pipes add new communication models, like a directed-graph communication model. Add DSLs that perform transformations on structured data, and DSLs for interactive interfaces. Now you can create sophisticated applications such as syntax-sensitive editors while programming at a level that feels like scripting, or perhaps like painting; and given the composability of my DSLs, the parts of such a program could be optimized and specialized (to the hardware) together, to run like a single, purpose-built program.
It’s a standard observation in the programming language community that a library is sort of a miniature domain-specific language. Every language worth talking about can be “extended” in this way. But there’s nothing novel about saying “we can extend the core Java language by defining additional classes.” Languages like C++ and Scala go to some trouble to let user classes resemble the core language syntactically (with features like operator overloading).
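For what it’s worth, a minimal sketch of that point in Python (the Vector class is hypothetical; C++ and Scala just polish the same trick further): with operator overloading, client code reads like core-language arithmetic even though it is “just a library”.

```python
# The "library as a miniature DSL" point, shown via operator overloading.
# (The text mentions C++ and Scala; Python's __add__/__mul__ hooks make the
# same point.) The Vector class below is a hypothetical example.

class Vector:
    def __init__(self, *xs):
        self.xs = tuple(xs)

    def __add__(self, other):
        return Vector(*(a + b for a, b in zip(self.xs, other.xs)))

    def __mul__(self, scalar):
        return Vector(*(a * scalar for a in self.xs))

    def __repr__(self):
        return f"Vector{self.xs}"

# Client code looks like built-in arithmetic, but it is all library code.
print(Vector(1, 2) + Vector(3, 4) * 2)   # Vector(7, 10)
```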
I assume you want to do something different from that, since if you wanted C++, you know where to find it.
In particular, I assume you want to be able to write and compose DSLs, where those DSLs cannot be implemented as libraries in some base GPL. But that’s a self-contradictory desire. If DSL A and DSL B don’t share common abstractions, they won’t compose cleanly.
Think about types for a minute. Suppose DSL A has some type system t, and DSL B has some other set of types t’. If t and t’ aren’t identical, then you’ll have trouble sharing data between those DSLs, since there won’t be a way to represent the data from A in B (or vice versa).
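A toy version of that mismatch (a Python sketch with hypothetical mini “type systems”): suppose A distinguishes integers from a missing value while B has only strings. Everything shared has to squeeze through B’s single type, and the mapping is not faithful.

```python
# Sketch of the type-boundary problem (hypothetical mini "type systems"):
# DSL A distinguishes integers from a missing value; DSL B has only strings.

def a_to_b(value):                 # A's {int | missing} -> B's {str}
    return "" if value is None else str(value)

def b_to_a(text):                  # B's {str} -> A's {int | missing}
    return None if text == "" else int(text)

sample = [42, None, 7]
through_b = [a_to_b(v) for v in sample]    # ['42', '', '7']
print([b_to_a(t) for t in through_b])      # this sample survives: [42, None, 7]

# But B cannot tell "no value" from "the empty string", and most of B's strings
# ("hello", "3.5") have no representation in A at all -- the boundary is lossy.
print(b_to_a(""))                          # None: was this missing, or ""?
```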
Alternatively, ask about implementation. I have a chunk of code written in A and a chunk written in B. I’d like my compiler/translator to optimize across the boundary. I also want to be able to handle memory management, synchronization, etc. across the boundary. That’s what composability means, I think.
Today, we often achieve it by having a shared representation that we compile down to. For instance, there are a bunch of languages that all compile down to JVM bytecode, to the .NET CLR, or to GCC’s intermediate representation. (This also sidesteps the type problem I mentioned above.)
But the price is that if you have to compile to reasonably clean JVM bytecode (or the like), that really limits you. To give an example of an un-embeddable language, I don’t believe you could compile C to JVM bytecode and have it efficiently share objects with Java code. Look at the contortions Scala has gone through to implement closures and higher-order functions efficiently.
If two DSLs A and B share a common internal representation, they aren’t all that separate as languages. Alternatively, if A and B are really different—say, C and Haskell—then you would have an awfully hard time writing a clean implementation of the joint language.
Shell is a concrete example of this. I agree that a major reason why shell is clunky is that you can’t communicate structured data. Everything has to be serialized, and in practice mostly as newline-delimited lists of records, which is very limiting. But that’s not simply because of bad design. It’s because most of the languages we glue together with shell don’t have any other data type in common. Awk and sed don’t have powerful type systems. If they did, they would be much more complicated—and that would make them much less useful.
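A small sketch of that limitation (Python; the record is hypothetical): flattening even a slightly structured record into newline- and tab-delimited text forces an ad hoc encoding that every tool downstream has to agree on, and any embedded newline silently becomes a record boundary.

```python
# Sketch of why newline-delimited records are limiting. Once a nested record is
# flattened to "one line per record, one tab per field", any structure (or any
# field that itself contains a newline or tab) has to be smuggled through an
# ad hoc encoding that every tool in the pipeline must agree on.

record = {"name": "Ada", "notes": "line one\nline two", "tags": ["vip", "eu"]}

# The usual shell-friendly flattening:
flat = "\t".join([record["name"], record["notes"], ",".join(record["tags"])])
print(repr(flat))        # 'Ada\tline one\nline two\tvip,eu'

# The embedded newline now looks like a record boundary, and the tag list is
# only recoverable if every consumer knows that this column is comma-separated.
print(len(flat.split("\n")))   # 2 "records" where there was 1
```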
Another reason shell programming is hard is that there aren’t good constructs in the shell for error handling, concurrency, and so forth. But there couldn’t be, in some sense—you would have to carry the same mechanisms into each of the embedded languages. And that’s intolerably confining.