Well, a genie isn’t going to care about what we think unless it was designed to do so, which seems like a very human thing to make it do. But whatever.
As for the difference between literal and malicious genies … I’m just not sure what a “literal” genie is supposed to be doing, if it’s not deducing my desires based on audio input. Interpreting things “literally” is a mistake people make while trying to do this; a merely incompetent genie might make the same mistake, but why should we pay any more attention to that mistake than to, say, mishearing us, or mistaking parts of our instructions for sarcasm?
Exactly. There isn’t a literal desire in an audio waveform, nor in words. And there is a literal genie: the compiler. You have to be very verbose with it, though, because it doesn’t model what you want: it doesn’t cull the space of possible programs down to the much smaller space of programs you may want, so you have to point into a much larger space, for which you need a much larger index, i.e. write long computer programs.
So, sorry: what is this “literal genie” doing, exactly? Is it trying to use my natural-language input as code, which is run to determine its actions?

Well, the compiler would not correctly process your normal way of speaking, because the normal way of speaking requires modelling the speaker for interpretation.
An image from a camera can mean a multitude of things. It could be an image of a cat, or of a dog. An image is never literally a cat or a dog, of course. To tell cats and dogs apart with good fidelity, one has to model the processes producing the image and classify based on some part of that model (the animal); the data of interest is a property of the process which produced the input. Natural processing of the normal manner of speaking is done with the same general mechanisms: one takes in the data and models the process producing it, to obtain properties of that process which are actually meaningful. And since humans all have this ability, natural language does not, in its normal spoken form, have any defined literal meaning that is naturally separate from some subtler meaning or intent.
You have to be very verbose with it, though, because it doesn’t model what you want: it doesn’t cull the space of possible programs down to the much smaller space of programs you may want,
Have you used ML? I’ve been told by its adherents that it does a good job of doing just that.
I’ve been told by [insert language here] advocates that it does a good job of [insert anything]. The claims are inversely proportional to popularity. Basically, no programming language whatsoever infers anything about any sort of high-level intent (and no, the type of an expression is not a high-level intent), so they’re all pretty much equal, except that some are more unusable than others and consequently less used. Programming currently works as follows: a human, using a mental model of the environment, makes a string that gets the computer to do something. Most kinds of cleverness put into the “how the compiler works” part can thus be expected to decrease, rather than increase, productivity, and indeed that’s precisely what happens with those failed attempts at a better language.
Basically, no programming language whatsoever infers anything about any sort of high-level intent (and no, the type of an expression is not a high-level intent),
The phrase they (partially tongue in cheek) used was “compile time correctness checking”, i.e., the criterion of being a syntactically correct ML program is a better approximation to the space of programs you may want than is the case for most other languages.
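To make the “compile time correctness checking” idea concrete, here is a minimal sketch in OCaml (one member of the ML family); the function and data are my own invention, not anything from the discussion above:

    (* List.find_opt may find nothing, and its option return type forces the
       caller to decide what happens in that case before the program compiles. *)
    let first_even xs = List.find_opt (fun x -> x mod 2 = 0) xs

    let describe xs =
      match first_even xs with
      | Some n -> "first even: " ^ string_of_int n
      | None -> "no even numbers"

    (* Using the result directly as an int, e.g. first_even [1; 3] + 1, is
       rejected at compile time rather than blowing up on some unlucky input. *)

The only point being illustrated is that the set of strings the compiler accepts is already biased toward programs that handle the missing case; the compiler still knows nothing about intent.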
In other words, a larger proportion of the strings that fail to compile in ML are programs that exhibit high-level behavior that you don’t want?
This formulation is missing the programmer’s mind. The claim that a programming language is better in this way is that, for a given intended result, the set of strings that
1. a programmer would believe achieve the desired behavior,
2. compile*, and
3. do not exhibit the desired behavior
is smaller than for other languages, because there are fewer ways to write program fragments that deviate from the obvious-to-the-(reader|writer) behavior.
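A hypothetical instance of the kind of string that claim is about, sketched again in OCaml (the names and numbers are invented): with bare floats, a call with its arguments swapped still compiles and still looks right to a reader, whereas distinct wrapper types remove that string from the “believed correct, compiles, yet wrong” set altogether.

    (* With bare floats, the swapped call compiles and reads as plausibly as the correct one. *)
    let braking_torque_naive rpm load = 0.1 *. rpm +. load

    (* Wrapper types make the swapped call fail to compile instead of running wrongly. *)
    type rpm = Rpm of float
    type load = Load of float

    let braking_torque (Rpm r) (Load l) = 0.1 *. r +. l

    let _ok = braking_torque (Rpm 1200.0) (Load 50.0)
    (* let _bad = braking_torque (Load 50.0) (Rpm 1200.0)   <- type error, caught before running *)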
Is it harder to write a control program for a wind turbine that causes excessive fatigue cracking in ML than in other languages?
The claim is yes, given that the programmer is intending to write a program which does not cause excessive fatigue cracking.
(I’m not familiar with ML; I do not intend to advocate it here. I am attempting to explicate the general thinking behind any effort to create/advocate a better-in-this-dimension programming language.)
* for “dynamic” languages, substitute “does not signal an error on a typical input”, i.e., is not obviously broken when trivially tested.
Suppose that the programmer is unaware of the production-line issues which result in stress concentration on the turbine blades and which make turbines that cycle more often develop larger fatigue cracks. Suppose the programmer is also unaware that there is no corresponding production-line issue that makes consistently overspeed turbines develop larger fatigue cracks.
The programmer is aware that both overspeed and cyclical operations will result in the growth of two different types of cracks, and that the ideal solution uses both cycling the turbine and tolerating some amount of overspeed operation.
In that case, I don’t find it reasonable that the choice of programming language should have any effect on the programmer’s belief that fatigue cracks will propagate; the only possible benefit would be making the programmer more sure that the string was a program which controls turbines. The high-level goals of the programmer often aren’t within the computer.
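To illustrate that last point with a sketch (hypothetical OCaml once more; the API, thresholds, and units are made up): everything below type-checks and is unmistakably a turbine controller, yet nothing in the language can say whether this particular policy is the one that grows fatigue cracks.

    (* A well-typed controller: the compiler guarantees every branch yields a valid
       command, but the fatigue consequences of the policy live outside the program. *)
    type command = Run of float (* target rpm *) | Feather

    let rated_rpm = 1800.0

    let choose_command ~wind_speed_m_s =
      if wind_speed_m_s > 25.0 then Feather              (* cut out in storms *)
      else if wind_speed_m_s < 3.0 then Feather          (* no useful wind *)
      else Run (min rated_rpm (wind_speed_m_s *. 90.0))  (* possibly the worst duty cycle for cracks *)

Whether cycling between Run and Feather here grows cracks faster than tolerating some overspeed is a fact about the turbine, not about the string, which is the sense in which the programmer’s high-level goals aren’t within the computer.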