The Big Mud Puddle - Why Concatenative Programming Matters
12 February 2012
There doesn’t seem to be a good tutorial out there for concatenative programming, so I
figured I’d write one, inspired by the classic “Why Functional Programming Matters” by
John Hughes. With any luck it will get more people interested in the topic, and give me a
URL to hand people when they ask what the heck I’m so excited about.
Foremost, there seems to be some disagreement over what the term “concatenative”
actually means. This Stack Overflow answer by Norman Ramsey even goes so far as to
say:
This is rather harsh, and, well, wrong. Not entirely wrong, mind, just rather wrong. But it’s
not surprising that this kind of misinformation gets around, because concatenative
programming isn’t all that well known. (I aim to change that.)
One of the problems with functional programming is that the oft-touted advantages—
immutability, referential transparency, mathematical purity, &c.—don’t immediately seem
to apply in the real world. The reason “Why Functional Programming Matters” was
necessary in the first place was that functional programming had been mischaracterised
as a paradigm of negatives—no mutation, no side-effects—so everybody knew what you
couldn’t do, but few people grasped what you could.
There is a similar problem with concatenative programming. Just look at the Wikipedia
introduction:
This is all true—and all irrelevant to your immediate problem of why you should care. So
in the next sections, I will show you:
How concatenative programming works
♦♦♦
The Basics
In an applicative language, functions are applied to values to get other values. λ-calculus,
the basis of functional languages, formalises application as “β-reduction”, which just says
that if you have a function (f x = x + 1) then you can substitute a call to that function (f y)
with its result (y + 1). In λ-calculus, simple juxtaposition (f x) denotes application, but
function composition must be handled with an explicit composition function:

compose := λf. λg. λx. f (g x)
This definition says “the composition (compose) of two functions (f and g) is the result of
applying one (f) to the result of the other (g x)”, which is pretty much a literal definition.
Note that this function can only be used to compose functions of a single argument—
more on this later.
Concatenative languages have a much simpler basis—there are only functions and
compositions, and evaluation is just the simplification of functions. It is never necessary
to deal with named state—there are no variables. In a sense, concatenative languages
are “more functional” than traditional functional languages! And yet, as we will see, they
are also easy to implement efficiently.
♦♦♦
Composition
Consider a simple example, multiplying two numbers:

2 3 ×

Now, we said all terms denote functions—so what the heck are 2 and 3 doing in there?
They sure look like values to me! But if you tilt your head a little, you can see them as
functions too: values take no arguments and return themselves. If we were to write
down the inputs and outputs of these functions, it would look like this:
2 :: () → (int)
3 :: () → (int)
As you may guess, “x :: T” means “x is of type T”, and “T1 → T2” means “a function from
type T1 to type T2”. So these functions take no input, and return one integer. We also
know the type of the multiplication function, which takes two integers and returns just one:

× :: (int, int) → (int)
Now how do you compose all of these functions together? Remember we said “f g”
means the (reverse) composition of f and g, but how can we compose 2 with 3 when their
inputs and outputs don’t match up? You can’t pass an integer to a function that takes no
arguments.
The solution lies in something called stack polymorphism. Basically, we can give a
generic, polymorphic type to these functions that says they’ll take any input, followed by
what they actually need. They return the arguments they don’t use, followed by an actual
return value:

2 :: ∀A. (A) → (A, int)
3 :: ∀A. (A) → (A, int)
× :: ∀A. (A, int, int) → (A, int)
“∀A.” means “For all A”—in these examples, even if A has commas in it. So now the
meaning of the expression “2 3” is clear: it is a function that takes no input and returns
both 2 and 3. This works because when we compose two functions, we match up the
output of one with the input of the other, so we start with the following definitions:

2 :: ∀A. (A) → (A, int)
3 :: ∀B. (B) → (B, int)
× :: ∀C. (C, int, int) → (C, int)

Matching the output of “2” against the input of “3” gives B = (A, int), so:

2 3 :: ∀A. (A) → (A, int, int)

And matching that against the input of “×” gives C = (A), so:

2 3 × :: ∀A. (A) → (A, int)
This is correct: the expression “2 3 ×” takes no input and produces one integer. Whew! As
a sanity check, note also that the equivalent function “6” has the same type as “2 3 ×”:

6 :: ∀A. (A) → (A, int)
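You can see this encoding at work by modelling it in Haskell, representing the stack as nested pairs. This is an illustrative sketch, not part of the article’s notation; the type variable s plays the role of ∀A:

two :: s -> (s, Int)
two s = (s, 2)

three :: s -> (s, Int)
three s = (s, 3)

times :: ((s, Int), Int) -> (s, Int)
times ((s, x), y) = (s, x * y)

-- “2 3 ×” is just composition (written right-to-left in Haskell notation):
program :: s -> (s, Int)
program = times . three . two

-- program () == ((), 6)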
♦♦♦
Cool Stuff
In the above example, we worked from left to right, but because composition is
associative, you can actually do it in any order. In math terms, (f ∘ g) ∘ h = f ∘ (g ∘ h). Just
as “2 3 ×” contains “2 3”, a function returning two integers, it also contains “3 ×”, a
function that multiplies its argument by three:
3 × :: (int) → (int)
(From now on I’ll omit the ∀ bit from type signatures to make them easier to read.)
So we can already trivially represent partial function application. But this is actually a
huge win in another way. Applicative languages need to have a defined associativity for
function application (almost always from left to right), but here we’re free from this
restriction. A compiler for a statically typed concatenative language could literally:

Divide the program into arbitrary segments
Compile every segment in parallel
Compose all the segments at the end
Because all we have are functions and composition, a concatenative program is a single
function—typically an impure one with side effects, but that’s by no means a requirement.
(You can conceive of a pure, lazy concatenative language with side-effect management
along the lines of Haskell.) Because a program is just a function, you can think about
composing programs in the same way.
This is the basic reason Unix pipes are so powerful: they form a rudimentary string-
based concatenative programming language. You can send the output of one program to
another (|); send, receive, and redirect multiple I/O streams (n<, 2>&1); and more. At the
end of the day, a concatenative program is all about the flow of data from start to finish.
And that again is why concatenative composition is written in “reverse” order—because
it’s actually forward:
+---+
| 2 |
+---+
|
| +---+
| | 3 |
| +---+
| |
V V
+---------+
| * |
+---------+
|
V
♦♦♦
Implementation
So far, I have deliberately stuck to high-level terms that pertain to all concatenative
languages, without any details about how they’re actually implemented. One of the very
cool things about concatenative languages is that while they are inherently quite
functional, they also have a very straightforward and efficient imperative implementation.
In fact, concatenative languages are the basis of many things you use every day:
The Java Virtual Machine on your PC and mobile phone
The CPython bytecode interpreter that powers BitTorrent, Dropbox, and YouTube
The PostScript page description language that runs many of the world’s printers
The Forth language that started it all, which still enjoys popularity on embedded
systems
The type of a concatenative function is formulated so that it takes any number of inputs,
uses only the topmost of these, and returns the unused input followed by actual output.
These functions are essentially operating on a list-like data structure, one that allows
removal and insertion only at one end. And any programmer worth his salt can tell you
what that structure is called.
It’s a stack, of course. Watch the stack evolve as we evaluate the expression “2 3 × 4 5 × +” from left to right:

Function  Output
          ()
2         (2)
3         (2, 3)
×         (6)
4         (6, 4)
5         (6, 4, 5)
×         (6, 20)
+         (26)
Moving from left to right in the expression, whenever we encounter a “value” (remember:
a nullary self-returning function), we push its result to the stack. Whenever we encounter
an “operator” (a non-nullary function), we pop its arguments, perform the computation,
and push its result. Another name for postfix is reverse Polish notation, which achieved
great success in the calculator market: it powered every HP calculator sold between 1968
and 1977, and many thereafter.
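Here is a minimal sketch of such an evaluator in Haskell. The Token type and eval function are illustrative names, not from any particular implementation:

data Token = Push Int | BinOp (Int -> Int -> Int)

-- Fold over the program from left to right, using a list as the stack.
eval :: [Token] -> [Int]
eval = foldl step []
  where
    -- A “value” pushes itself onto the stack.
    step stack (Push n) = n : stack
    -- An “operator” pops its arguments and pushes its result.
    step (y : x : rest) (BinOp f) = f x y : rest
    step _ _ = error "stack underflow: ill-typed program"

-- eval [Push 2, Push 3, BinOp (*), Push 4, Push 5, BinOp (*), BinOp (+)] == [26]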
So a concatenative language is a functional language that is not only easy, but trivial to
run efficiently, so much so that most language VMs are essentially concatenative. x86
relies heavily on a stack for local state, so even C programs have a little bit of
concatenativity in ’em, even though x86 machines are register-based.
Furthermore, it’s straightforward to make some very clever optimisations, which are
ultimately based on simple pattern matching and replacement. The Factor compiler uses
these principles to produce very efficient code. The JVM and CPython VMs, being stack-
based, are also in the business of executing and optimising concatenative languages, so
the paradigm is far from unresearched. In fact, a sizable portion of all the compiler
optimisation research that has ever been done has involved virtual stack machines.
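For a flavour of what such pattern matching can look like, here is a hypothetical one-pass constant folder (a sketch, not Factor’s actual rules) over the token stream from the evaluator above:

-- Reuses the Token type from the evaluator sketch.
optimize :: [Token] -> [Token]
optimize (Push x : Push y : BinOp f : rest) =
  optimize (Push (f x y) : rest) -- replace “x y op” with its result
optimize (t : rest) = t : optimize rest
optimize [] = []

Running it on the tokens for “2 3 × 4 5 × +” folds both multiplications at compile time, leaving only the final addition to do at run time.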
♦♦♦
Point-free Expressions
In point-free style, you define a function without naming its arguments (its “points”). It’s more meaningful to write what a function is versus what it does, and point-free functions are more succinct than so-called “pointful” ones. For all these reasons, point-free style is generally considered a Good Thing™.
However, if functional programmers really believe that point-free style is ideal, they
shouldn’t be using applicative languages! Let’s say you want to write a function that tells
you the number of elements in a list that satisfy a predicate. In Haskell, for instance:
countWhere :: (a -> Bool) -> [a] -> Int
countWhere predicate list = length (filter predicate list)
It’s pretty simple, even if you’re not so familiar with Haskell. countWhere returns the length
of the list you get when you filter out elements of a list that don’t satisfy a predicate.
Now we can use it like so:

countWhere (> 2) [1, 2, 3, 4, 5] == 3
We can write this a couple of ways in point-free style, omitting predicate and list:

countWhere = (length .) . filter
countWhere = (.) (.) (.) length filter
But the meaning of the weird repeated self-application of the composition operator (.)
isn’t necessarily obvious. The expression (.) (.) (.)—equivalently (.) . (.) using infix
syntax—represents a function that composes a unary function (length) with a binary
function (filter). This type of composition is occasionally written .:, with the type you
might expect:

(.:) :: (c -> d) -> (a -> b -> c) -> a -> b -> d

countWhere = length .: filter
But what are we really doing here? In an applicative language, we have to jump through
some hoops in order to get the basic concatenative operators we want, and get them to
typecheck. When implementing composition in terms of application, we must explicitly
implement every type of composition.
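For example, composing with a unary, binary, or ternary function each needs its own operator. These definitions are illustrative:

compose1 :: (b -> c) -> (a -> b) -> (a -> c)
compose1 f g x = f (g x)

compose2 :: (c -> d) -> (a -> b -> c) -> (a -> b -> d)
compose2 f g x y = f (g x y)

compose3 :: (d -> e) -> (a -> b -> c -> d) -> (a -> b -> c -> e)
compose3 f g x y z = f (g x y z)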
+-----------------+
| (1, 2, 3, 4, 5) |
+-----------------+
|
| +-------+
| | [2 >] |
| +-------+
| |
+---|-----------------|-----+
| | countWhere | |
| V V |
| +-----------------------+ |
| | filter | |
| +-----------------------+ |
| | |
| V |
| +--------+ |
| | length | |
| +--------+ |
| | |
+---|-----------------------+
|
V
When you’re building a diagram like this, just follow a few simple rules, such as:
If there aren’t enough arrows to match the block, the program is ill-typed
♦♦♦
Quotations
Notice the use of brackets for the predicate [2 >] in the preceding example? In addition to
composition, the feature that completes a concatenative language is quotation, which
allows deferred composition of functions. For example, “2 >” is a function that returns
whether its argument is greater than 2, and [2 >] is a function that returns “2 >”.
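In the nested-pair Haskell encoding from earlier, a quotation is simply a function pushed onto the stack as a value. Again, this is an illustrative sketch with made-up names:

-- “2 >”: a function from a stack with an Int on top to one with a Bool on top.
gt2 :: (s, Int) -> (s, Bool)
gt2 (s, x) = (s, x > 2)

-- “[2 >]”: pushes that function onto the stack without running it.
quotedGt2 :: s -> (s, (t, Int) -> (t, Bool))
quotedGt2 s = (s, gt2)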
It’s at this point that we go meta. While just composition lets us build descriptions of
dataflow machines, quotation lets us build machines that operate on descriptions of other
machines. Quotation eliminates the distinction between code and data, in a simple, type-
safe manner.
The “filter” machine mentioned earlier takes in the blueprints for a machine that accepts
list values and returns Booleans, and filters a list according to the instructions in those
blueprints. Here’s the type signature for it, writing “list t” for the type of lists:

filter :: ∀A. (A, list t, (t) → (bool)) → (A, list t)
There are all kinds of things you can do with this. You can write a function that applies a
quotation to some arguments, without knowing what those arguments are:

apply :: ∀A B. (A, A → B) → (B)
You can write a function to compose two quotations into a new one:
compose :: ∀A B C. (A → B, B → C) → (A → C)
And you can write one to convert a function to a quotation that returns it:

quote :: ∀A B. (A → B) → (() → (A → B))
♦♦♦
Now for something a bit more complicated. Say we want to define this function:

f x y z = y² + x² − |y|
This has a bit of everything: it mentions one of its inputs multiple times, the order of the
variables in the expression doesn’t match the order of the inputs, and one of the inputs is
ignored. So we need a function that gives us an extra copy of a value:

dup :: (T) → (T, T)

A function that lets us reorder values:

swap :: (T1, T2) → (T2, T1)

And a function that lets us ignore an unused input:

drop :: (T) → ()
From the basic functions we’ve defined so far, we can make some other useful functions, such as one that joins two values into a quotation:

join2 :: (T1, T2) → (() → (T1, T2))
join2 = quote swap quote swap compose
And hey, thanks to quotations, it’s also easy to declare your own control structures. For instance, here is one way the definitions can look (a sketch consistent with the description below):

true = [drop apply]
false = [swap drop apply]
if = apply
Those particular definitions for true and false will be familiar to anyone who’s used
Booleans in the λ-calculus. A Boolean is a quotation, so it behaves like an ordinary value,
but it contains a binary function that chooses one branch and discards another. “If-then-
else” is merely the application of that quotation to the particular branches.
Anyway, getting back to the math: we already know the type of our function ((int, int, int)
→ (int)); we just need to deduce how to get there. If we build a diagram of how the data
flows through the expression, we might get this:
Well…that sucked.
♦♦♦
A Lighter Note
You’ve just seen one of the major problems with concatenative programming—hey, every
kind of language has its strengths and weaknesses, but most language designers will lie
to you about the latter. Writing seemingly simple math expressions can be difficult
and unintuitive, especially using just the functions we’ve seen so far. To do so exposes
all of the underlying complexity of the expression that we’re accustomed to deferring to a
compiler.
Factor gets around this by introducing a facility for lexically scoped local variables. Some
things are simply more natural to write with named state rather than a bunch of stack-
shuffling. However, the vast majority of programs are not dominated by mathematical
expressions, so in practice this feature is not used very much:
“Out of 38,088 word and method definitions in the source code of Factor and
its development environment at the time of this writing, 310 were defined with
named parameters.”—Factor: A Dynamic Stack-based Programming
Language
One of the great strengths of concatenative languages, however, is their ability to refactor
complex expressions. Because every sequence of terms is just a function, you can
directly pull out commonly used code into its own function, without even needing to
rewrite anything. There are generally no variables to rename, nor state to manage.
square = dup ×
f = drop [square] [abs] bi − [square] dip +
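Tracing the new definition on inputs x, y, z confirms it. (Factor’s bi applies two quotations to the same value, and dip runs a quotation underneath the top of the stack.)

x y z            drop              → x y
x y              [square] [abs] bi → x y² |y|
x y² |y|         −                 → x (y² − |y|)
x (y² − |y|)     [square] dip      → x² (y² − |y|)
x² (y² − |y|)    +                 → x² + y² − |y|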
Which doesn’t look so bad, and actually reads pretty well: the difference between the
square and absolute value of the second argument, plus the square of the first. But even
that description shows that our mathematical language has evolved as inherently
applicative. It’s better sometimes just to stick to tradition.
♦♦♦
Whither Hence
So you’ve got the gist, and it only took a few dozen mentions of the word “concatenative”
to get there. I hope you’re not suffering from semantic satiation.
You’ve seen that concatenative programming is a paradigm like any other, with a real
definition and its own pros and cons.
If you’re interested in trying out a mature, practical concatenative language, check out
Factor and the official blog of the creator, Slava Pestov. Also see Cat for more information
on static typing in concatenative languages.
I’ve been idly working on a little concatenative language called Kitten off and on for a
while now. It’s dynamically typed and compiles to C, so you can run it just about
anywhere. I wanted a language I could use for a site on a shared host where installing
compilers was irritating. That shows you the extent of my language geekery—I’d rather
spend hours writing a language than twenty minutes figuring out how to install GHC on
Bluehost.
Anyway, the implementation is just barely complete enough to play around with. Feel free
to browse the source, try it out, and offer feedback. You’ll need GHC and GCC to build it,
and I imagine it works best on Linux or Unix, but there shouldn’t be any particularly
horrible incompatibilities.
This would also be a good time to mention that I’m working on a more serious language
called Magnet, which I mentioned in my last article about how programming is borked. It’s
principally concatenative, but also has applicative syntax for convenience, and relies
heavily on the twin magics of pattern matching and generalised algebraic data types. Hell,
half the reason I wrote this article was to provide background for Magnet. So expect more
articles about that.
Edit (20 April 2013) The above information is no longer accurate. Kitten is currently being
rewritten; the work that was done on Magnet has been incorporated. Kitten is now
statically typed and, until the compiler is complete, interpreted.
And that about does it. As always, feel free to email me at evincarofautumn@gmail.com
to talk at length about anything. Happy coding!
112 comments:
João Neto February 12, 2012 at 2:16 PM
Great Post! Thanks for writing it.
For example, the 'function' \x could pop the top value of the stack, and bind it to the name x for
the remainder of the expression. Then
The reverse order of popping the arguments could be a bit confusing, but I would say that this
is a clear improvement.
ceii February 12, 2012 at 4:28 PM
Many thanks for writing this! I'm finally starting to feel like being excited about concatenative
languages is something I'm allowed to be proud of as a functional programmer. I'll have to take
a long look at Prog's design; I've always wondered how pattern matching could be combined
with concatenative syntax in a sensible and convenient way.
One major advantage which you didn't mention and which doesn't seem to have been explored
a lot yet: a (statically typed) concatenative language can afford incredible IDE support. The
type of the stack at the cursor's position is everything you need to know about your current
context; just display it in a corner and the programmer has all the info they need. Better yet,
autocomplete for words that can take the current stack as input and you get IntelliSense on
steroids.
I am writing a compiler that targets CIL right now. Thinking of all this in terms of a stack really
made it so much easier to follow. .NET is not completely stack based though as it uses local
variables as well.
Anonymous February 13, 2012 at 3:35 AM
If you'd just called it a 'stack based language' instead of concatenative it'd all have been
massively clearer from the start..
Also, you list the Android phone as having a stack-based interpreter. In fact it's famously
register-based, which is why it is so much faster. It's not a JVM, it's Dalvik.
http://en.wikipedia.org/wiki/Dalvik_%28software%29#Architecture
You're right about Dalvik being register based, but I can't find anything suggesting
that it was done for speed, unless it was actually done to give reasonable speed
without a JIT (and whether that's true or not, I can find no hint that any research was
done). Lua's switch to a register machine is always described in terms of a speed-
centered choice, but it's very clear that they were comparing a modern optimized
register machine versus an antique naive stack machine; not a fair comparison at
all.
A JITted virtual stack machine should be faster on arbitrary CPUs than a register
machine -- the only exception being when the virtual register machine is a very close
match to the physical register machine (after the overhead of virtualization).
The reason I structured this how I did is that the paradigm is not bound to stacks.
There are a lot of high-level concepts that apply to all concatenative languages
whether they use a stack or not. I wanted to demonstrate those concepts first,
before showing that a stack is just an efficient way of implementing them.
It’s like Smalltalk and JavaScript versus Simula and C++: even though the former
pair has prototypes and messages while the latter has classes and methods, they’re
both object-oriented. A concatenative language based on term rewriting or directed
graphs would still be concatenative, so long as it were still based on composition
and quotation.
"Most of the world's printers" are either cheap-ass inkjets or industrial thermal printers, neither
of which use PostScript. Even the vast majority of mid-range business printers are non-
PostScript.
In fact, you'd be hard-pressed to find a PostScript printer that was made in the last 10 years
and isn't a high-end network laser printer.
However, I do wonder about the usefulness of this style for longer programs - I wrote quite a
few medium-sized things on the HP48, and the code was always *intensely* write-only since
you had to have a mental model of exactly what was on the stack where to even hope to follow
it. Not sure how much of that was due to the system discouraging the definition of many small
functions, though (you could only edit one function at a time).
Whether it’s useful for larger programs I think just depends on the programmer and
the program. Concatenative programming demands obsessive factoring. If you don’t
have “many small functions”, you’re doing it wrong, and your work will be harder
than it has to be.
It’s like writing assembly. You can think about everything at the instruction level, with
a mental model (or excessive comments) about what’s where. But you’ll never get
anything done that way, because you’re working at the wrong level of abstraction. If
you’re shuffling the stack a lot, then you’re treating the symptom of a problem you’re
not seeing because you need to step back.
True, but not always. The name of a parameter is often immaterial or generic; most
of the identifiers in programs I’ve read have been largely meaningless syntactic junk.
Where names serve a real purpose is in math expressions, where you expect to do
symbolic manipulation.
That said:
"Concatenative programming is so called because it uses function composition instead of
function application—a non-concatenative language is thus called applicative."
I must observe: at some point, you need to apply your composed functions to your arguments
to get any results at all.
No, that "5 is a function!" talk is just a silly fallacy to make it look overly-sophisticated what is
really just values in a stack consumed by subsequent function calls. I can say the same for any
scheme expression: (+ 1 2 3 4 5) is not then the application of the arguments to the function +,
but the composition of a new function out of + and constant "functions" denoted by numbers... I
can then compose another function from this new function (15) and get yet another "function":
(* 2 (+ 1 2 3 4 5))
No, you can't. You can just fill a stack and let subsequent calls alter it.
In scheme you may return multiple values, but the context must be waiting for them. (values 1
2 3) actually returns 3 values instead of a tuple (list). In the top-level REPL, it returns the
separate 3 values, but if you were to use them, you'd better make a context to consume them:
(call-with-values
(lambda () (values 1 2 3)) ; return multiple values
(lambda (x y z) (+ x y z))) ; bind the values to the arguments x y z
In other words: (+ 1 (sqrt 4)) won't return 2 values, even if sqrt returned 2 values (default sqrt in
scheme returns just the positive value) because its continuation (+ 1 []) expects a single value.
say:
> (define (sqrt2 n)
(let ((pos (sqrt n))) ; original sqrt
(values pos (* -1 pos))))
> (sqrt2 4)
2
-2
> (+ 1 (sqrt2 4))
context expected 1 value, received 2 values: 2 -2
(call-with-values
(lambda () (sqrt2 4)) ; assuming it returns 2 values
(lambda (pos neg) (values (+ 1 pos) (+ 1 neg)))) ; returns 2 values too
it's much simpler to just return lists, which are Lisp's forte anyway, just like stacks are for
concatenative languages. So you just return a list and use the usual functional higher-order
functions to process the result:
All that said, concatenative languages always sound to me like the exact opposite of Lisp as
far as syntax is concerned. :)
Which textual functions compose with which textual arguments? The answer isn't as
cut and dried as you imply -- remember, concatenation (and composition) is
associative. In an applicative language the answer is VERY clear in the text.
Ouch.
"to make it look overly-sophisticated what is really just values in a stack consumed
by subsequent function calls."
You're assuming stack semantics. The author explained that textual evaluation is
another possible semantics; there's no "really just values" there. I believe
www.enchiladacode.nl is a purely rewriting semantics (although that may just be my
bad memory). Apter built a few concatenative languages using things like a stack-
dequeue pair. Most languages entirely WITHOUT a stack are mere toys, but I'm not
sure it will always be so. Oh, and of course, all professional Forths aside from stuff
for hardware stack machines perform register optimizations, which uses a stack
approximately like a C program would.
You're also missing the point. In the _language_, that is in the mapping of text to
semantics, there is no difference between a literal and a function. The semantics of
a literal may be simple; but then again they may not. The point is that a literal like "3"
is the same sort of language (text) object as a function; it's parsed the same way
(even though obviously it's lexed differently).
For higher-order languages, and for real lower-level languages, not all symbols are
so tidy. There has to be a symbol that means "do not execute the following
symbols", which is NOT semantically similar to what a function does.
"I can say the same for any scheme expression: (+ 1 2 3 4 5) is not then the
application of the arguments to the function +, but the composition of a new function
out of + and constant "functions" denoted by numbers... I can then compose another
function from this new function (15) and get yet another "function": (* 2 (+ 1 2 3 4
5))"
You're not performing composition; you're building a tree, not a string. It's not
associative except between siblings on the tree (and that's not a concept that's
directly apparent in the text; you have to convert it to a parsed data structure to see
which things are siblings).
"please..."
Well... It's true. And what's more, because of the associative property, _every
lexically correct substring_ has a valid type, and can be pulled out and made a
function. (Of course, this is only true inasfar as the language is purely concatenative.
All currently used languages offer things like definitions and nested blocks, which
are not associative. I made a language which doesn't have any of those and is
therefore purely concatenative, but you wouldn't want to code in it; its name is
"zeroone", and its two functions are named "zero" and "one". The bits, not the
English numerals.)
"All that said, concatenative languages always sound to me like the exact opposite
of Lisp as far as syntax is concerned. :)"
Sounds fair to me :-). And no parens (unless you want literal lists or quotations).
-Wm
Codingtales February 15, 2012 at 1:21 PM
"No, that "5 is a function!" talk is just a silly fallacy to make it look overly-
sophisticated"
But having the semantics formalised via row polymorphism actually makes the stack an
implementation detail and the elegance has returned!
Forth also prompted me to try Rebol, which I totally enjoyed. I'm starting to see how ideas
evolve and change from language to language, and I think knowing this is really valuable for
any serious developer.
Cheers,
Z-Bo
Now, I don't understand what you mean by "maintaining invariants". What invariants
are you talking about? Is there some context here that I'm missing?
-Wm
Jon Purdy February 15, 2012 at 8:59 PM
It would be, except I established early on how row-polymorphic functions are
uniformly composable. Currying is not at play.
Did I evaluate up to the right step until then? And if so, how does the compose function pair up
numbers?
Start: 1 2 join2.
Expand join2: 1 2 quote swap quote swap compose.
Substitute quote: 1 [2] swap quote swap compose.
Substitute swap: [2] 1 quote swap compose.
Substitute quote: [2] [1] swap compose.
Substitute swap: [1] [2] compose.
Substitute compose: [1 2].
There is a bit of a notational problem here. The type T—one value—is equivalent to
the type () → T—function from 0 values to 1 value—which too is equivalent to ∀A. A
→ (A, T)—function from n to n+1 values. So while the type I gave for “quote” is
accurate, it is also somewhat misleading, which probably led you astray when you
were simplifying “compose”.
1. X = (A)
2. Y = (A, int)
3. Y = (B)
4. Z = (B, int)
And thus the type of “(() → 1) (() → 2) compose” is “() → (int, int)”, as expected.
It looks like in this sense "pointless" means "without referring to the objects being
stored". There's a similarity, but it's a different area of discussion; and note that the
"pointless" proofs use plenty of names.
-Wm
I don't like ‘concatenative programming […] uses function composition instead of function
application’ as a definition. Dropping one operation while promoting another does not mean the
latter is used ‘instead’ of the former; they do a very different job after all.
The statement ‘that function application becomes unnecessary […] makes these languages a
whole lot simpler to build, use, and reason about’ is not true in general – the author himself
gives an example of the opposite, later on in the article.
One interesting question is the relation between concatenativity, stacks, and postfix notation.
The author seems to maintain that the stack is ‘not fundamental’ to concatenativity, but why
then are all real, usable, concatenative languages stack-based?
Also, if being postfix is also not fundamental and ‘there is nothing stopping a concatenative
language from having infix [or prefix] operators’, then why are all concatenative languages
postfix? Can concatenativity be preserved in the presence of non-postfix notation?
A commenter said that ‘semantics […] via row polymorphism […] makes the stack an
implementation detail’, but I see it exactly the opposite: the way it is used, row polymorphism
itself already seems to assume a stack organization (a row is a stack); by employing it to
define composition, the inherent relation of concatenative computation to stacks is only
accented.
In a comment following the article, the author says that ‘some of the distinctions that FP
systems maintain are arbitrary, such as between objects and functions’. In fact, that distinction
in FP is no more consequential than it is in concatenative languages. An FP programme
consists entirely of functions; data objects only enter the picture when the programme is run.
Still further, he says: ‘row polymorphism makes the definition of a complete concatenative
system much cleaner and smaller; you can get away with only two stack combinators’. But of
course two combinators (e.g. S and K) suffice to express any computation in applicative style,
too – this sort of minimalism has nothing to do with row polymorphism or concatenativity.
The statement ‘the Forth language […] started it all’ is inaccurate. Forth is perhaps the most
influential in popularizing stack-based programming, but it was preceded by Pop (currently
known as Pop-11), which was stack-based and, unlike Forth, mostly functional. Pop-11 is still
finding use for AI education and research in UK.
The statement ‘because all we have are functions and composition, a concatenative program
is a single function’ is somewhat misleading, too – the said property holds of any (purely)
functional programme, not just concatenative ones.
The statement ‘most language VMs are essentially concatenative’ needs to be substantiated.
First, there is a respectable set of VMs that are register-based, e.g. LLVM, Lua VM, Parrot,
Dalvik, and HASTE (Falcon's VM). Second, a VM may be stack-based and not concatenative.
The phrase ‘quotation […] allows deferred composition of functions’ is, I believe, incorrect.
Should it not be ‘evaluation’ rather than ‘composition’?
Finally, the phrase ‘our mathematical language has evolved as inherently applicative’ needs to
be made more precise. That language is not only ‘applicative’. It is conspicuously variable-
based, as opposed to function-based (point-free). Functions very rarely are treated as values
in themselves. And, syntactically, infix notation is preferred wherever it applies.
Some OO libraries that make use of this concatenative style, to good effect:
x.doThis().thenThat().andThisOtherThing().butWaitTheresMore()
It reads nicely compared to the functional reverse order:
andThisOtherThing(thenThat(doThis(x)))
You can factor out the middle parts into a new method, to some degree, like:
Anonymous April 14, 2013 at 5:09 AM
I wrote my own JIT/AOT compiler. In AOT mode you can do #exe{} and run JIT code during
compile time. You can insert code into the AOT stream. I did not count on lots of code this way.