Well, maybe not plasma, but rather the liquid metal that the later Terminator models are made of. So what am I talking about here?

Well, consider a Turing machine with its infinite paper tape. It is definitely a one-dimensional programming language, the one closest to the machine, and I guess it's made of, well, paper tape. Not so easy to modify a program; you're probably best off just making a new tape. Other esoteric languages aside, even a computer with only a single instruction can actually be useful, although since it doesn't have infinite memory, perhaps that isn't even fully one-dimensional.

Moving on, the statements and computations in FORTRAN are much easier to specify; we've moved up to a higher dimension/plane, but the language is still essentially one-dimensional, being a linear sequence of statements. Since you can jump around, maybe you could consider it slightly more than one-dimensional, but nowhere near two, and the syntax/layout of the language really doesn't show the increased structure well. FORTRAN is definitely made of cards: you can shuffle them around, remove some and add others.

Even LISP is mostly one-dimensional, a program being, well, a list. To be fair, the program is really a list of lists forming a tree, and the opening and closing parentheses allow a freer use of the two-dimensional text area, so we're much closer to two dimensions, and I suppose you can choose to visualize that in better or worse ways. I'm not sure what LISP is made of, since I haven't used LISP in anger; perhaps movable type? Or do macros make it more like liquid metal, since it can morph its shape? How free is the morphing, and how much does it need to reflect the underlying structure? How brittle is the program under change? What happens if, for example, a parenthesis is misplaced?

Programming languages are generally so badly designed that you could just as well use random characters. "Notation matters", as Brian Kernighan puts it. AWK, for example, was designed to express as directly as possible what you most likely want to do when processing line records in a file. Quorum tries to use scientific evidence to create a language that is easier to program correctly, especially for beginners. I think it was fairly clear from my previous article that syntax is important and does have measurable consequences, but we should also remember that it takes a really long time to become proficient in a new language, so much so that it is almost never worth switching to another language for production purposes. It is probably true, though, that you will become a better developer if you learn a language that makes you think about programming in a different way.

Since I don't have the resources to research the effects of my syntax design for Tailspin, I had this idea of "multidimensional plasma" to try to get a feel for the usability of the language. The pure ideas and constructs of programs can be thought of as existing in multi-dimensional realms, while a programming language needs to project them into a smaller realm that we can manage, usually a two-dimensional text area, with a possible third dimension in the split into several files. The image/idea of dimensionality aims to capture how well the syntax does this. Java, for example, is probably more than two-dimensional, because you can choose to extract some code as a method in the linear sequence of method blocks, or as a class in my favourite refactoring pattern, "Extract method object", which can then expand into a parallel sequence of method blocks.

The image/idea of what the programming language is made of aims to capture how easy it is to create a program and how easy it is to modify/maintain a program over time. I have always thought that the elegance of a program is measured by how easy it is to modify the code later. The same should apply to programming languages. Java, for example, can be thought of as being made of Plasticine because it can keep being reshaped over time, its excellent refactorability even being the basis of a claim that Java programs can be faster than C.

JavaScript seems rather easy to shape, and its proponents claim to be very productive, but any project tends to harden and become unworkable over time, needing a complete rewrite every few years. So JavaScript might rather be made of potter's clay. Interestingly, the facts show that JavaScript is actually less productive (see Table 16). The illusion of productivity is possibly explained by Table 17 in the same study, which shows that you spend much more time battling the language than thinking about the problem you wish to solve. Maybe JavaScript is too loose, more like mud than clay, so you might just be mistaking activity for productivity, monkey-typing until Fortune grants you a working program.

So how does it look for Tailspin? Is it just another of the hundreds of copies of C or LISP that have been produced? Uncle Bob thinks it likely that we have explored the entire space of programming languages and that we should just pick one as the last programming language. He proposes Clojure, which is a good choice from a features perspective, but syntax-wise it is unfortunately still a LISP.

The main idea behind Tailspin is to transform the data in every step, with a declarative syntax for selecting the correct transform and a declarative syntax for producing the new value, like in XSLT. But of course nothing is new under the sun and even if Tailspin calls it "templates", it looks very much like a function with pattern-matching:
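(A sketch with the syntax as it currently stands; the particular matchers are just an illustration.)

```
templates classify
  // match ranges of the input value and emit a string with '!'
  <1..> 'positive' !
  <=0> 'zero' !
  <> 'negative' !
end classify

-3 -> classify -> !OUT::write
// should output 'negative'
```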

And OO programmers have no problem recognizing an object:
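(Purely for comparison, a hypothetical rendering in Java; the class and method names here are invented.)

```java
// Hypothetical Java version of the same classifier, for comparison only.
class Classifier {
    String classify(int value) {
        if (value >= 1) return "positive";
        if (value == 0) return "zero";
        return "negative";
    }
}
// usage: new Classifier().classify(-3) returns "negative"
```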

In fact, a "function" can be an object with state for the duration of the call:
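(Another sketch with the current syntax: "@" is the mutable state belonging to this one call, and "#" sends a value back to the matchers.)

```
templates triangularNumber
  @: 0;              // initialize the per-call state
  $ -> #             // send the input value to the matchers
  <=0> $@ !          // when we reach zero, emit the accumulated state
  <>
    @: $@ + $;       // add the current value to the state
    $ - 1 -> #       // and keep going with the next smaller number
end triangularNumber

5 -> triangularNumber -> !OUT::write
// should output 15
```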

Note that the left-right dimension seems to be extensible separately from the up-down dimension.

Even though I think they are very handy, parser-syntax-functions have been done before.
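A small parser-syntax-function that picks a pair of integers out of a string might look something like this (a sketch; <INT> and <WS> are built-in token rules, and the parenthesized part is matched but not captured):

```
composer coordinates
  { x: <INT> (<WS>), y: <INT> }
end coordinates

'3 4' -> coordinates -> !OUT::write
// should output {x: 3, y: 4}
```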

So what about the sequences of transforms? Here we create an array of squares:
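(With the syntax as it currently stands; !OUT::write is the standard output sink.)

```
[1..10 -> $ * $] -> !OUT::write
// the array of squares of 1 through 10
```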

Not so different from Java streams, perhaps?
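Roughly, in Java:

```java
// needs java.util.List, java.util.stream.IntStream and java.util.stream.Collectors
List<Integer> squares = IntStream.rangeClosed(1, 10)
    .map(i -> i * i)
    .boxed()
    .collect(Collectors.toList());
```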

Perhaps more fair to compare to Julia:
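(If I have the Julia idiom right:)

```julia
squares = [x^2 for x in 1:10]    # or simply (1:10).^2
```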

So let's step it up a bit and require that we also include the number itself after its square.
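Again a sketch with the syntax as it currently stands, using an inline templates (written between "\(" and "\)") that emits two values for each input:

```
[1..10 -> \(
  $ * $ !
  $ !
\)]
// gives [1, 1, 4, 2, 9, 3, ...]
```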

Not so much more work in Java, although we need to change "map" to "flatMap":
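(The same pipeline as before; only the middle step changes.)

```java
// imports as before; map has become flatMap, emitting two values per input
List<Integer> result = IntStream.rangeClosed(1, 10)
    .flatMap(i -> IntStream.of(i * i, i))
    .boxed()
    .collect(Collectors.toList());
```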

But the structure of the statement changes quite a lot in Julia:
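(One way to write it; the comprehension alone no longer suffices:)

```julia
result = collect(Iterators.flatten((x^2, x) for x in 1:10))
```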

It turns out that the transforms in Tailspin differ from functions in that they take exactly one input value and output zero or more values. But nothing is new under the sun: this is very much like a Clojure transducer. So we could possibly implement at least some of Tailspin as Clojure macros, if we think the syntax is worth it. But transducers in Clojure take a function to receive the output values, kind of like the Collectors.toList() in Java threaded through the following stages, and that's not how Tailspin works: currently, every input value passes through a transform before any value goes on to the next transform. So could we decouple the process even more? I'll have to think about it more carefully, but a possible extension to Tailspin could be to execute all input values in parallel for a particular step just by changing "->" to "=>".
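For the squares example, that might hypothetically read:

```
[1..10 => $ * $]
// each input value would be squared in parallel
```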

Or maybe "-|-" could be made to mean that we have a separate thread execute a particular step one value at a time (somewhat like a reactive stream with pushback):
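(Also hypothetical:)

```
[1..10 -|- $ * $]
// the squaring step would run on its own thread, one value at a time
```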

It's still early days, but I'm feeling pretty good about the Tailspin syntax so far. I think it provides a few different dimensions to organize the code along, and at least some parts might turn out to be like liquid metal!