# Low Code is Dead. Long Live Low Code!

An Introduction to the General Theory of Low Code Relativity

# History Rhymes

Technology providers ride the 24-hour hype cycle as much as they drive it - and with that come peaks and valleys in the online signal-to-noise ratio. This week was Microsoft’s “Build” showcase - and with it comes their big marketing push to companies and the broader development community. I usually don’t pay much attention, but I wanted to check out how they pitch the road map for .NET and surrounding tools. It can be useful to see how they’re positioning their wares to developers and the decision-makers who green-light the projects that use them. I was also curious to see how Microsoft would balance playing to their enterprise customers with flexing both their purchased and earned equity in open source.

“History doesn’t repeat itself. But it often rhymes.”
– Mark Twain (apocryphal)

One of the reasons I roll my eyes at these types of events is the inevitable low code product roll-outs. This is a staple of the enterprise so I would normally let it pass without mention, but well - here we are. Over the past few years I’ve worked as a consultant, and a considerable portion of that time has been spent helping companies recover from low code “buyer’s remorse”. And while many of my examples here point to Microsoft, they’re not the only name in the low code game. Far from it. It just happens that the latest offerings read like déjà vu all over again.

*[Image: AI autocomplete for coders. We’d rather see Dr. Nick, to be honest.]*

I remember when Microsoft was straight-faced telling their customers that weaving hand-crafted XML was a valid activity for non-developers. (Looking at you, InfoPath.) Now there’s PowerApps, which at least moves the WYSIWYG interface to FrontPage levels of functionality. Remember when SharePoint Composite promised the ability to create a business solution “without programming”? Their marketing might prefer you forget. They’ve since moved on to Teams as the facade for what they’re calling Project Oakdale, and PowerBI continues to merge with Excel in the form of Power Apps and Power Fx.

## Common Sense or Snake Oil?

I’ll admit that my skepticism toward low code marketing is both anecdotal and fed by long-held sentiment in the developer community that it’s an inherent boondoggle. But aside from arms-length appraisal, I’ve also had to rescue projects from the adverse consequences of poor low code implementations. Is it possible that there are many well-functioning low/no-code systems I never see due to the “Maytag Repairman” effect? Those solutions simply go quietly about their work without notice. But given that Microsoft keeps changing the names of these products while swapping the “face” from Office to SharePoint, then SharePoint to PowerBI, then spreading the joy from PowerBI to Teams, I’ll stick with my thesis for now.

## Cautionary Tales

In one case I saw a company’s “low code” web form that fed into their recruitment system, and the form requested information that was not legal to ask. In another situation a healthcare company had run amok with the data underlying PowerBI reporting, such that a “multi-million-dollar investment” had to be re-tooled from scratch. And at a financial firm I saw a group of traders use SharePoint workflow to play a hidden shell game with positions, which caused the comptroller to throw a fit when they found out about it. While tech companies with enterprise offerings will sell the sunny side of “citizen developers”, I’ve seen enough of the dark side to be reflexively skeptical. The executive presentation may look like virtuous incentives and maximized productivity, but lurking right behind the shiny facade are perverse incentives that give short shrift to good engineering practice and often form the basis for the dreaded “shadow IT”, with the operational and legal risk that follows.

## Low Code On The Down Low

But there are many, many examples of actual low code in service that check all of the boxes; they just lack the shiny facade and slick presentations at trade shows. I was reminded of this earlier this year when reading an article from Steve Smith (aka Ardalis). It echoed some ideas I had about Java and .NET, but it also made a succinct, if inadvertent, case that “low code” is everywhere now.

GraphQL is the new ORM, and your API endpoint is the new stored procedure.

If a group in your company builds an API for your team to use in an internal application, that’s low code. If your company uses a third party service to broadcast messages to a mailing list, that’s low code. If an application makes a request to the open weather API or another public endpoint, that’s low code. Behind each of those examples, and myriad others, there’s work going on behind the scenes that the caller will never know about - and that’s the beauty of it. But there’s a catch. There’s always a catch, and it’s all about context.
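The common thread in those examples is the shape of the boundary: a small request out, a structured response back, and zero visibility into the machinery that produced it. Here’s a minimal sketch of the caller’s side of that arrangement - the endpoint and JSON payload are hypothetical stand-ins, not any real service’s schema:

```python
import json

# Hypothetical payload from a public weather endpoint. The caller only ever
# sees this small structured response -- never the forecasting models,
# databases, or infrastructure behind it.
sample_response = '{"city": "Oslo", "temp_c": 4.5, "conditions": "overcast"}'

def summarize_weather(payload: str) -> str:
    """Turn the service's JSON response into a one-line summary."""
    data = json.loads(payload)
    return f'{data["city"]}: {data["temp_c"]}°C, {data["conditions"]}'

print(summarize_weather(sample_response))  # Oslo: 4.5°C, overcast
```

A dozen lines on the consuming side, and all of the actual complexity lives somewhere else - which is exactly the low code bargain, whether or not anyone markets it that way.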

You still have to “mind the store” around certain base concepts, and recognize the unavoidable cognitive burden of carrying enough understanding of what’s happening in that gray box of functionality to maximize its advantages and limit its pitfalls. And never mind if that system or solution you depend on ships a breaking change that nukes your project. No one selling you a “low code/no code” solution will talk about that, and for good reason: if you knew the total cost of ownership, you’d never buy in, and that works against their interest. But for now I want to set that aside - because the more interesting bit to me is how software engineering today has become first-and-foremost a low code world. And it’s so pervasive that almost no one talks about it in those terms.

## DDD, DSL, ORM & Alphabet Soup

I didn’t recognize it at the time, but early in my career I was fortunate to be part of a project with domain driven design (DDD) elements to its architecture. The senior engineers ensured the core domain was clearly expressed while also abstracting data access and interaction with secondary systems. The code developed its own form of notation - a DSL (domain specific language). And this wasn’t simply for the convenience of having common acronyms to share between contributors. The code structure also shielded new contributors (such as myself) by allowing only certain ways to interact with core functions. Part of that was enforced by a customized compiler and part by group-managed coding standards. And of course the goal was to improve quality and reduce the chance of unwanted behaviors in a highly complex system. Over the years I’ve come to see nearly every large project filtered through that experiential lens. As Ardalis outlines above, whether you’re using an object mapper like Entity Framework or a service interface via REST or GraphQL, they serve as both burden and shield by abstracting away complexity that developers should only have to address when needed.

## General Theory of Low Code Relativity

One of the exercises that brought this home was a recent live stream by Aaron Stannard. I have been reviewing Akka.NET and have been really taken with the actor model. He spent some time reviewing underlying .NET system libraries in the course of his own optimization of ActorPath Uri parsing. While I view the Akka.NET code from a high level, I appreciate having the context for those situations where understanding it more deeply will shed light on the systems I build. But what was really interesting was that Aaron was doing the same thing as he went spelunking through the .NET system libraries - both checking for guideposts and giving a guided tour to those of us watching the live stream.

It really brought home that “low code” is in the eye of the beholder. And more importantly, the body of work surrounding what Aaron was focused on was definitely not low code. There were functional proofs and performance benchmarks, curated over months and years, that provided a solid boundary around what he was working on. It was a substantial body of work that - once he had homed in on the changes he wanted to make - quickly revealed the improvements he had made in performance. This doesn’t happen by accident, and it’s certainly less likely in something labeled “low code” as an eye-catching marketing term. As Aaron worked his way through the .NET system library files, he remarked in passing how there was so much more to it than one would expect - and for what it’s worth, that’s precisely what I thought when I was looking at the Akka.NET code base.

## R U Serious?

I mention in my COVID Datapothecary post how I view R as a domain specific language, which may be met with some controversy in the community. Be that as it may, R is a great example: it attracts a high number of non-developers from academia and is a top choice for statistical modeling and machine learning. R benefits from the inherent context of that constituency, which provides a form of natural “guide rails” - most folks who come to R already have the background to approach the math involved in their domain, and are simply looking for a programmatic means to express it. And because of the broad package support, both for data processing and for operating in specific disciplines, there’s a great deal of well-trod ground for new users to start their journey. With that momentum the R ecosystem continues to grow apace.

- jonesor/Rcompadre: R tools for obtaining and manipulating data from the COMPADRE and COMADRE Plant and Animal Matrix Databases
- saiemgilani/hoopR: An R package to quickly obtain clean and tidy men’s basketball play-by-play data

I’m re-using the example below from another post, but I want to repeat it here to illustrate how this function not only masks its underlying complexity but also protects the user “at the boundaries”. Consider the following single line of code. It processes a column of data (here it’s “new_cases”) and generates a new one (ncrm - my shorthand for “new cases rolling mean”) as part of a chain of functions against a source data set.

  # centered rolling mean over 7 values: 3 before, the current value, and 3 after
  mutate(ncrm = slider::slide_dbl(new_cases, mean, .before = 3, .after = 3))

Below is the formula that this function expresses - worth noting that the two look nothing alike. Above we’re simply trusting that it will do the right thing, which is to take the value at a given position, look “before” and “after” three positions, and provide the mean for the declared range.

$$
\begin{aligned}
\overline{p}_{\text{SM}} &= \frac{p_M + p_{M-1} + \cdots + p_{M-(n-1)}}{n} \\
&= \frac{1}{n} \sum_{i=0}^{n-1} p_{M-i}
\end{aligned}
$$

And it will perform this function as long as there are values in the named source column. Whether it’s one or one billion values, it will keep politely stepping through and performing the calculation until it’s completed, then attach the newly created range of values to the data frame. That’s a bunch of work done by a single line of code.

This seems straightforward enough, but what about the boundaries? What about the first position, where there’s no “before”? Well, the function handles it properly. The same happens as it steps close to the end of the series. Everything is handled in a way that prevents brittle edge cases from surfacing. Users should definitely read the package documentation to understand how it handles those edge cases (such as empty versus null values - in a previous function in the chain I had substituted 0 for any slot with a null or empty value), and once understood it’s a succinct and powerful tool. That new column can now be plotted as a line along with the bars representing the individual daily aggregates, and we have a view of the data that’s become all-too-familiar on the nightly news.
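To make that boundary behavior concrete, here’s a minimal sketch of a centered rolling mean - written in Python for illustration, and emphatically not slider’s actual implementation - where the window simply shrinks near the edges of the series instead of failing or padding with garbage:

```python
def rolling_mean(values, before=3, after=3):
    """Mean over the window [i - before, i + after], clipped to the
    bounds of the series -- so the first and last few positions use
    a shrunken window rather than raising an error."""
    result = []
    for i in range(len(values)):
        lo = max(0, i - before)                 # no "before" at the start
        hi = min(len(values), i + after + 1)    # no "after" at the end
        window = values[lo:hi]
        result.append(sum(window) / len(window))
    return result

# Position 0 has no "before", so its window is just the first 4 values.
print(rolling_mean([2, 4, 6, 8, 10, 12, 14]))
# [5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0]
```

That handful of index arithmetic is exactly the kind of fiddly, easy-to-get-wrong detail the R one-liner absorbs on your behalf.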

The point here is to illustrate that there’s certainly a way to build an algorithm in nearly any language that can calculate a rolling mean. But it’s the ecosystem around R that makes it a particularly adept low code environment for this type of task.