Literate Programming: for DevOps, MLOps and Infrastructure as Code in General

Software developers generally tell me that they don’t have to document much of anything since the code is the documentation: just read it. This is when I reply: the code tells me how you did it (if I’m lucky); the words around the code should tell me why.

With the emergence of Infrastructure as Code (IaC) over the last fifteen years, and of important new specialized developer roles such as DevOps and now MLOps, I will argue that literate programming concepts are becoming more and more important to the software industry.

Infrastructure as Code

In the last fifteen years, we saw the emergence of several Domain Specific Languages (DSLs) to help system administrators manage and provision their infrastructure. Those DSLs revolutionized the way infrastructures were created and cared for.

Infrastructures were now entirely defined in plain text files. Those files could be versioned, they would get special treatment in IDEs, complete infrastructures could be rolled back, etc.

But there is one special characteristic that IaC has: the side effects of the “infrastructure code” are huge, because a single line can provision, or destroy, a huge number of hardware resources, which can have dramatic physical or monetary impacts.

Importance of Documentation

In this context, I argue for the importance of literate programming principles to write and maintain IaC. Some of the DSLs are very opaque: a very small change can have dramatic side effects. IaC also has to deal with the versions of several tens, if not hundreds, of pieces of software that have to work together. IaC creates very complex networks of computers in different regions of the world.

None of that is self-evident in code written in those DSLs. The why needs to be documented very carefully, and the documentation needs to be as close as possible to the DSL code, because every time something changes in the infrastructure, the change needs to be reflected in the text that describes the rationale of that piece of infrastructure. And finally, both the how and the why will be carefully peer reviewed in a pull request.

Org-mode: Agnostic Literate Programming Framework

Considering that there exists a specific DSL per framework (Terraform, Ansible, Docker, Puppet, Chef, etc.), it is important to have a literate programming framework that is language agnostic (unlike CWEB, nbdev, etc.). This is why I strongly support Org-mode. If the DevOps/MLOps developers work within Emacs, they have all the power of all the major modes already existing in Emacs to manipulate the code of those DSLs within the code blocks. If they don’t, they can always use their favorite IDE with an Org-mode command line utility like OrgWeb.

A Few Examples

Let’s take a look at what it looks like in the wild. Here are two examples, one that describes a Dockerfile and one that describes a series of Ansible playbooks, both properly rendered on GitHub.

Literate Dockerfile

The first example is orgweb’s Docker.org file, where the Dockerfile is generated. As you can see, everything of importance about the generated Docker image is stated: the version of Alpine Linux used, which version of Emacs it ships with, the reason why we chose Alpine in the first place, etc.

Then it explains why the Dockerfile needs to install the ttf-dejavu package, and what happens if it is not installed. It then explains why the install.el and site-start.el files are being copied over and what they are used for. And finally, why install.el is being run while building the image.

Literate Ansible

Here is another example from a project I stumbled upon recently. This repository is a set of Ansible playbooks to provision a series of infrastructure resources such as a Docker registry, Longhorn, etc.

OrgWeb: CLI Org-mode Environment for WEB-like Development without Emacs

There is a wide range of tools and frameworks currently available for doing literate programming development. You have the ancestors like CWEB, NOWEB and nuweb. You have full editors like Leo. And then you have more modern approaches like nbdev, PyWebTool and FSharp.Formatting.

However, most of them are specific to a programming language. Some of them are general, like NOWEB, but they lack integration with modern IDE environments.

For the last eight years, I have always fallen back to the same tool: Org-mode.

Org-mode

Org-mode is many things, but its most interesting feature has always been its code blocks to me. Org’s syntax is clean and powerful. Org is not specific to a particular programming language: it supports dozens of programming languages and other kinds of configuration/scripting languages. Code blocks can be executed, tangled or woven.

Its drawback: the best (and frankly only) Org-mode implementation is in Emacs. Some, myself included, will say that is great because we love working with Emacs and are happily willing to pay the cost. But we are the exception, not the norm. Emacs is wonderfully different and it doesn’t appeal to all developers. I can understand that in today’s industry, where the only incentive is to ship, ship, ship features.

But how could we get the best of Org-mode without forcing people to use Emacs? One possibility could be to develop Org-mode plugins for other IDEs; the first on the list would most likely be VS Code. But this is not a small undertaking.

Org-mode CLI

There are existing modules in other editors that support Org-mode, such as in Vim, VS Code, etc. But those are mostly syntax highlighters, or implement features mostly related to org-agenda and heading manipulation. It is a good start, but far from enough for a literate programming framework.

The goal of OrgWeb is to develop a simple tool that any developer can use to leverage the full power of literate programming with Org-mode from their preferred IDE.

OrgWeb

OrgWeb is a simple CLI tool that can be installed using this command:

pip install orgweb

The tool only has four commands:

  1. tangle: extract code from code blocks into their source files
  2. detangle: sync source files back to their original Org-mode code blocks
  3. execute: execute code blocks such that they produce their side effects
  4. monitor: monitor local file system to tangle/detangle files automatically

The tangle, detangle and execute commands can be performed on a folder (recursively) or one or multiple specific files.

Note: I am not covering all the details of how we can use Org-mode to do literate programming. You can search my blog, which has plenty of posts about that, but also refer to the Org-mode documentation to read about each and every feature available to you.

In addition to the orgweb CLI, you will need Docker available in your environment. If it is not already installed, you can follow these instructions to install it on your system.

VS Code + Org-mode

For this blog post, I will cover how Org-mode can be used in conjunction with VS Code to develop an application using literate programming. To start, you can simply clone OrgWeb’s repository and install this Org-mode module in VS Code. The general development layout is the following:

The bottom window is where the terminal instances live; this is where OrgWeb commands happen. The main edit window is where the Org files, or the tangled source files, are manipulated.

You can notice that the Org-mode VS Code module does some basic syntax highlighting, even within the code blocks, using Python’s syntax highlighter. This is more than enough to easily understand and follow the Org files.

Tangling

Tangling is the action of extracting code blocks from a literate file into its executable source code file.

Once ready to tangle the Org file, this command will tangle that specific file:

orgweb tangle . --file main.org

It asks OrgWeb to tangle the current directory . but to only tangle the main.org file. It will find all the Org files recursively, and tangle only the ones specified. If no files are specified, it will tangle all the Org files it finds. Then the main.py file will be generated from all the code blocks from main.org.
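For reference, here is a minimal sketch of what an Org file set up for tangling can look like (hypothetical content, not the actual main.org); the :tangle header argument tells Org which source file each code block belongs to:

* Main entry point

This section explains why the entry point is structured this way.

#+begin_src python :tangle main.py
def main():
    # the explanatory text lives in the Org file; the code gets tangled to main.py
    print("Hello from a tangled code block")

if __name__ == "__main__":
    main()
#+end_src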

Detangling

Developers will often end up working on the source files that have been generated from Org files. There are all kinds of reasons for that, such as modifying a source file while debugging an application. When this happens, the literate Org files and the source files get desynchronized. Changes could be copied and pasted back into the Org files, but there is a much easier way to do it: detangling.

Detangling synchronizes tangled code blocks from source files back into their original Org file:

orgweb detangle . --file main.py

It asks OrgWeb to detangle the current directory . but to only detangle the main.py file. It will find all the relevant files recursively, and detangle only the ones specified. Then the code blocks of main.org will be updated from the content of main.py.

Executing

The execute command is like the tangle command, but instead of moving code into source files, it executes the code blocks that need to be executed. Code blocks that get executed produce side effects. It is those side effects that we want to force with the execute command.

One example is the PlantUML code blocks in the README.org file. When we execute them, the schema images get generated.
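Such a block, set up with Org-babel’s PlantUML support, looks roughly like this (a hypothetical sketch; the file name and diagram content are illustrative), and can be executed with a command like orgweb execute . --file README.org:

#+begin_src plantuml :file images/architecture.png
@startuml
Developer -> OrgWeb : tangle / detangle / execute
OrgWeb -> Docker : run Emacs in a container
@enduml
#+end_src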

Monitoring

This is all good, but it is still inconvenient to have to run commands in the terminal every time you want to tangle or detangle some files.

This is why the monitor command exists:

The orgweb monitor . command will keep monitoring the specified folder. Every file that changes within that folder (recursively) will potentially be tangled or detangled by the running orgweb instance. If a .orgwebignore file exists in the target folder, then everything within that file will be ignored by the monitoring process.

In the envisioned development workflow, developers simply run the monitoring in the background so that every Org and source file automatically gets tangled or detangled every time it is saved. That way, developers can be sure that both sets of files are always in sync.

How does it work?

As we know, orgweb is designed so that developers can use all the power of Org-mode, in any IDE they like, without having to rely on Emacs directly.

To do that, it leverages Docker to build an image where Emacs is properly installed and configured to implement the commands that are exposed via the command line tool.

If the orgweb Docker image does not currently exist in the environment, orgweb will request Docker to build the image using the Dockerfile. The build process will install and configure all the components required to implement all orgweb commands.

orgweb checks whether the image exists every time it is invoked from the command line; the build process happens any time the image is not available in the environment.

If the image exists in the environment, then the following happens.

orgweb will ask Docker to create a container based on the image. Once the container is running, it will execute a command on the container’s terminal to run Emacs. Emacs is used directly from the command line by evaluating ELisp code it gets as input.

Every time an orgweb command is executed, a new container is created, and when the command finishes, the container gets deleted.
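Conceptually, the tangle case boils down to something like the following (a sketch, not the exact invocation orgweb uses; the image name and mount point are illustrative):

docker run --rm -v "$PWD":/app orgweb \
  emacs --batch --eval '(progn (require (quote org)) (org-babel-tangle-file "/app/main.org"))'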

Other possible avenues

OrgWeb works fine, but it won’t ever be as interesting as a proper IDE integration like what is available in Emacs. Another option worth investigating would be to use Emacs as an Org-mode backend behind an LSP server. That way, IDE module developers could more easily develop fully fledged Org-mode modules for specific IDE integrations, and we could “easily” get the full power of Org-mode within any IDE, being able to leverage not only code blocks but also org-agenda, org-roam, tagging, date and time handling, org-capture, etc.

Literate Programming at the dawn of LLMs

Since the beginning of the year, the industry’s main focus seems to revolve around “prompting.” We’ve seen the emergence of new job titles, new job descriptions, and even the introduction of “prompting wizards,” all of which are essentially part of branding and marketing strategies.

Prompting involves articulating a problem and providing clear instructions in the hope that the person or system reading it will produce the intended outcome. The recent shift lies in the recipient of these instructions: rather than a person taking action to solve the problem and follow the instructions, it’s now a thing (currently some form of AI model) that carries out the task.

What I find amusing, after 20 years of professional experience in software development and engineering management, is that we’re finally getting engineers to generate a substantial amount of text instead of solely focusing on writing code. This appears to signify quite a significant paradigm shift to me.

Prompting and Literate Programming

I recently had something of an epiphany while investigating the current state of Literate Programming: could Literate Programming not become a powerful software development paradigm with the advent of LLMs?

I mean, for 39 years, literate programmers have been essentially doing just this: “prompting” their software development. They have been describing their problems and outlining instructions before implementing the actual code, often in the format of a book or notebook. The only difference is that they were the ones doing 100% of the coding afterward (either themselves or with the help of an implementation development team).

Intuitively, it seems that this same format and these same skills are precisely what’s needed to best leverage LLMs in coding computer software. LLMs will undoubtedly become very effective tools, but they are just that: tools that need to be learned, experimented with, and mastered to extract the best results from them.

GitHub’s Copilot

In this blog post, I aim to explore how literate programming can influence and enhance the utilization of LLMs. The current leading LLM tool for software developers is undoubtedly GitHub’s Copilot, integrated into VS Code. It boasts three main features:

  1. Code completion
  2. Completions Panel (providing up to 10 distinct auto-completion suggestions)
  3. Chat (recently made available to the general public)

With all of these capabilities integrated into an IDE like VS Code, it forms a package that significantly accelerates the software development process.

The next question arises: will Copilot grasp, and potentially benefit from, the literate programming process in the suggestions it provides? This is what I’m aiming to explore – to observe how it reacts, what proves effective, and what may not.

To put it to the test, I’ve developed a straightforward command-line tool in Python designed to function as a basic calculator. The remainder of this post comprises a series of screenshots accompanied by my comments at each step.

literate-copilot

Before diving in, I still needed to create a new GitHub project, to use nbdev_new to create a new nbdev project, and then to configure it.
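For reference, the bootstrap steps look roughly like this (a sketch of a typical nbdev setup; the repository name matches this project, but the exact sequence I used may have differed slightly):

pip install nbdev
git clone https://github.com/<your-user>/literate-copilot.git
cd literate-copilot
nbdev_new
nbdev_install_hooks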

Before starting to develop the CLI tool, I wanted to see if GitHub Copilot was self-aware of its own capabilities:

It’s hard to discern from this interaction whether it’s generating content or not, but at the very least, it seems promising. Let’s see if we can further explore this level of contextual awareness.

The initial step I took was to compose the introduction for the tool, right here in this Jupyter notebook. It outlines the purpose of the tool and the extensive list of calculator operations we aim to implement. I obtained the imports from the prior interaction with Chat. I manually added typer as this is the library I intend to use for building the command-line utility.

Following that, I proceeded to discuss creating a Typer application and its functionalities, etc. In the subsequent code block, I deliberately refrained from writing anything, as I didn’t want Copilot to auto-generate code within this block. I was interested in evaluating if it had an understanding of the entire notebook’s context, not just within a specific code block. This is why I opened the Suggestions Panel to assess if it would suggest anything relevant given the current context.

What I received was particularly interesting, as the initial suggestion aligns perfectly with the next step. It overlooks the #| export nbdev instruction, but that’s perfectly acceptable, as it’s rather obscure.
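The cell I ended up with looks roughly like this (a reconstruction for illustration; the #| export nbdev directive is the part I added by hand):

#| export
app = typer.Typer()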

Next, I began detailing the subsequent steps by creating a new Markdown cell. At this point, Copilot’s auto-completion capabilities come into play. This is particularly interesting, as it essentially anticipates what I was about to write, drawing from the extensive list of calculator commands I plan to implement. In this case, it starts with the first command on that list, which is addition. This suggests to me that it leverages the entire notebook as the context for its suggestions.

For context, here is the full list of operations we want to implement:

However, this was actually not the first command we wanted to implement. The first one we wanted to implement is the version command of the command-line tool, which displays the version to users if they ask for it.
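A minimal version command with Typer looks something like this (a sketch for illustration; the version string is hypothetical):

#| export
@app.command()
def version():
    """Print the version of the tool"""
    print("literate-copilot 0.0.1")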

Then the next step is to start implementing the long list of calculator operations, starting with addition:

Why was the quiet parameter suggested? To dig a bit further into its thought process, I decided to open the Completions Panel. Suggestion 3 sheds light on what it had in mind. However, for a basic calculator, this isn’t very useful since the outcome of adding two numbers is quite straightforward. I’ll go ahead and accept it.
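The accepted suggestion was roughly along these lines (a reconstruction for illustration, not Copilot’s verbatim output):

#| export
@app.command()
def add(x: float, y: float, quiet: bool = False):
    """Add two numbers together"""
    result = x + y
    if quiet:
        print(result)
    else:
        print(f"The sum of {x} and {y} is {result}")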

Now, let’s compile this command-line application to ensure it functions as intended:
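With nbdev, this amounts to exporting the notebook cells into the Python package and installing it locally, roughly like so (a sketch of a typical nbdev workflow, not necessarily the exact commands used here):

nbdev_export
pip install -e .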

By blindly accepting the code proposed by Copilot, here is how the add command works:

Let’s see if it works as intended:

Yes, it does. It’s not the most convenient method for adding two numbers; it’s a bit complex and verbose, but it will suffice for now.

Afterwards, I added the entire list of operators in the same manner, by appending code block after code block, and it successfully implemented each of them. There was a point around number 7 or 8 where it lost track of the order, but simply starting to type the right term got it back on track. For example, typing def si got it to continue with defining the sin function accordingly. Here is the current list that has been implemented so far:

Adding Tests

Now that we have all these functions, I’d like to give Copilot a try at generating tests for each of them. To do this, I posed a very simple question to the newly released version of Copilot Chat while having the 00_main.ipynb file open:

I would like to add tests for each of those commands.

By “those commands”, I was referring to what was currently displayed in the Workspace on my right, hoping that it would contextualize the request within the Workspace. The result Chat provided me with is:

It is even aware that it is missing some from the list described in the introduction and continues to list them, starting at the right place (divide):

As you can see, it is fully aware of the context. It will produce one test per command, understanding that the commands print output to the terminal and that the functions do not return actual numbers. It will also attempt to use a CliRunner to execute the tests. While it doesn’t work out of the box, it’s certainly a step in the right direction.
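The generated tests follow roughly this pattern (a reconstruction for illustration; the module path is hypothetical, and, as noted, the tests need some adjustments before they pass):

from typer.testing import CliRunner

from literate_copilot.main import app

runner = CliRunner()

def test_add():
    result = runner.invoke(app, ["add", "2", "3"])
    assert result.exit_code == 0
    assert "5" in result.stdout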

Conclusion

This concludes the tests. It’s clear that Copilot is aware of a Workspace and contextualizes its suggestions accordingly. When working in a Jupyter notebook, it takes into account every code block.

This little experiment suggests to me that adopting a literate programming workflow and its principles can lead to better and more effective suggestions from LLMs like Copilot.

For thousands of years, humans have been expressing their thoughts in a sequential manner, from top to bottom. We’ve developed highly effective systems to organize these writings (you can explore the BIBO ontology for a glimpse into this). These systems have evolved and been refined up to the present day.

To me, this is the essence of Literate Programming. It’s about developing computer software in a more natural, thoughtful, and systematic human way.

Not many people in the industry share this perspective. However, what I’ve begun to explore in this blog post is how LLMs, along with integrated tools like GitHub’s Copilot, could potentially shift that perception, and how Literate Programming could emerge as one of the top programming frameworks for effectively utilizing tools like Copilot.

Profiling Python Code in Jupyter while doing Literate Programming with nbdev

As you may know if you have followed this blog in the last few weeks, I started experimenting with literate programming in Python using nbdev. This means that most of the Python code I write today is first written in a Jupyter notebook (in VS Code), and eventually makes its way into a .py module file.

Oftentimes, I like to profile a function here and there to better understand where execution time is spent. I do this as part of my normal development process, without thinking about premature optimization, but just to better understand how things work at that point.

This week I wanted to understand what would be the easiest way to quickly profile a function written in a Jupyter Notebook, without having to tangle the code blocks and work at the level of the .py module.

Line Profiler

The solution that worked best for me with my current workflow is to use the line_profiler Python library. I won’t go into the details of how it works internally, but I will simply show an example of how it can be used and how it exposes the results.

Let’s start with the code. Here is a piece of code I am currently working on, which I will most likely release next week; it is related to a small experiment I am doing on the side.

What this code does is read an RSS or Atom feed from the local file system, parse it, and return a Feed namedtuple and a list of Article namedtuples. Subsequently, those will be used down the road to easily get into a SQLite database using executemany().

Each of those blocks is an individual code block within the notebook, with explanatory text in between, which I omit here.
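For completeness, the notebook’s earlier cells import the dependencies these blocks rely on; they look roughly like this (the actual cells, and the prose around them, are omitted from this excerpt):

import re
import datetime
from collections import namedtuple

import feedparser
from langdetect import detect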

from line_profiler import profile

@profile
def detect_language(text: str):
    """Detect the language of a given text"""

    # remove all HTML tags from text
    text = re.sub('<[^<]+?>', '', text)

    # remove all HTML entities from text
    text = re.sub('&[^;]+;', '', text)

    # remove all extra spaces
    text = ' '.join(text.split())

    # return if the text is too short
    if len(text) < 128:
        return ''

    # limit the text to 4096 characters to speed up the 
    # language detection processing
    text = text[:4096]

    try:
        lang = detect(text)
    except:
        # if langdetect returns an errors because it can't read the charset, 
        # simply return an empty string to indicate that we can't detect
        # the language
        return ''

    return lang

Feed = namedtuple('Feed', ['id', 'url', 'title', 'description', 'lang', 'feed_type'])
Article = namedtuple('Article', ['feed', 'url', 'title', 'content', 'creation_date', 'lang'])

def parse_feed(feed_path: str, feed_id: str):
    parsed = feedparser.parse(feed_path)

    feed_title = parsed.feed.get('title', '')
    feed_description = parsed.feed.get('description', '')

    feed = Feed(feed_id,
                parsed.feed.get('link', ''),
                feed_title, 
                feed_description,
                detect_language(feed_title + feed_description),
                parsed.get('version', ''))

    articles = []
    for entry in parsed.entries:
        article_title = entry.get('title', '')
        article_content = entry.description if 'description' in entry else entry.content if 'content' in entry else ''
        articles.append(Article(entry.get('link', ''),
                                feed_id,
                                article_title,
                                article_content,
                                entry.published if 'published' in entry else datetime.datetime.now(),
                                detect_language(article_title + article_content)))
    return feed, articles

Let’s say that we want to profile the detect_language() function when calling the parse_feed() function. To do this, the first thing we did is to decorate the detect_language() function with the @profile decorator imported via from line_profiler import profile. Once this is done, we have to load the line_profiler external library using the %load_ext magic command in Jupyter. To do this, we simply have to create the following Python code block and execute the cell to load the module in the current running environment:

%load_ext line_profiler

Once it is loaded, we can create another Python code block that will execute the %lprun command which is specific to Jupyter:

%lprun -f detect_language parse_feed('/Users/frederickgiasson/.swfp/feeds/https---fgiasson-com-blog-index-php-feed-/13092023/feed.xml', 'https---fgiasson-com-blog-index-php-feed-')

Once this cell is executed, line_profiler will run and the profiling of the detect_language() function will occur. Once finished, the following output will appear in the notebook:

Timer unit: 1e-09 s

Total time: 0.215358 s
File: /var/folders/pz/ntz31j490w950b6gn2g0j3nc0000gn/T/ipykernel_65374/1039422716.py
Function: detect_language at line 3

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     3                                           @profile
     4                                           def detect_language(text: str):
     5                                               """Detect the language of a given text"""
     6                                           
     7                                               # remove all HTML tags from text
     8        11     136000.0  12363.6      0.1      text = re.sub('<[^<]+?>', '', text)
     9                                           
    10                                               # remove all HTML entities from text
    11        11      78000.0   7090.9      0.0      text = re.sub('&[^;]+;', '', text)
    12                                           
    13                                               # remove all extra spaces
    14        11     118000.0  10727.3      0.1      text = ' '.join(text.split())
    15                                           
    16                                               # return if the text is too short
    17        11      15000.0   1363.6      0.0      if len(text) < 128:
    18         1          0.0      0.0      0.0          return ''
    19                                           
    20                                               # limit the text to 4096 characters to speed up the 
    21                                               # language detection processing
    22        10      12000.0   1200.0      0.0      text = text[:4096]
    23                                           
    24        10       6000.0    600.0      0.0      try:
    25        10  214980000.0    2e+07     99.8          lang = detect(text)
    26                                               except:
    27                                                   # if langdetect returns an errors because it can't read the charset, 
    28                                                   # simply return an empty string to indicate that we can't detect
    29                                                   # the language
    30                                                   return ''
    31                                           
    32        10      13000.0   1300.0      0.0      return lang

As we can see, most of the time is spent detecting the language using langdetect.

Conclusion

It is as simple as that, thanks to line_profiler, which is simple, effective and well integrated with Jupyter. This is perfect for quickly profiling some code on the fly.

ReadNext 0.0.4: Local Embedding Model

I just released ReadNext version 0.0.4. The primary goal of this new version is to remove the dependency on the Cohere Embedding web service endpoint by using a local embedding model by default. To enable that, ReadNext got integrated with Hugging Face and currently uses the BAAI/bge-base-en model.

Local vs. Remote

This change removes the dependency on one external service, which makes ReadNext more stable. The processing time is a little longer with the local model, but it also depends on the capabilities of your local computer.

In terms of performance, the two systems are comparable. In my experience, about 80% of the propositions are the same, and the remaining 20% that differ yield no major difference in accuracy. However, I slightly prefer the BAAI/bge-base-en propositions based on what I have experienced so far.

You may want to experiment with both to see what works best for you. The only thing you have to do is to change the EMBEDDING_SYSTEM environment variable and to reload your terminal instance.

New Configurations

Two new configuration options have been added to this version:

  1. EMBEDDING_SYSTEM: This is the embedding system you want to use. One of: BAAI/bge-base-en (local) or cohere.
  2. MODELS_PATH: This is the local path where you want the model files to be saved on your local file system (e.g., /Users/me/.readnext/models/)

If you already have ReadNext installed on your computer, please make sure to add those two new environment variables to your environment.
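For example, in your shell configuration this could look like the following (the values are the ones from the list above; adapt the path to your setup):

export EMBEDDING_SYSTEM="BAAI/bge-base-en"
export MODELS_PATH="/Users/me/.readnext/models/"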

New Commands

Two new commands have been added as well. They have been added to help understand the current status of the ReadNext tool. Those two commands are:

  1. readnext version: this gives the version of ReadNext that you are currently using
  2. readnext config: this gives the configuration parameters, and their values, currently used to run that instance of ReadNext

Literate Programming

While at it, I decided to migrate ReadNext’s Python codebase to nbdev to continue its development using literate programming.

All the literate files (notebooks in this case), where the code is tangled from and the documentation is woven from, are accessible in the nbs folder. The tangled codebase is available in the readnext folder. Finally, the woven documentation is available as GitHub pages here.