Data Reliability Engineering

I am happy to share one of the things I have been working on since I started at Dayforce. What is that thing?

Data Reliability Engineering

I had the opportunity to put in place a new functional area called Data Reliability Engineering. This may sound good, but you may wonder what it is all about.

Data Reliability Engineering (DRE) can be seen as a child of Site Reliability Engineering (SRE). The foundation of DRE is SRE. Organizationally speaking, we embedded DRE in the SRE organization at Dayforce.

DRE is SRE for Machine Learning and Data systems.

A DRE team focuses on, and is responsible for, ensuring that data pipelines, storage, and retrieval systems are reliable, robust, and scalable. It borrows principles from software engineering, DevOps, and site reliability engineering (SRE) and applies them to data-intensive systems.

The goal of the team is to ensure that data, which is a critical business asset, is consistently accurate, available, and delivered on time for processes such as auditing, machine learning model training, and analysis, and to stakeholders such as data scientists, ML engineers, data analysts, etc.

A DRE team makes sure that the right Data Service-Level Indicators (DSLIs) are in place, and that the Data Service-Level Objectives (DSLOs) and Agreements (DSLAs) are respected and constantly monitored. It also helps automate data movement, increase the observability of data pipelines and data systems, manage incidents affecting data availability, and support teams with all of the above.
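To make that more concrete, here is a minimal, hypothetical sketch of what a data freshness DSLI checked against a DSLO could look like (the table, the 60-minute objective, and the function names are illustrative assumptions, not actual Dayforce definitions):

from datetime import datetime, timezone

# Hypothetical DSLO: the orders table should never be more than 60 minutes stale.
FRESHNESS_DSLO_MINUTES = 60

def freshness_dsli(last_loaded_at: datetime) -> float:
    """DSLI: minutes elapsed since the last successful load (expects an aware UTC datetime)."""
    return (datetime.now(timezone.utc) - last_loaded_at).total_seconds() / 60

def freshness_dslo_met(last_loaded_at: datetime) -> bool:
    """True when the DSLO is respected; False means the table is stale and an alert should fire."""
    return freshness_dsli(last_loaded_at) <= FRESHNESS_DSLO_MINUTES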

Overall, it ensures that the data used to generate analytics reports, machine learning models, or any Dayforce feature is accurate, reliable, and available on time.

A data reliability engineer (DRE) is a professional responsible for implementing and managing data reliability engineering principles. They act as the guardians of data integrity and availability within the organization.

The DRE team acts as a trusted advisor for the company, actively participating in data platform infrastructure design and scalability considerations.

Move Fast by Reducing the Cost of Failure

DRE helps teams move fast by reducing the cost of failure of Machine Learning and Data projects. Some will say that it makes for a slow start, but it pays off in the long run. We focus on long-term development velocity, not short bursts of work to ship features.

DRE (and SRE) helps improve the product development output.

How? By reducing the MTTR (Mean Time To Repair). That way, developers do not have to waste time cleaning up after issues. The further down the road we discover bugs, the more expensive they are to fix.

Reliability teams are not here to slow projects down; quite the opposite: they are here to improve long-term velocity while increasing reliability.

Data Engineer vs. Data Reliability Engineer

Data Engineers are responsible for developing data pipelines and appropriately testing their code.

Data Reliability Engineers are responsible for supporting the pipelines in production by monitoring the infrastructure and data quality.

In other words, Data Engineering teams usually perform unit and regression tests that address known or predictable data issues before the code goes to production. DRE teams instrument the production environment to detect unknown problems before they impact end users.
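As a toy illustration of that distinction, a unit or regression test asserts a known expectation before deployment, while a production-side check such as this hypothetical row-count monitor is meant to catch problems nobody predicted (the 50% tolerance is an arbitrary assumption):

def row_count_anomaly(recent_counts: list, todays_count: int, tolerance: float = 0.5) -> bool:
    """Flags today's load when its row count deviates too much from the recent average."""
    baseline = sum(recent_counts) / len(recent_counts)
    return abs(todays_count - baseline) > tolerance * baseline

# Example: the last few loads averaged ~100k rows, so a 10k-row load today is flagged.
print(row_count_anomaly([98000, 101000, 99500, 100200], 10000))  # True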

What do we do?

DRE teams have the goal of setting and maintaining standards for the accuracy and reliability of production data, while enabling velocity for data, analytics, and machine learning engineers. The DRE team does more than just react to machine learning and data outages; it is in charge of preemptively identifying and fixing potential problems, producing automated ways of testing and validating data, automatically detecting PII (Personally Identifiable Information) in different areas of the ecosystem, etc.
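To give a hedged idea of what "automated ways of testing and validating data" can look like, here is a minimal sketch of a rule-based scan that flags likely PII (email addresses and US-style SSNs) in free-text records; a real detector would rely on much more than two regular expressions:

import re

# Illustrative patterns only; production PII detection needs far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def detect_pii(records: list) -> dict:
    """Counts how many records match each PII pattern."""
    counts = {name: 0 for name in PII_PATTERNS}
    for record in records:
        for name, pattern in PII_PATTERNS.items():
            if pattern.search(record):
                counts[name] += 1
    return counts

print(detect_pii(["contact: jane@example.com", "ssn: 123-45-6789", "nothing to see here"]))
# {'email': 1, 'ssn': 1}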

Areas that DREs would have purview over include:

  • Data lifecycle procedures (e.g., when, and how data gets deprecated)
  • Data SLI (Service Level Indicator), Data SLA (Service Level Agreement), and Data SLO (Service Level Objective) definition and documentation
  • Data observability strategy and implementation
  • Data pipeline code review and testing
  • Helping with the automation of data movement
  • Helping with the management of data incidents
  • Data outage triage and response process
  • Automating data related processes in the infrastructure to constantly remove toil
  • Data ownership strategy and documentation
  • Education and culture-building (e.g., internal roadshow to explain data SLAs)
  • Developing guardrails around data processes to increase data reliability, availability, and privacy
  • Monitoring costs of data activities (pipelines, storage, compute, network, etc.)
  • Tracking the lineage of the data
  • Performing change management when data tooling changes
  • Ensuring cross-team communication regarding data activities
  • Ensuring PII (Personally Identifiable Information) is properly handled in the data ecosystem
  • Ensuring the business is compliant with all regulations regarding data (e.g., GDPR)
  • Ensuring that the Machine Learning models are versioned, reproducible, evaluated, monitored, and comply with overall software engineering best practices

DRE teams do not just put out fires. They put the guardrails in place to prevent the fires from happening in the first place. They enable agility for ML engineers, analytics engineers, and data scientists, keeping them moving quickly in the knowledge that guardrails are in place to prevent changes to the data model from impacting production. Data teams are always balancing speed with reliability. The Data Reliability Engineer owns the strategies for achieving that balance.

Literate Programming: for DevOps, MLOps and Infrastructure as Code in General

Software developers often tell me that they don’t have to document much of anything since the code is the documentation: just read it. This is when I reply: the code tells me how you did it (if I am lucky); the words around the code should tell me why.

With the emergence of Infrastructure as Code (IaC) over the last 15 years, and of important new specialized developer roles such as DevOps and now MLOps, I will argue that literate programming concepts are becoming more and more important to the software industry.

Infrastructure as Code

In the last fifteen years, we saw the emergence of several Domain Specific Languages (DSLs) to help system administrators manage and provision their infrastructure. Those DSLs revolutionized the way infrastructures were created and cared for.

Infrastructures were now entirely defined in plain text files. Those files could be versioned, they received special treatment in IDEs, complete infrastructures could be rolled back, etc.

But there is one special characteristic of IaC: the side effects of the “infrastructure code” are huge, because a single line can provision, or destroy, a huge number of hardware resources, which can have dramatic physical or monetary impacts.

Importance of Documentation

In this context, I argue for the importance of literate programming principles when writing and maintaining IaC. Some of the DSLs are very opaque: a very small change can have dramatic side effects. IaC also has to deal with the versions of several tens, if not hundreds, of pieces of software that have to work together. IaC creates very complex networks of computers in different regions of the world.

None of that is self-evident in code written in those DSLs. The why needs to be documented very carefully, and the documentation needs to be as close as possible to the DSL code, because every time something changes in the infrastructure, the change needs to be reflected in the text that describes the rationale of that piece of infrastructure. And finally, both the how and the why can be carefully peer reviewed in a pull request.

Org-mode: Agnostic Literate Programming Framework

Considering that there exists a specific DSL per framework (Terraform, Ansible, Docker, Puppet, Chef, etc.), it is important to have a literate programming framework that is language agnostic (unlike CWeb, nbdev, etc.). This is why I strongly support Org-mode. If the DevOps/MLOps developers work within Emacs, they have the power of all the major modes already existing in Emacs to manipulate the code of those DSLs within the code blocks. If they don’t, they can always use their favorite IDE with an Org-mode command line utility like OrgWeb.

Few Examples

Let’s take a look at what it looks like in the wild. Here are two examples, one that describes a Dockerfile and one that describes a series of Ansible playbooks, both properly rendered on GitHub.

Literate Dockerfile

The first example is orgweb’s Docker.org file, from which the Dockerfile is generated. As you can see, everything of importance about the generated Docker image is stated: the version of Alpine Linux used, which version of Emacs it ships with, the reason why we chose Alpine in the first place, etc.

Then it explains why the Dockerfile needs to install the ttf-dejavu package, and what happens if it is not installed. Then it explains why the install.el and site-start.el files are being copied over and what they are used for. And finally, why install.el is run while building the image.

Literate Ansible

Here is another example from a project I stumbled upon recently. This repository is a set of Ansible playbooks to provision a series of infrastructure resources such as a Docker registry, Longhorn, etc.

Impact of Careful Naming when Using GitHub Copilot

Today, I continue my investigation of how I can better leverage tools such as GitHub Copilot, and their impact on the work of software developers. I recently investigated how such tools can benefit from Literate Programming methodology.

In this new post, I am investigating the importance of carefully naming functions, parameters, and variables, and the impact this has on the performance of the tool.

Naming Things

More than 20 years ago, David Thomas and Andrew Hunt wrote in The Pragmatic Programmer:

The beginning of wisdom is to call things by their proper name.

  — Confucius

 

What’s in a name? When we’re programming, the answer is “everything!”

 

We create names for applications, subsystems, modules, functions, variables — we’re constantly creating new things and bestowing names on them. And those names are very, very important, because they reveal a lot about your intent and belief.

 

We believe that things should be named according to the role they play in your code. This means that, whenever you create something, you need to pause and think “what is my motivation to create this?”

 

This is a powerful question, because it takes you out of the immediate problem-solving mindset and makes you look at the bigger picture. When you consider the role of a variable or function, you’re thinking about what is special about it, about what it can do, and what it interacts with. Often, we find ourselves realizing that what we were about to do made no sense, all because we couldn’t come up with an appropriate name.

Naming has always been a very hard and important problem in computer science, and this reality won’t change any time soon; if anything, it will become even more important in the new era of LLMs and Copilot-like systems and tools.

The premise we have been living with since roughly last Christmas is that software developers’ productivity will experience a major boost thanks to the new type of tooling that is becoming available, namely GitHub Copilot and its integration into VS Code. If the premise is true, and I have no indication at the time of this writing that it isn’t, then the next immediate question becomes: how can we best use those tools to get the most pleasant and effective productivity boost?

Today, I am investigating the aspect of naming.

Meaningful vs. Meaningless

For this investigation, I will implement exercise #10 of chapter 5.0.0 of The Art of Computer Programming:

10. [15] You are given a tape containing one million words of data. How do you determine how many distinct words are present on the tape?

The implementation will be in Python and will only require a handful of functions. This goes against the intent of the exercise, but I was lacking the imagination to find something else to code for this post.

For this experimentation, I created two empty and distinct workspaces. I loaded each of the workspaces in a different VS Code instance. The purpose here is to make sure that Copilot doesn’t get any hints about my intent from elsewhere in the workspace.

Then, I purposely didn’t write any comments or text of any kind other than pure, uncommented Python code. The rough structure of the implementation is:

  1. Use a book from the Gutenberg project as the source of tokens. In this case, we will use Marcel Proust’s translation of John Ruskin’s La Bible D’Amiens
  2. Tokenize the book in words
  3. Create a set of distinct words/tokens

Hopefully Meaningful Naming

For the first iteration, I tried to come up with hopefully more meaningful names for the functions and their parameters. The first step is to get a book from the Gutenberg project.

The first thing I did was to start typing the function name def get_project_guttenberg (notice the typo in Gutenberg).

The initial suggestion is not that helpful. It returns a string, but the function’s name is not that meaningful either. What does it mean? Am I looking for a project description for some kind of “Gutenberg project”?

Then I continued typing with def get_project_guttenberg_book. This time, Copilot started to guess that I wanted to read a text file from the file system. In reality, I intend to simply download the book directly from the web, not read it from the local file system. But we can see that it is heading in the right direction.

Then, I started to type the parameters of the function I was about to create: def get_project_guttenberg_book(url. When I specified that the function expects a URL as input, it “understood” (really, guessed at this point) that I want to get the book’s text from the web, so it suggested the following code:

def get_project_guttenberg_book(url: str) -> str:
    """Returns the text of a project guttenberg book"""
    return requests.get(url).text

This works, and the function’s design is acceptable since the intent is clear, even if we can fetch any web document using that function (and not just Project Gutenberg text books). We can always refine that function later if necessary.

But what if I had used a different parameter name? Let’s try this: def get_project_gutenberg_book(ebook_id:

Simply changing the parameter’s name to a more meaningful one creates a more specialized and purposeful function:

def get_project_gutenberg_book(ebook_id: str) -> str:
    """Returns the text of a project gutenberg book"""
    return requests.get(f"https://www.gutenberg.org/cache/epub/{ebook_id}/pg{ebook_id}.txt").text

The next step is to create the “data tape” from that source of text. By starting to type def data_tape_, we can see that Copilot grasps the general intent of what we are trying to do, even if it is not there yet.

If we tweak the function’s name a little bit with def get_data_tape_from_book(url, then we get a more contextualized suggestion from Copilot. It is way too specific and is probably hallucinating a little bit of the structure of a Gutenberg book from its training set.

We will keep that suggestion and shorten the split() call to:

def get_data_tape_from_book(url: str) -> list:
    """Returns the data tape from a project guttenberg book"""
    return get_project_guttenberg_book(url).split()

The last step is to get the set of distinct words (from which we can calculate the len()). By starting to type def distinct_words_from_tape, we get an adequate solution:

The tape is a list, and the function returns a set composed of distinct tokens. Simple and effective leverage of Python’s data structures.

def distinct_words_from_tape(tape: list) -> set:
    """Returns the distinct words from a data tape"""
    return set(tape)

We are done; the following gives us the number we are looking for:

len(distinct_words_from_tape(get_data_tape_from_book("https://www.gutenberg.org/cache/epub/62615/pg62615.txt")))

Less Meaningful Naming

Now, let’s see what happens when I use less meaningful names, when my brain becomes sloppier. Typing def get_data(d gives us:

Not really what we are looking for, but this is understandable since there is zero context. Let’s continue with our initial idea by writing: def get_data(d): return requests. Now it is guessing that the d parameter is some data structure from which it can find the URL for the request. And then it guesses that the request will read some JSON file that will need to be parsed. Most likely this is because when people use the word data, they refer to some kind of structured data, and the most widely exchanged semi-structured data format on the web is still JSON. This is just what Copilot learned from millions of Python projects.

If we continue typing def get_data(d): return requests.get(d), then it “thinks” that it will return some array where a URL will be accessible. At that point, it is just starting to hallucinate a solution.

I ended up simply writing that naive function without using any of the Copilot suggestions:

def get_data(d):
    return requests.get(d).text

Now that we have laid the ground for some context with the get_data() function, let’s see how it goes when writing the get_tape function. After typing def get_tape( I got:

It just got that from the code I produced in get_data(). If I continue typing to change the parameter, def get_tape(t), I get:

Not much more helpful. Let’s continue typing the body of the function: def get_tape(t): return get_data(t).spl. It will finally propose to split on \n. But in reality, this is unnecessary and too narrow, because when the first parameter of the split() function is None, the following happens:

When set to None (the default value), will split on any whitespace character (including \n \r \t \f and spaces) and will discard empty strings from the result.
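A quick illustration of that default behavior:

tokens = "one two\tthree\nfour   five".split()
print(tokens)  # ['one', 'two', 'three', 'four', 'five']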

The end result is that I didn’t use any suggestion to write get_tape():

def get_tape(t):
    return get_data(t).split()

Finally, we want to get the distinct words from the data tape. By starting to write def distinct, it almost proposes the right thing. However, there is no reason to convert the set to a list before returning it:

I ended up not using any suggestion for this function either:

def distinct(t):
    return set(t)

Conclusion

As we saw with those two examples, the proper naming of things is very important to get the most out of this new kind of tooling. I agree that the second example is extreme, but in my experience such names are not uncommon in code bases. I didn’t try to obfuscate every name; the names were just too general and a bit useless.

When David and Andrew wrote twenty years ago:

those names are very, very important, because they reveal a lot about your intent and belief

They considered those names very, very important so that your intent and belief could be properly communicated to whoever reads your code in the future (including you, a few months from now). Today, this assertion still stands true, but its scope is broader. Names are very, very important, because they also instruct assistant tools such as GitHub Copilot to more easily guess your intent and belief to help you write better code faster.

What I personally like about this new family of tools such as GitHub Copilot is how I think they will shape the software developers of tomorrow, how they will force them to be more careful about their writing, and in this case their naming. The better the writing, the more precise and unambiguous it is, the more power they will be able to harness from those LLMs.

I am starting to envision that the general productivity of software developers will experience an important boost in the coming five to ten years, but also (and more importantly to me) an overall increase in the quality of the code and systems they produce. All this because the tools have become a huge incentive for them to care about those mundane, non-code details such as writing plain human words.

Today, I feel that a lot of developers wonder if their job is at stake. In my next post, I will start to outline what I am currently feeling around those questions. I don’t think developers’ jobs are at stake, but the way they work will definitely have to change, and the way we train future generations of software developers will have to change as well.

Literate Programming at the dawn of LLMs

Since the beginning of the year, the industry’s main focus seems to revolve around “prompting.” We’ve seen the emergence of new job titles, new job descriptions, and even the introduction of “prompting wizards,” all of which are essentially part of branding and marketing strategies.

Prompting involves articulating a problem and providing clear instructions in the hope that the person or system reading it will produce the intended outcome. The recent shift lies in the recipient of these instructions: rather than a person taking action to solve the problem and follow the instructions, it’s now a thing (currently some form of AI model) that carries out the task.

What I find amusing, after 20 years of professional experience in software development and engineering management, is that we’re finally getting engineers to generate a substantial amount of text instead of solely focusing on writing code. That looks like quite a significant paradigm shift to me.

Prompting and Literate Programming

I recently had something of an epiphany while investigating the current state of Literate Programming: could Literate Programming not become a powerful software development paradigm with the advent of LLMs?

I mean, for 39 years, literate programmers have been essentially doing just this: “prompting” their software development. They have been describing their problems and outlining instructions before implementing the actual code, often in the format of a book or notebook. The only difference is that they were the ones doing 100% of the coding afterward (either themselves or with the help of an implementation development team).

Intuitively, it seems that this same format and these same skills are precisely what’s needed to best leverage LLMs in coding computer software. LLMs will undoubtedly become very effective tools, but they are just that: tools that need to be learned, experimented with, and mastered to extract the best results from them.

GitHub’s Copilot

In this blog post, I aim to explore how literate programming can influence and enhance the utilization of LLMs. The current leading LLM tool for software developers is undoubtedly GitHub’s Copilot, integrated into VS Code. It boasts three main features:

  1. Code completion
  2. Completions Panel (providing up to 10 distinct auto-completion suggestions)
  3. Chat (recently made available to the general public)

With all of these capabilities integrated into an IDE like VS Code, it forms a package that significantly accelerates the software development process.

The next question arises: will Copilot grasp, and potentially benefit from, the literate programming process in the suggestions it provides? This is what I’m aiming to explore – to observe how it reacts, what proves effective, and what may not.

To put it to the test, I’ve developed a straightforward command-line tool in Python designed to function as a basic calculator. The remainder of this post comprises a series of screenshots accompanied by my comments at each step.

literate-copilot

Before diving in, I still needed to create a new GitHub project, use nbdev_new to create a new nbdev project, and then configure it.

Before starting to develop the CLI tool, I wanted to see if GitHub Copilot was aware of its own capabilities:

It’s hard to discern from this interaction whether it’s generating content or not, but at the very least, it seems promising. Let’s see if we can further explore this level of contextual awareness.

The initial step I took was to compose the introduction for the tool, right here in this Jupyter notebook. It outlines the purpose of the tool and the extensive list of calculator operations we aim to implement. I obtained the imports from the prior interaction with Chat. I manually added typer as this is the library I intend to use for building the command-line utility.

Following that, I proceeded to discuss creating a Typer application and its functionalities, etc. In the subsequent code block, I deliberately refrained from writing anything, as I didn’t want Copilot to auto-generate code within this block. I was interested in evaluating if it had an understanding of the entire notebook’s context, not just within a specific code block. This is why I opened the Suggestions Panel to assess if it would suggest anything relevant given the current context.

What I received was particularly interesting, as the initial suggestion aligns perfectly with the next step. It overlooks the #| export nbdev instruction, but that’s perfectly acceptable, as it’s rather obscure.

Next, I began detailing the subsequent steps by creating a new Markdown cell. At this point, Copilot’s auto-completion capabilities come into play. This is particularly interesting, as it essentially anticipates what I was about to write, drawing from the extensive list of calculator commands I plan to implement. In this case, it starts with the first command on that list, which is addition. This suggests to me that it leverages the entire notebook as the context for its suggestions.

For context, here is the full list of operations we want to implement:

However, this was actually not the first command we wanted to implement. The first one we wanted to implement is the version command of the command-line tool, which displays the tool’s version to users when they ask for it.
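The generated code was shown as a screenshot in the original post. As a rough, hypothetical reconstruction, such a version command in Typer could look something like this (the version string and wording are my assumptions, not the actual Copilot suggestion):

import typer

app = typer.Typer()  # in the notebook, the application is defined once, earlier

@app.command()
def version():
    """Prints the version of the command-line tool."""
    print("literate-copilot version 0.0.1")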

Then the next step is to start implementing the long list of calculator operations, starting with addition:

Why was the quiet parameter suggested? To dig a bit further into its thought process, I decided to open the Completions Panel. Suggestion 3 sheds light on what it had in mind. However, for a basic calculator, this isn’t very useful since the outcome of adding two numbers is quite straightforward. I’ll go ahead and accept it.

Now, let’s compile this command-line application to ensure it functions as intended:

By blindly accepting the code proposed by Copilot, here is how the add command works:
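The accepted suggestion appeared as a screenshot in the original post. As a hedged reconstruction, an add command with a quiet flag might look roughly like this (parameter names and messages are assumptions on my part, not the exact Copilot output):

# Added to the same hypothetical Typer app sketched above.
@app.command()
def add(x: float, y: float, quiet: bool = False):
    """Adds two numbers and prints the result."""
    result = x + y
    if quiet:
        print(result)
    else:
        print(f"The sum of {x} and {y} is {result}")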

Let’s see if it works as intended:

Yes, it does. It’s not the most convenient method for adding two numbers; it’s a bit complex and verbose, but it will suffice for now.

Afterwards, I added the entire list of operators in the same manner, by appending code block after code block, and it successfully implemented each of them. There was a point around number 7 or 8 where it lost the order, but simply starting to type the right term got it back on track. For example, typing def si continues with defining the sin function accordingly. Here is the current list that has been implemented so far:

Adding Tests

Now that we have all these functions, I’d like to give Copilot a try at generating tests for each of them. To do this, I posed a very simple question to the newly released version of Copilot Chat while having the 00_main.ipynb file open:

I would like to add tests for each of those commands.

By “those commands”, I was referring to what was currently displayed in the Workspace on my right, hoping that it would contextualize the request within the Workspace. The result Chat provided me with is:

It is even aware that it is missing some from the list described in the introduction and continues to list them, starting at the right place (divide):

As you can see, it is fully aware of the context. It will produce one test per command, understanding that the commands print output to the terminal and that the functions do not return actual numbers. It will also attempt to use a CliRunner to execute the tests. While it doesn’t work out of the box, it’s certainly a step in the right direction.
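For reference, the kind of test it was aiming for looks roughly like the following sketch using Typer’s CliRunner, assuming the hypothetical add command shown earlier (the tests Copilot actually generated differed and, as noted, did not run out of the box):

from typer.testing import CliRunner

runner = CliRunner()

def test_add():
    # `app` is the Typer application defined in the notebook.
    result = runner.invoke(app, ["add", "2", "3"])
    assert result.exit_code == 0
    assert "5" in result.output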

Conclusion

This concludes the tests. It’s clear that Copilot is aware of a Workspace and contextualizes its suggestions accordingly. When working in a Jupyter notebook, it takes into account every code block.

This little experiment suggests to me that adopting a literate programming workflow and its principles can lead to better and more effective suggestions from LLMs like Copilot.

For thousands of years, humans have been expressing their thoughts in a sequential manner, from top to bottom. We’ve developed highly effective systems to organize these writings (you can explore the BIBO ontology for a glimpse into this). These systems have evolved and been refined up to the present day.

To me, this is the essence of Literate Programming. It’s about developing computer software in a more natural, thoughtful, and systematic human way.

Not many people in the industry share this perspective. However, what I’ve begun to explore in this blog post is how LLMs, along with integrated tools like GitHub’s Copilot, could potentially shift that perception, and how Literate Programming could emerge as one of the top programming frameworks for effectively utilizing tools like Copilot.

ReadNext 0.0.4: Local Embedding Model

I just released ReadNext version 0.0.4. The primary goal of this new version is to remove the dependency on the Cohere Embedding web service endpoint by using a local embedding model by default. To enable that, ReadNext is now integrated with Hugging Face and currently uses the BAAI/bge-base-en model.
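ReadNext’s actual integration code may differ, but as an illustration, using such a local embedding model from Hugging Face typically boils down to something like this (the sentence-transformers API shown here is an assumption about how the model can be loaded, not a quote from the ReadNext codebase):

from sentence_transformers import SentenceTransformer

# The model is downloaded from Hugging Face on first use and cached locally.
model = SentenceTransformer("BAAI/bge-base-en")

embeddings = model.encode(["Attention Is All You Need", "Latent Dirichlet Allocation"])
print(embeddings.shape)  # (2, 768): bge-base-en produces 768-dimensional vectors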

Local vs. Remote

This change removes the dependency on one external service, which makes ReadNext more stable. The processing time is a little longer with the local model, but it also depends on the capabilities of your local computer.

In terms of performance, the two systems are comparable. In my experience, about 80% of the propositions are the same, and the remaining 20% that differ yield no major difference in accuracy. However, I do like the BAAI/bge-base-en propositions a little better from what I have experienced so far.

You may want to experiment with both to see what works best for you. The only thing you have to do is to change the EMBEDDING_SYSTEM environment variable and to reload your terminal instance.

New Configurations

Two new configuration options have been added to this version:

  1. EMBEDDING_SYSTEM: This is the embedding system you want to use. One of: BAAI/bge-base-en (local) or cohere.
  2. MODELS_PATH: This is the local path where you want the model files to be saved on your local file system (e.g., /Users/me/.readnext/models/)

If you already have ReadNext installed on your computer, please make sure to add those two new environment variables to your environment.

New Commands

Two new commands have been added as well, to help you understand the current status of the ReadNext tool. Those two commands are:

  1. readnext version: this gives the version of ReadNext that you are currently using
  2. readnext config: this gives the configuration parameters, and their values, currently used to run that instance of ReadNext

Literate Programming

While at it, I decided to migrate ReadNext’s Python codebase to nbdev in order to continue its development using literate programming.

All the literate files (notebooks in this case), from which the code is tangled and the documentation is woven, are accessible in the nbs folder. The tangled codebase is available in the readnext folder. Finally, the woven documentation is available as GitHub Pages here.