Continuous Integration at Determ

  • Author

    Andrija Dvorski

  • Published

Oct 18, 2022

  • Reading time

    4 min

Determ runs on hundreds of microservices, and when developing them it is crucial to keep everything running smoothly for our users. New features being developed must not impede users currently on the platform or degrade their experience by breaking existing behavior.

Another aspect we must keep in mind is code quality. Any code you write today is legacy tomorrow, so we have to ensure it is as clean and maintainable as possible. Everything merged into a main branch must be reviewed and receive feedback.

We have to enforce these rules without hurting the developer experience; ideally, we should improve it. At the same time, we also need to provide better stability and reliability to our clients.

Although it might seem like a daunting task, we handle it well. Read on to see how we use Continuous Integration at Determ to solve these problems.

Continuously Integrating

Today, every company has its own take on what continuous integration is supposed to do. For us, continuous integration means quickly iterating on changes to the codebase: we look to continuously integrate whatever we are working on with the main branch. Once we have a chunk of code written, we create a merge request, ask for feedback, and verify that the pipeline completes successfully.

By doing this, we can catch smelly code before it takes root. It’s easier to quickly give quality feedback when you have a hundred lines of code to review rather than thousands. It’s also easier to fix the requested changes.

Then, by quickly merging, we run the integration pipelines, verifying everything is in order and making our changes visible to everybody. This process gives us confidence that any problem with the changes will be caught before it causes real trouble.

Pipelines

For those unfamiliar with CI pipelines, you can think of them as scripts triggered by certain events. 

Using pipelines, we can ensure that standard checks run on every code change, which makes us more confident about merging into the main branch. GitHub makes it straightforward to block merges while the pipeline is failing; combine that with restricting direct pushes to the main branch, and you get a reliable and stable codebase.
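
For illustration, a minimal GitHub Actions workflow implementing such checks could look like the sketch below. The file name, job, and `make test` command are placeholders, not our actual setup:

```yaml
# .github/workflows/checks.yml -- a minimal sketch with placeholder commands
name: checks

on:
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      # Fetch the merge request's code
      - uses: actions/checkout@v3
      # Run the project's standard checks; substitute your own
      # build, lint, and test commands here
      - name: Run tests
        run: make test
```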

Given well-tested code and pipelines running those tests, you can be more confident that everything works. If the pipeline fails, the author of the merge request is notified and can take the necessary actions, and merging is blocked until the pipeline passes.

Another use we found for pipelines is streamlining the creation of new releases. When code is merged into the main branch, a release will be built, tagged, and pushed to our release registry after running tests one more time. This greatly simplifies project onboarding since you can focus on learning the code base and ignore the specifics of creating releases.
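
A simplified sketch of such a release workflow, again with hypothetical names (the registry URL, image name, and `make test` are stand-ins for our actual release steps):

```yaml
# .github/workflows/release.yml -- simplified, with hypothetical names
name: release

on:
  push:
    branches: [main]

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Run the test suite one more time before releasing
      - name: Run tests
        run: make test
      # Build the release, tag it with the commit SHA, and push it
      # to the release registry (placeholder URL and image name)
      - name: Build and push release
        run: |
          docker build -t registry.example.com/determ/service:${GITHUB_SHA} .
          docker push registry.example.com/determ/service:${GITHUB_SHA}
```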

Since our code is mainly hosted on GitHub, we use GitHub Actions for our pipelines, and we are still learning all the ways we can streamline these processes even further.

In the future, we would love to have code quality metrics generated automatically by the common pipeline, giving more insight into the state of the code being merged and providing developers with automatic actionable feedback on their code.

Reviews

A good peer review process and feedback on code changes can drastically improve code quality and maintainability. You must avoid merging unreviewed code at all costs. Otherwise, you risk introducing code smells that will require much effort to remove by the time they are noticed.

The most common issue with code reviews is that the review process takes a long time and ends in requests for either superficial or drastic changes, both of which cost time. In all of these cases, the root cause is the size of the change.

Given too many lines of code, a reviewer loses focus, and their ability to detect defects suffers; studies of code review have found that far more flaws are caught in smaller changes. And the more code a single merge request contains, the higher the chance that a requested change will require substantial refactoring.

The other issue, the review itself taking a long time, comes down to the large block of time the reviewer must set aside. The reviewer will probably need multiple sittings to get through everything, or might keep postponing the review because of the time investment required. Either way, the result is a stressful review experience for both parties.

To solve this, we put up merge requests for every completed chunk of code. This ensures that any drastic changes are requested early in development and makes the review process much quicker; in practice, the code is usually reviewed within an hour. The process also benefits from our integration pipelines running throughout the feature development cycle, as any integration issues are caught immediately and require only minor changes.

As with pipelines, we configure GitHub to require an approved review before changes can be merged. This ensures that all merged code has passed a sanity check from peers and is in a state fit for the main codebase.
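
This kind of protection can be set up in the repository settings or scripted against GitHub's branch-protection API. Below is a sketch of the payload, shown as YAML for readability (the API takes the equivalent JSON); the check name and review count are examples, not our exact configuration:

```yaml
# Sent (as JSON) to PUT /repos/{owner}/{repo}/branches/main/protection
required_status_checks:
  strict: true              # branch must be up to date before merging
  contexts: ["checks"]      # pipeline job(s) that must pass; example name
required_pull_request_reviews:
  required_approving_review_count: 1  # at least one approving review
  dismiss_stale_reviews: true         # new commits require re-review
enforce_admins: true        # no exceptions, not even for admins
restrictions: null          # no extra push restrictions
```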

All in all…

This was a short overview of how we use continuous integration practices to build Determ and keep its codebase clean and maintainable.

If you face similar problems, copying our process wholesale might not work for you, but you can start by moving as many of the manual tasks in your process as possible into scripts, and by using your git hosting service's settings to enforce required reviews and checks. Even if you don't use GitHub, every major git hosting platform offers similar features you can experiment with.

You should also start writing tests if you aren't already. A well-tested codebase makes these automated checks far more trustworthy, and it lets reviewers focus on the business logic instead of checking whether the code even compiles.

All in all, I hope this answered some of the questions you had before reading this post and maybe even motivated you to improve your own processes.
