This post is the second part in a series about what to look for when inheriting a legacy code base. This part focuses on the details of the code itself.
- Legacy application code review - Part 1: The big picture
- Legacy application code review - Part 2: The details
Legacy code bases get a bad rap, but they can be even more fun than working from a clean slate. Not only do you have to make decisions that stay extensible in the long term, but you have to do it within the delicate balancing act of not breaking what is already there.
Since there are more dependencies and considerations to take into account when working with an existing application, I recommend doing a full code review and audit of the application before making any plans for future functionality. Having as much knowledge as possible will help you make informed decisions about the best path forward.
In this post, I outline a variety of things I recommend investigating during this code review, what those findings tell you, and how to adjust your recommendations and estimates accordingly.
Automated static analysis
After gaining an understanding of the big picture and how all the pieces of an application fit together, we dig into the details of the application code. We begin by using a variety of tools for automated static analysis of the code base. Static analysis is done without running the code and focuses on the structure of the code. We look at an array of metrics to understand how easy the code will be to work with and how ready it is to be extended. None of these metrics on its own reveals problems or tells the entire story, but together they provide information you can use to make decisions.
Analysis-Tools.dev maintains a wonderful list of static analysis tools organized by language for your reference.
Lines of code
The simplest metric to measure is the size of the code base. Large code bases are not inherently bad, but the larger the code base, the longer it takes a new developer to find the relevant bit of code to change. Size is a helpful measure to consider when estimating so you can plan for the time it will take to understand the code base.
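Tools like cloc or tokei will give you a breakdown by language, but even a few lines of script can show where the bulk of the code lives. Here is a minimal sketch, assuming a Python project with its source under `src/` (adjust the root and glob pattern for your stack):

```python
# Minimal sketch: list the largest files in a code base by line count.
# Assumes a Python project with source under ./src; adjust root/pattern as needed.
from pathlib import Path

def line_counts(root: str = "src", pattern: str = "**/*.py") -> list[tuple[int, Path]]:
    counts = []
    for path in Path(root).glob(pattern):
        with path.open(encoding="utf-8", errors="ignore") as f:
            counts.append((sum(1 for _ in f), path))
    return sorted(counts, reverse=True)

if __name__ == "__main__":
    for count, path in line_counts()[:20]:  # the twenty largest files
        print(f"{count:6d}  {path}")
```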
Next, take a look at class and method length. Note the biggest classes and methods measured in lines of code. Similar to code base size, long classes and methods are not inherently bad (sometimes there is a good reason for a class to be larger than average), but long classes and methods take longer to understand and can be harder to refactor. Especially with methods, longer code typically hides more side effects that catch up to you when refactoring.
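Most linters can flag long methods for you, but as an illustration, here is a rough sketch that reports long functions in a single Python file using the standard library's `ast` module (the line-number attributes it relies on need Python 3.8+):

```python
# Rough sketch: report functions/methods over a length threshold in one Python file.
import ast
import sys
from pathlib import Path

def long_functions(path: str, threshold: int = 30):
    tree = ast.parse(Path(path).read_text(encoding="utf-8"))
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1  # needs Python 3.8+
            if length >= threshold:
                yield node.name, node.lineno, length

if __name__ == "__main__":
    for name, lineno, length in long_functions(sys.argv[1]):
        print(f"{name} (line {lineno}): {length} lines")
```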
If you find a lot of long classes and methods, you may want to adjust your estimates accordingly to give yourself some time at the start for a little refactoring. If refactoring isn't an option but you feel it's important, have a conversation with your stakeholders about the risk of unintended bugs introduced by side effects hidden in the code you're changing.
Duplication
You can also run checks to see how DRY the code is. DRY stands for Don't Repeat Yourself: DRY code is not duplicated in multiple places within the application. Not repeating code minimizes the number of places that need to be touched when you have to make a change, which limits the opportunities to introduce bugs.
Hopefully, you find little duplication. If you find a lot and it's in the parts of the code you'll be working in, increase your estimate a bit to give yourself time to consolidate the duplicated (or triplicated 😱) parts of the code.
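As a hypothetical illustration of what that consolidation usually looks like, imagine the same validation rule copied into two request handlers; extracting it leaves a single place to change and test:

```python
# Hypothetical "before": the same email check duplicated in two handlers.
def create_order(payload):
    if not payload.get("email") or "@" not in payload["email"]:
        raise ValueError("invalid email")
    ...  # create the order

def update_account(payload):
    if not payload.get("email") or "@" not in payload["email"]:
        raise ValueError("invalid email")
    ...  # update the account

# "After": the shared rule is extracted, so a change happens in one place.
def validate_email(payload):
    if not payload.get("email") or "@" not in payload["email"]:
        raise ValueError("invalid email")

def create_order(payload):
    validate_email(payload)
    ...  # create the order

def update_account(payload):
    validate_email(payload)
    ...  # update the account
```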
Complexity and churn
Then you can measure complexity and churn. Complexity often refers to cyclomatic complexity and ABC (assignments, branches, conditionals) complexity.
Cyclomatic complexity is a fancy way to quantify how many linearly independent paths there are through a piece of code's control flow. It sounds complicated, but it usually mirrors the way people intuitively think about complexity in their code (computer scientists love intimidating vocabulary 🙄).
ABC complexity measures the number of assignments, branches, and conditionals in code. The conditionals part of ABC complexity lines up pretty closely with cyclomatic complexity.
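As a small, hypothetical example, each decision point in a function adds one to its cyclomatic complexity, so this function scores a 4: one base path plus three conditionals.

```python
# Hypothetical example: cyclomatic complexity counts decision points.
# One base path + three `if` branches = a cyclomatic complexity of 4.
def shipping_rate(order):
    if order.total == 0:          # +1
        return 0
    if order.country == "US":     # +1
        rate = 5
    else:
        rate = 15
    if order.expedited:           # +1
        rate *= 2
    return rate
```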
And churn refers to how often a piece of code is updated.
On their own, none of those measures tells you much. Taken together, they point you to the best candidates for refactoring: chunks of code that are both high complexity and high churn.
If a bit of code is highly complex, you don't want to change it often, so isolate it in its own file that can be left alone as much as possible.
If the code changes often, you want to make it very simple to understand so the changes aren't hard to make. Both cases limit how likely you are to introduce accidental bugs into the code.
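Complexity usually comes from a dedicated tool (radon for Python or flog for Ruby, for example), but churn is easy to approximate straight from version control. Here is a small sketch, assuming the project lives in a git repository and you run it from the repository root:

```python
# Sketch: approximate churn by counting how many commits touched each file.
import subprocess
from collections import Counter

def churn(limit: int = 20) -> list[tuple[str, int]]:
    log = subprocess.run(
        ["git", "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter(line for line in log.splitlines() if line.strip())
    return counts.most_common(limit)

if __name__ == "__main__":
    for path, commits in churn():
        print(f"{commits:5d}  {path}")
```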
Test coverage
The last metric to analyze statically is test coverage. I don't recommend looking for 100% test coverage, but 75-85% coverage suggests that the test suite will protect you from introducing unintended regressions. Test coverage alone is not enough to fully protect the code, since the tests may not be robust, but it's a good quick measure of how confident you can be when changing the code base.
If you are working with a project that has minimal to no test coverage, you may want to recommend spending time upfront adding tests to document the current state of the application. It's helpful to educate your stakeholders that even robust test coverage doesn't prevent bugs; it does, however, keep unintended behavior changes from making it to production and documents expected behavior for developers.
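As one concrete way to get that number, here is a minimal sketch for a Python project using coverage.py's API while running a unittest suite, assuming application code in `src/` and tests in `tests/`; most teams would simply run the `coverage` command line instead:

```python
# Minimal sketch: measure line coverage of a test run with coverage.py
# (pip install coverage). Assumes application code in ./src and tests in ./tests.
import unittest
import coverage

cov = coverage.Coverage(source=["src"])
cov.start()

# Run the existing test suite while coverage is recording.
unittest.main(module=None, argv=["", "discover", "-s", "tests"], exit=False)

cov.stop()
cov.save()
cov.report(show_missing=True)  # per-file percentages plus uncovered line numbers
```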
Automated security audit
While a full security audit is often out of the scope of this kind of code review, there are some tools we can use to identify known security vulnerabilities, such as Brakeman for Ruby on Rails or Snyk. Using a tool like this can help you plan for remediating any high-priority vulnerabilities in the application and communicate immediate needs to your stakeholders.
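For a Python project, the same idea might look like the sketch below, using pip-audit to check installed dependencies against known vulnerability databases; the point is to script the scan so it can gate a CI pipeline rather than be run once and forgotten:

```python
# Sketch: run a dependency vulnerability scan from a script so it can gate CI.
# Assumes pip-audit is installed (pip install pip-audit); Rails projects would
# reach for brakeman or bundler-audit instead.
import subprocess
import sys

result = subprocess.run(["pip-audit"], capture_output=True, text=True)
print(result.stdout)

# pip-audit exits non-zero when known vulnerabilities are found.
if result.returncode != 0:
    print("Known vulnerabilities found; plan remediation before new feature work.")
    sys.exit(result.returncode)
```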
Human analysis
Static analysis is great for quickly understanding the structure of a code base, but in some cases nothing can substitute for the keen eye of a human engineer. You should examine the code to see how closely it follows the established conventions and idioms of its framework.
Conventions
A highly idiomatic code base is easier to onboard into because experience tells you where to look for files and functionality. Code that does not follow established conventions is slower to move around in because you are never sure where you'll find the relevant bit of code.
Dead code
You should also try to get an understanding of how much code in the application is unused. There are some dynamic analysis tools you could use for this, but most require deploying code to production, so I rely on manual analysis if I haven't started working on the project yet.
Unused code, or dead code, can affect the overall health of a code base. As you refactor and make changes to the application, you may find yourself spending time updating dead areas just to keep the test suite green. Removing dead code instead frees that time for the revenue-generating areas of the application.
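If you do want a rough automated signal, a sketch like the one below flags Python functions that are defined but never referenced anywhere else in the code base. Dedicated tools such as vulture do this far more thoroughly; this version will produce false positives and is only meant to illustrate the idea:

```python
# Rough sketch: flag functions that are defined but never referenced.
# Run as: python find_dead.py path/to/project  (expects a directory of .py files)
import ast
import sys
from pathlib import Path

defined, referenced = set(), set()
for path in Path(sys.argv[1]).rglob("*.py"):
    tree = ast.parse(path.read_text(encoding="utf-8"))
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            defined.add(node.name)
        elif isinstance(node, ast.Name):
            referenced.add(node.id)
        elif isinstance(node, ast.Attribute):
            referenced.add(node.attr)

for name in sorted(defined - referenced):
    print(f"possibly dead: {name}")
```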
Dependencies
Examine the major dependencies in the application to check that they are actively maintained. If you find dependencies that are not actively maintained, you may want to budget time for replacing them with modern, maintained alternatives. This is especially important if the upcoming feature work builds on those dependencies, for example, creating fancier maps with a very old version of Leaflet or adding graphs with an outdated charting library.
Updating dependencies can be tricky and should factor into your estimate. Keeping dependencies updated means the project can continue to thrive and new features can keep being added.
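A quick signal for "actively maintained" is the date of a dependency's last release. For a Python project with a simple, pinned requirements.txt, a sketch like this can pull that date from PyPI's public JSON API (other ecosystems have equivalents, such as `bundle outdated` for Ruby or the npm registry):

```python
# Sketch: print the last release date of each dependency as a maintenance signal.
# Assumes a requirements.txt with simple `package==version` lines.
import json
import urllib.request

def last_release_date(package: str) -> str:
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    uploads = [f["upload_time"] for files in data["releases"].values() for f in files]
    return max(uploads) if uploads else "unknown"

with open("requirements.txt") as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name = line.split("==")[0]
        print(f"{name}: last release {last_release_date(name)}")
```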
Documentation
The last bit that relies on human analysis is reading the project documentation. If a project is not well documented, treat that as a risk: there may be unknown features and functionality you need to be prepared for. This won't necessarily increase your estimate, but it is a helpful measure of how difficult it will be to find answers and context if you come across something odd in the application.
Dynamic analysis
Finally, try to run the application locally and use performance profiling tools to determine whether there are any risks to your stakeholders' load and scaling goals that should be addressed sooner rather than later. If you notice poor performance in an important area of the application, you may recommend addressing it early in the roadmap so the application can meet load demands once the project launches.
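What that looks like depends heavily on the stack, but as a small sketch, the Python standard library's cProfile can point at the expensive calls in a suspected hot path (`slow_endpoint` below is a hypothetical stand-in for whatever code path matters to your stakeholders):

```python
# Sketch: profile a suspected hot path locally with the standard library's cProfile.
import cProfile
import pstats

def slow_endpoint():
    # Stand-in for the real code path, e.g. rendering a report or running a search.
    total = 0
    for i in range(1_000_000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_endpoint()
profiler.disable()

stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(10)  # ten most expensive call paths
```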
Benefits of robust code review
When you have a holistic understanding of an application, you're able to provide informed recommendations about which technical areas your stakeholders should invest in and which they should deprecate. Technical debt is a fact of software life, but you can help them know where it's worth paying down and where it's a sunk cost.
Everything that you look at during a code review is in service of making sound recommendations driven by your stakeholders’ business goals while also providing more accurate estimates of the amount of effort it will take to deliver on those goals. It’s important not to view this review as a time to be judgmental but as a learning opportunity and a chance to prepare for a successful project.
Existing application code review checklist
- Does it run?
- Language/framework version support
- Infrastructure version support - database, storage service, cache, message broker, etc.
- Configured environments
- CI/CD pipeline
- Data model
- Static analysis
- Code base size
- Class and method size
- Cyclomatic complexity
- Churn
- Test coverage
- Human analysis
- Security audit
- Code conventions and idioms
- Dead code
- Dependencies
- Documentation
- Dynamic analysis
- Performance profiling