Everything You Need To Know About Secure Code Review Software (FAQs)
What Is A Secure Code Review?
Code review, also known as peer code review, is the process of checking the quality of a piece of source code before it’s merged and shipped to customers. Secure code review specifically focuses on identifying security issues and vulnerabilities within the source code. It can help identify mistakes, bugs, logic problems, and other issues early on. This, in turn, can streamline the software development process, as these types of errors are more difficult and costly to remediate later in the software development lifecycle.
There are two main ways to carry out a secure code review: automatically or manually.
In an automated code review, a review tool automatically reviews the source code in line with a set of pre-defined rules that help it identify vulnerabilities and bugs. Automatic review tools (such as SAST and DAST tools—more on these later) often provide ready integrations with SCM/IDEs, and they’re usually compatible with multiple development environments.
In a manual secure code review, the programmer who wrote the code collaborates with other programmers to check their source code. Manual secure code reviews are usually carried out by one or more developers who are experts in the domain of that piece of code. Because of this, manual secure code reviews don’t just help save time and money down the line; they also facilitate better communication and teamwork amongst development teams, enable shared ownership of code, and foster the education of junior developers by teaching them new ways to troubleshoot problems and write more secure code. When carrying out a manual review, development teams often use secure code review software, such as the tools listed in this guide, to help them keep track of feedback and changes throughout the review process.
Automatic secure code reviews are often faster and cheaper than manual reviews, but they don’t take into account context such as general business logic or the developer’s intentions whilst creating the code, and they can be more prone to errors such as false positives. Because of this, we recommend using a combination of both methods to carry out an effective secure code review.
What Are The Different Methods For Secure Code Review?
There are two main methods that can be used for secure code review—manual and automatic—but these can be broken down a little further.
Manual Secure Code Review Methods
Over-The-Shoulder Code Reviews
Over-the-shoulder code reviews are considered one of the easiest and most intuitive methods for reviewing code. Once a developer has written their code, a qualified colleague joins them at their workstation (either in person or remotely through a shared screen) and reviews the code, providing suggestions for improvement while the author explains to them why they wrote it the way they did.
This is a very informal review method and may need to be paired with some type of tracking or documentation (such as via one of the tools in this guide) in order to log and verify any small changes that are made during the review and large changes that need to be implemented later.
Email Pass-Around
In an email pass-around review, a developer emails a diff (short for “difference”; the output of a utility that compares two versions of a file side-by-side and shows where lines of code have been added, removed, or changed) of their changes to the whole development team. The other members of the team can then suggest further changes if needed.
While this approach is flexible and usually easier than getting a whole team together in person for a review, an email chain with multiple different suggestions and opinions can quickly become complicated for the author to sort through.
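To make the pass-around concrete, here’s what a unified diff looks like when generated with Python’s standard-library `difflib` module. The file name and code snippet are made up for illustration; in practice the diff would come from a version control tool such as `git diff`.

```python
import difflib

# "Before" and "after" versions of a small (hypothetical) file, as lists of lines.
old = [
    "def find_user(name):\n",
    "    query = \"SELECT * FROM users WHERE name = '\" + name + \"'\"\n",
]
new = [
    "def find_user(name):\n",
    "    query = \"SELECT * FROM users WHERE name = %s\"  # parameterized\n",
]

# unified_diff yields the header lines plus context, removals (-), and additions (+).
diff_text = "".join(
    difflib.unified_diff(old, new, fromfile="auth.py (before)", tofile="auth.py (after)")
)
print(diff_text)
```

The reviewer reads the `-` lines as what was removed and the `+` lines as what replaced them, then replies with suggestions.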
Pair Programming
Pair programming is a continuous review process that involves two developers working together at one workstation; one actively codes, while the other provides real-time feedback.
Pair programming is very effective at inspecting new code, as it bakes the review into the programming process. It can also be useful for training junior developers. However, it’s time-consuming and prevents the reviewer from producing anything else. It’s also less objective than other secure code review methods as both the author and the reviewer are very close to their own work.
Tool-Assisted
Tool-assisted code reviews are not the same as automatic code reviews. With an automatic review, a SAST or DAST tool carries out the entire review. With a tool-assisted review, the review process itself is manual, but facilitated by a tool that:
- Gathers and organizes changed files and displays the differences, integrating with version control
- Facilitates communication between reviewers and developers, e.g., by allowing them to annotate the code
- Assesses the efficacy and effectiveness of the code review process by generating reports on key metrics
Tool-assisted reviews can help development teams to overcome the limitations of some of the other methods listed above, such as enabling reviews to take place asynchronously and remotely, automatically notifying the author when a review comes in, and ensuring that all comments, suggested changes, and implemented changes are tracked.
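As a sketch of what such a tool tracks under the hood, the snippet below models review comments tied to specific files and lines, with a simple “open comments” metric. The class and field names are hypothetical, not taken from any real review tool.

```python
from dataclasses import dataclass

@dataclass
class ReviewComment:
    """One reviewer annotation, anchored to a file and line."""
    file: str
    line: int
    author: str
    text: str
    resolved: bool = False

# Illustrative comments from two reviewers on the same change.
comments = [
    ReviewComment("auth.py", 42, "reviewer_a", "Use a parameterized query here."),
    ReviewComment("auth.py", 58, "reviewer_b", "Missing error handling.", resolved=True),
]

# A basic review metric: how many comments are still unaddressed.
unresolved = [c for c in comments if not c.resolved]
print(f"{len(unresolved)} of {len(comments)} comments still open")
```

Real tools layer notifications, diff views, and reporting on top of exactly this kind of record-keeping.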
Automatic Secure Code Review Methods
Static Application Security Testing (SAST)
Static Application Security Testing tools analyze applications at the code level to identify vulnerabilities that could be exploited, so that the developer can remediate those vulnerabilities before the app goes to market. SAST tools analyze every line of code in an application, cross-referencing them with a database of known vulnerabilities. Any sections of code that are found to contain known vulnerabilities are highlighted, and the solution notifies the developer so they can remediate the issue.
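To illustrate the idea of rule-based static analysis, here is a toy SAST-style check written with Python’s standard-library `ast` module. It flags calls to a couple of risky builtins; a real SAST tool applies thousands of far more sophisticated rules across many languages.

```python
import ast

# Illustrative rule set: builtins that can execute attacker-controlled input.
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list:
    """Return (line number, function name) for each call to a known-risky builtin."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = "user_input = input()\nresult = eval(user_input)\n"
print(find_risky_calls(sample))  # flags the eval call on line 2
```

Because the code is analyzed without being run, checks like this can be wired into a pre-commit hook or CI pipeline and report findings before the change is ever merged.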
Dynamic Application Security Testing (DAST)
Dynamic Application Security Testing tools help identify run-time vulnerabilities and misconfigurations within web applications while they’re running, by carrying out simulated attacks (or “penetration tests”) on them. These simulated attacks are carried out through the front end, which enables the DAST tool to analyze the app exactly as a threat actor would—from the outside, looking in.
When a DAST tool identifies a vulnerability, it notifies the development team immediately. It also creates a report detailing how an attacker might exploit that vulnerability, which enables the dev team to prioritize their remediation efforts. Some DAST tools also offer “attack replay” to guide dev teams through the discovery and potential exploitation of a vulnerability, making it even easier for them to locate and remediate issues.
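The core of this outside-in approach can be sketched as a probe that injects a marker payload into a request parameter and checks whether the response reflects it back unescaped (a hint of a possible cross-site scripting flaw). The `fetch` callback stands in for a real HTTP client, and the URL and parameter names are invented for the example; a real DAST tool would crawl the whole app and try many payloads.

```python
# Marker payload: harmless, but easy to spot if the app echoes it back verbatim.
PAYLOAD = "<dast-probe-123>"

def probe_reflection(fetch, url: str, param: str) -> bool:
    """Return True if the payload comes back unescaped in the response body."""
    body = fetch(f"{url}?{param}={PAYLOAD}")
    return PAYLOAD in body

# Simulated vulnerable endpoint that echoes the query string back without escaping.
def fake_fetch(full_url: str) -> str:
    query = full_url.split("?", 1)[1]
    return f"<html>You searched for: {query}</html>"

print(probe_reflection(fake_fetch, "https://app.example/search", "q"))  # True
```

Note that the probe needs no access to the source code at all, which is exactly what distinguishes DAST from SAST.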
What Are The Best Practices For Conducting A Secure Code Review?
When conducting a secure code review, we recommend that you follow these best practices:
- Put together a secure code review guide and checklist to ensure consistency between reviews and different reviewers. This will help when it comes to auditing, as well as ensuring that everyone’s using the same terminology to describe any issues that need to be addressed. The guide should also include a limit on code review sessions to help keep them productive, e.g., only reviewing a certain amount of code, or only reviewing for an hour at a time.
- Review your code regularly during development, rather than carrying out one big review just before it’s released. This will help save time and money further down the line.
- Use a combination of manual and automatic review methods to ensure that you’re conducting reviews as efficiently and effectively as possible.
- Focus your manual reviews on broader security issues and use automatic testing to identify specific issues. Look out for failures in authentication and authorization controls, potential exposures of sensitive data, inadequate error handling, inadequate session management, misconfigurations, insufficient logging, weak encryption, and injection flaws.
- Log all security issues into a reporting tool, and continuously track repetitive issues or insecure code patterns to help inform future reviews. You should also provide context for each change, including its purpose and any other relevant user stories, issues, or tickets.
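Tracking repetitive issues, as the last practice suggests, can be as simple as tallying logged findings by category. The snippet below uses Python’s `collections.Counter` on a made-up issue log; the categories and file names are purely illustrative.

```python
from collections import Counter

# Hypothetical log of issues found across several reviews: (category, file).
issues = [
    ("injection", "auth.py"),
    ("hardcoded-secret", "config.py"),
    ("injection", "search.py"),
    ("injection", "auth.py"),
]

# Tally findings by category to surface the most repetitive insecure pattern.
by_category = Counter(category for category, _ in issues)
print(by_category.most_common(1))  # the category that keeps recurring
```

A recurring category like this is a signal to update the review checklist, add an automated rule for it, or schedule targeted developer training.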