Not all SAST tooling is created equal. A basic SAST scanning evaluation may center on surface-level concerns like: Can you detect vulnerabilities in the specific coding languages I use? Can you detect vulnerabilities for a given CWE or OWASP category? How long does the SAST scan take?
These questions are important to answer. But an evaluation of available Static Application Security Testing (SAST) strategies can go far, far deeper. Let’s start digging…
Before diving into advanced SAST methodologies and techniques, let’s first set our baseline: what is Static Application Security Testing (SAST)? SAST is a method used in Application Security (AppSec) that scans source code or built artifacts to identify vulnerabilities in your software. It can detect vulnerabilities such as SQL injection, cross-site scripting (XSS), insecure direct object references (IDOR), security misconfigurations, and more.
Certain properties of your source code determine the external security posture of your service, and SAST scans can surface them. For example: do you validate inputs? If you don’t – as in SQL injection attacks like the recent ‘ResumeLooters’ campaign – a threat actor can exploit your system. Other important questions to ask: is there a backdoor in my code that SAST scanning can identify? Am I running a vulnerable framework? An effective SAST scanning solution can detect all of this.
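As a simplified illustration (Python, with a generic DB-API cursor; the function names are hypothetical), this is the kind of input-handling pattern a SAST scanner flags, alongside the parameterized alternative it would recommend:

```python
# Hypothetical example of the pattern a SAST rule flags: untrusted input
# concatenated directly into a SQL query (SQL injection).
def find_user_vulnerable(cursor, username: str):
    query = "SELECT * FROM users WHERE name = '" + username + "'"  # flagged by SAST
    cursor.execute(query)
    return cursor.fetchall()

# The mitigated version uses a parameterized query, so user input is never
# interpreted as SQL.
def find_user_safe(cursor, username: str):
    cursor.execute("SELECT * FROM users WHERE name = %s", (username,))
    return cursor.fetchall()
```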
There are plenty of companies still paying for SAST scanning per line of code. This creates a perverse incentive: the more you use your SAST tool, the more you spend. I’d highly recommend finding an alternative (first, but probably not last, shameless plug: check out our very public and transparent pricing!).
Another major cost factor is where the scanner is deployed. If you run scanners on your own infrastructure, you may be on the hook to cover the cost of that compute.
Lastly, delivering software has a cost. Your developers are well paid, and their time is money. That means that slow scanners or poor mitigation workflows, which ultimately slow down development, both represent a transitive cost to your organization.
In service of a more secure and controlled secure development lifecycle, we’re seeing broad adoption of SAST as a best practice across compliance and industry standards. To name a few:
False positives are simply a reality with SAST scanning tools. The question is how to minimize false positives and, even more importantly, how to minimize your developers’ exposure to them.
Start by choosing a tool that actively reviews reported false positives and the SAST rules that produced them. At Arnica (plug #2!), we are constantly refreshing poorly performing rules and adding new high-performing rules to minimize false positives.
Equally important is how you manage false positives. Do you require additional effort from developers to claim that a finding is in fact a false positive? Do they need to log into an external tool to figure it out? Do you have a workflow that allows developers to suggest a finding as a false positive and have it triaged? Are devs empowered with rich ChatOps integrations to automate security vulnerability management?
When you run SAST scanners, you might want different SAST rule sets for different versions of the products you release – for example, when you ship new versions but some customers are still running a previous one.
When implementing a SAST tool, it is therefore important to accommodate the fact that a single product may have a number of versions in the field, each with its own guardrails and policies.
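As a minimal sketch of what that can look like in practice (branch names and rule-set identifiers are hypothetical), the scan configuration simply keys the rule set off the release line being scanned:

```python
# Hypothetical mapping of supported release branches to SAST rule sets.
# Older versions keep the guardrails they shipped with; the latest version
# adopts the strictest policy.
RULE_SETS_BY_RELEASE = {
    "release/1.x": "baseline-rules-v1",
    "release/2.x": "baseline-rules-v2",
    "main": "strict-rules-latest",
}

def rule_set_for_branch(branch: str) -> str:
    """Return the SAST rule set to apply for a given release branch."""
    return RULE_SETS_BY_RELEASE.get(branch, "strict-rules-latest")
```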
SAST scanning tools can also identify potential vulnerabilities within your pipelines themselves: if there is attack surface in your DevSecOps pipelines, you can use static code analysis to detect it.
In some cases you may want to identify industry-specific characteristics with custom SAST rules. For example, you might want to look for functions that initiate wire transfers and follow a certain naming convention, and know whenever one of those functions changes before the change reaches production. This empowers your AppSec team to improve incremental threat modeling simply by running SAST scans on the pull request.
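A custom rule of this kind is conceptually simple. As a rough sketch (the `wire_transfer_` naming convention is a hypothetical example, and a real rule would live in your SAST tool’s rule format rather than standalone Python), it boils down to flagging changes that touch payment-critical functions:

```python
import ast

# Hypothetical convention: any function whose name starts with "wire_transfer_"
# is payment-critical and should be surfaced for review whenever it changes
# in a pull request.
def find_payment_critical_functions(source_code: str) -> list[str]:
    tree = ast.parse(source_code)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and node.name.startswith("wire_transfer_")
    ]

sample = """
def wire_transfer_execute(account_id, amount):
    ...
"""
print(find_payment_critical_functions(sample))  # ['wire_transfer_execute']
```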
Within the public sector and defense industry, you may want to take a similar approach, using custom SAST rules for crypto libraries in code. FedRAMP, for example, dictates compliance measures pertaining to specific ciphers. Public sector or defense organizations may also want to fix all Medium and above severity vulnerabilities, whereas an unregulated enterprise SaaS app might only care about High and above.
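In the same spirit, here is a simplified sketch of a cipher-focused check (the deny list below is illustrative; a real rule set would be derived from your actual compliance baseline):

```python
import ast

# Illustrative deny list of calls a compliance baseline might disallow.
DISALLOWED_CALLS = {("hashlib", "md5"), ("hashlib", "sha1")}

def find_weak_crypto_calls(source_code: str) -> list[str]:
    """Flag calls such as hashlib.md5(...) that use disallowed algorithms."""
    findings = []
    for node in ast.walk(ast.parse(source_code)):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and isinstance(node.func.value, ast.Name)
            and (node.func.value.id, node.func.attr) in DISALLOWED_CALLS
        ):
            findings.append(f"line {node.lineno}: {node.func.value.id}.{node.func.attr}()")
    return findings
```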
When it comes to critical infrastructure, you might want to enforce a policy with zero tolerance for potential vulnerabilities or bugs. This is where it is important to provide rich context within SAST findings, complemented by high-quality rules that keep false positive rates low. This ensures that critical infrastructure runs with the integrity it is expected to have, without placing a major burden on your development teams.
By integrating SAST scanning tools into your DevSecOps pipelines, you can report on findings in a consistent manner. This is why CI/CD-based SAST tool deployments have been the standard for so long. However, this approach has serious drawbacks: with each new repository, you are reliant on the engineering team to add your DevSecOps pipeline to their source code and actually run it.
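For context, here is a minimal sketch of what such a pipeline step typically looks like (the `sast-scanner` CLI and its JSON output schema are placeholders, not a specific product):

```python
import json
import subprocess
import sys

# Hypothetical CI gate: run the SAST scanner over the repo and fail the build
# if any high-severity finding is reported.
def run_sast_gate() -> int:
    result = subprocess.run(
        ["sast-scanner", "scan", "--format", "json", "."],
        capture_output=True, text=True,
    )
    findings = json.loads(result.stdout or "[]")
    blocking = [f for f in findings if f.get("severity") in ("HIGH", "CRITICAL")]
    for finding in blocking:
        print(f"{finding['path']}:{finding['line']} {finding['rule_id']}")
    return 1 if blocking else 0  # a non-zero exit code blocks the merge

if __name__ == "__main__":
    sys.exit(run_sast_gate())
```

Every repository needs a step like this added, and kept up to date, by the team that owns it.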
An alternate approach is a pipelineless SAST deployment, which ensures 100% coverage from day one, and keeps it that way, without any work from your engineering team.
As we’ve covered above, custom SAST rules are critical to an advanced deployment of static code analysis. Any tool that supports advanced deployments should enable you to tweak the rules to get more accurate results.
As an example, let’s say you have a custom database query validator. Using custom SAST rules, you can take the rule that reported a SQL injection and say, “if this package is imported and this validator function is used, do not call it out as a SQL injection.” This is just one example of how custom rules let you fit your SAST deployment to your unique environment.
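A rough sketch of that exception logic (the `safe_sql.validate_query` validator is a hypothetical stand-in for your own package):

```python
import ast

# Hypothetical tuning of a SQL injection rule: if the query expression is
# wrapped by the team's vetted validator (safe_sql.validate_query), skip the finding.
TRUSTED_SANITIZERS = {("safe_sql", "validate_query")}

def is_sanitized(expr: ast.AST) -> bool:
    return (
        isinstance(expr, ast.Call)
        and isinstance(expr.func, ast.Attribute)
        and isinstance(expr.func.value, ast.Name)
        and (expr.func.value.id, expr.func.attr) in TRUSTED_SANITIZERS
    )

def should_report_sql_injection(query_expr: ast.AST) -> bool:
    """Called by the (hypothetical) rule on the expression passed to cursor.execute."""
    return not is_sanitized(query_expr)
```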
The frequency and approach to scanning can be a critical differentiator between tools. A full scan can take hours or even days to complete. What you want is faster full scans, but also an incremental scan that reports not every vulnerability in the repo (that’s what the full scan is for), but only the vulnerabilities in a particular code change. This approach has a huge impact on SAST scanning speed and, in turn, on maintaining development velocity.
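A rough sketch of the incremental approach (reusing the placeholder `sast-scanner` CLI from above, and assuming a `main` base branch and Python sources):

```python
import subprocess

# Hypothetical incremental scan: only the files changed in this pull request
# are scanned, so developers get feedback quickly instead of waiting for a
# full-repository scan.
def changed_files(base_branch: str = "main") -> list[str]:
    diff = subprocess.run(
        ["git", "diff", "--name-only", f"{base_branch}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [path for path in diff.stdout.splitlines() if path.endswith(".py")]

def incremental_scan() -> None:
    files = changed_files()
    if files:
        subprocess.run(["sast-scanner", "scan", "--format", "json", *files])
```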
Scanning for and identifying vulnerable code is foundational to static code analysis, obviously. But identifying the fix can be just as impactful. Using advanced SAST tools, you should be able to identify who fixed each security vulnerability.
At Arnica, we celebrate code fixes in production in order to give developers greater motivation to continue to tackle important risks in production. We’ve even worked with many of our customers to build security champion initiatives around these developer celebrations.
Every SAST scanning tool can report on potential security vulnerabilities. The real question is what a developer can do with the findings, and how easy you make it for them to deploy a fix for a SAST risk. AI is still not great for many things in the world of AppSec, but when it comes to SAST risk mitigation, we’ve found it to be a powerful facilitator.
Junior developers, especially, may not know what to search for in order to address a critical SAST finding. Arnica feeds the developer AI-generated SAST risk mitigation recommendations to make the mitigation exceptionally easy. The result has been boosted developer productivity (because developers aren’t sidetracked by unclear SAST findings) and increased SAST fix rates.
For every SAST finding that you have in your AppSec tool, you should be able to identify a broad range of contextual markers that contribute to an overall ability to automate compliance checks. Some of these might include:
All of this information should be readily available from the SAST scanning outputs, giving the complete story for every finding.
We discussed the importance of reducing false positives above. In order to accomplish this goal, your SAST tool should be able to provide a confidence level for the finding. Based on the confidence level, you can decide if and how you want to flag this issue for the developers.
For example, you might have a detected SQL injection vulnerability, but based on variation X or framework Y you don’t have high confidence that the vulnerability is exploitable. You should be able to reduce the confidence, which reduces the severity score and, ultimately, the volume of false positives surfaced to developers. In this case you might even revise an initially high severity risk down to a low severity risk.
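A minimal sketch of that kind of policy (the severity and confidence labels are illustrative):

```python
# Illustrative policy: downgrade the effective severity of a finding when the
# rule's confidence is low, so low-confidence findings don't interrupt developers.
SEVERITY_ORDER = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]

def effective_severity(reported_severity: str, confidence: str) -> str:
    index = SEVERITY_ORDER.index(reported_severity)
    if confidence == "LOW":
        index = max(index - 2, 0)   # e.g. HIGH -> LOW
    elif confidence == "MEDIUM":
        index = max(index - 1, 0)   # e.g. HIGH -> MEDIUM
    return SEVERITY_ORDER[index]

print(effective_severity("HIGH", "LOW"))  # LOW
```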
At the point in time when you do communicate a SAST risk to developers, it’s helpful to be able to provide the developer with an option to dismiss a risk and provide a reason for the dismissal. The dismissal should then kick off a configurable vulnerability management workflow and process.
For example, you might dictate within your workflow, “I want a security team member to review dismissed risks that are high or critical. But if the risk is medium severity, it can be sent to a peer developer to review.”
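A simplified sketch of that routing policy (the reviewer queues are placeholders):

```python
# Hypothetical dismissal-review routing: high/critical dismissals go to the
# security team, medium goes to a peer developer, anything lower is auto-accepted.
def route_dismissal(severity: str) -> str:
    if severity in ("HIGH", "CRITICAL"):
        return "security-team-review"
    if severity == "MEDIUM":
        return "peer-developer-review"
    return "auto-accept"
```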
Reviewing SAST findings is a manual process. Your AppSec team might want to kick off a campaign to tackle a given CWE or a specific type of attack. You might want your security team to do an initial review to ensure the findings are true positives. If you are getting noise, this gives the security team the opportunity to update the SAST rules so that only quality findings are sent to developers.
Once your rules are well defined, you’ll need developers to pick up the findings and push fixes in code. You should be able to select a very specific category and go deep within that category as you prioritize that risk type broadly across the organization. After building trust by tackling the very narrow use case repeatedly, you can pick up the next campaign, rinse and repeat.
Prioritization of SAST findings can be done based on risk severity, the business importance of the product the finding was in, or the category of risk being prioritized. Being clear within your AppSec team and, equally important, communicating the priorities to developers ensures that both teams understand which SAST findings demand attention.
An advanced SAST scanner should provide recommendations on how to fix each vulnerability. We walked through the opportunity to use AI to provide risk mitigation recommendations, but AI will not always translate what’s written in the code correctly; the right fix may be to convert a string to something else – an integer, perhaps. In these edge cases, you may need an expert in that particular system to do a deep dive into the finding and figure out how to fix it. The goal of leveraging effective AI is to keep these edge cases to a minimum, so they are the exception, and not the rule, in how you engage with developers.
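As a concrete illustration of that string-to-integer kind of fix (Python, with a generic DB-API cursor; the endpoint is hypothetical):

```python
# Illustrative fix: the endpoint expects a numeric ID, so convert the incoming
# string to an integer before it reaches the query. Payloads like "1 OR 1=1"
# fail the conversion instead of reaching the database.
def get_order(cursor, order_id_param: str):
    try:
        order_id = int(order_id_param)
    except ValueError:
        raise ValueError("order_id must be an integer")
    cursor.execute("SELECT * FROM orders WHERE id = %s", (order_id,))
    return cursor.fetchone()
```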
Retesting the code is a critical step to ensure that the finding identified by your SAST scanner is no longer present in the target location where you released the software. For example, a vulnerability fixed in a feature branch is not necessarily fixed in production. For this reason, Arnica determines the risk by where it exists in git.
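A minimal sketch of that kind of check (the production branch name is a placeholder): a finding is only considered resolved once the fix commit is reachable from the branch you actually deploy.

```python
import subprocess

# Illustrative check: is the fix commit an ancestor of the production branch?
# Exit code 0 from `git merge-base --is-ancestor` means yes.
def fix_deployed_to_production(fix_commit: str, production_branch: str = "origin/main") -> bool:
    result = subprocess.run(
        ["git", "merge-base", "--is-ancestor", fix_commit, production_branch],
        capture_output=True,
    )
    return result.returncode == 0
```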
Finally, as part of reviewing SAST findings, you will sometimes be exposed to new vulnerabilities you hadn’t thought of. In that case, you should have testing and a subsequent practice in place that allows you to find these new vulnerabilities, build a pattern around them, and then build a custom rule to address the new vulnerability quickly.
Implementing advanced Static Application Security Testing (SAST) strategies is becoming increasingly critical as threats become more advanced. By deeply understanding the software development lifecycle within your organization and the impact that SAST tools will have on it, by leveraging thoughtful implementation of advanced SAST techniques and workflows, and by effectively leveraging SAST findings to optimize for developer experience, you can dramatically improve security outcomes across your organization.
Embracing these strategies and best practices will ensure that your applications remain well secured, safeguarding your organization, products, and customers from potential threats. To see how Arnica’s pipelineless SAST solution can help you implement these strategies, book a SAST consultation with our team!