
Time for an Honest Talk About Third-Party Risk Management and Software Composition Analysis (SCA)

Mark Maney
Head of Customer Success
September 10, 2024
Mark Maney is an accomplished customer success leader with ties to both civil and computer engineering and has overseen the product lifecycle in product management, development, implementation, and consulting roles. Outside of work, Mark can be found golfing, tinkering, or spending time with his wife, daughter, and energetic Basenji mix.

TL;DR

AppSec teams are tasked with using Software Composition Analysis (SCA) to respond to a multitude of ever-urgent third-party vulnerabilities, each of which presents a security risk to the organization. Mature security processes and accurate risk prioritization frameworks are key to quickly responding to critical items. Some of these processes have stagnated over time, and in the case of third-party risk prioritization, it's time for an update.


The Problem with Third-Party Risk Management 

Third-party risks in software are not new, but with recent executive orders, high-profile attacks, and the subsequent media attention, the software industry is facing increased scrutiny around these inherited risks. The resulting industry-wide movement to secure the use of third-party software dependencies has inadvertently shed light on how ill-equipped many organizations are to respond to the risks those dependencies introduce. In rare cases the gap is created by neglected security practices or lackluster policy adherence, but in most cases it stems from an industry-wide problem of immature risk modeling and unclear risk prioritization frameworks. Teams understandably struggle to identify which risks need fixing, and even when priorities are clear, the ideal solution sometimes is not.

Third-party risk mitigation is not simple. It often requires thorough investigation before a change can be made. Some package updates introduce more severe risks. Others may completely break a build. Nothing about third-party vulnerability mitigation is “One size fits all.”

Antiquated risk severity frameworks leave companies guessing

Third-party vulnerabilities are an expected and often accepted outcome of today’s development process, and avoiding them entirely is impossible. The speed of development required to stay competitive in today’s market demands pre-built and open-source code, meaning no product exists without dependencies. As a result, vulnerabilities are common and plentiful. To help teams navigate the sea of risk, each publicly disclosed vulnerability is registered as an entry that can be tied back to the exploitable package. This registry is the Common Vulnerabilities and Exposures (CVE) database, and it is an integral part of third-party security.

It’s also a part of the problem.  

The current system's approach to assigning severity levels is well-intentioned, but given the sheer quantity of risks, the outcome is slow, subjective, and generic. It’s common for CVEs to go years without any update to their severity or attack profile, leading to misinterpretations, outdated data, and ultimately inadequate prioritization of threats. Many commonly used Software Composition Analysis (SCA) solutions rely on these CVEs to identify and classify risk, limiting their accuracy to that of the CVE framework. To address these challenges and enhance the efficacy of third-party risk severity ratings, a proactive approach focused on collaboration, context, and continuous refinement is imperative.

Some existing risks are invisible to risk severity frameworks   

Third-party packages don’t need a documented CVE to carry risk. CVEs identify the existence of a vulnerability in a specific package, but they rarely enumerate every package that embeds or repackages the vulnerable code. Untested and underutilized packages often have no associated CVE at all, simply because low download counts mean reduced scrutiny.

Similarly, packages can hold unknown risk when they have not been actively maintained for an extended period or have reached the end of support, also known as “end of life” packages. While unlisted in CVE databases, unmaintained or stale packages leave you exposed to unknown levels of risk. And while teams are bombarded with excessive CVE alerts, those trying to determine how to protect against aging versions find little assistance. SCA scanners that define risk solely through CVE databases should be avoided, as they ignore these important risk factors.
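One way to surface this class of risk is to check release activity directly against the package registry. Below is a minimal sketch, not any vendor's implementation, that flags npm dependencies with no new release within a configurable window; the two-year threshold and the example package names are assumptions chosen purely for illustration.

```python
# A minimal sketch, assuming npm packages and the public npm registry API:
# flag dependencies that have had no new release within a configurable window.
import json
import urllib.request
from datetime import datetime, timedelta, timezone

STALENESS_THRESHOLD = timedelta(days=730)  # assumption: ~2 years with no release = stale

def last_release(package_name: str) -> datetime:
    """Return the timestamp of the most recently published version of an npm package."""
    url = f"https://registry.npmjs.org/{package_name}"
    with urllib.request.urlopen(url) as response:
        metadata = json.load(response)
    # The registry's "time" map holds an ISO timestamp per version,
    # plus bookkeeping keys "created" and "modified".
    release_times = [
        datetime.fromisoformat(ts.replace("Z", "+00:00"))
        for version, ts in metadata["time"].items()
        if version not in ("created", "modified")
    ]
    return max(release_times)

def is_stale(package_name: str) -> bool:
    return datetime.now(timezone.utc) - last_release(package_name) > STALENESS_THRESHOLD

if __name__ == "__main__":
    for pkg in ["left-pad", "request"]:  # hypothetical dependency list
        status = "stale / possibly unmaintained" if is_stale(pkg) else "recently released"
        print(f"{pkg}: {status}")
```

A production version would also account for deprecation notices, archived repositories, and maintainer activity, but even a release-date check catches end-of-life packages that no CVE feed will ever flag.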

Addressing the gaps in risk severity scoring with SCA scanning 

Context plays a crucial role in accurately gauging the severity of vulnerabilities and is a required input for managing risk at organizations with multiple products or versions. Factors such as exploitability, impact, affected systems, and deployment method must be considered when determining severity levels. This information provides the necessary depth to assess the true risk posed by a vulnerability, and it is missing for teams that rely only on CVSS scores. Without this context, teams can waste precious time fixing vulnerabilities in internal tooling while deployed code remains at risk.
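To illustrate how that context changes prioritization, the sketch below weights a finding's base CVSS score by where the code actually runs and whether the vulnerable path is reachable. The context categories, weights, and reachability modifier are assumptions for the example, not a standard scoring model.

```python
# Illustrative heuristic only: adjust a finding's base CVSS score by deployment
# context and reachability so internet-facing production code outranks internal tooling.
# The categories, weights, and 0.5 reachability modifier are assumed values.
from dataclasses import dataclass

CONTEXT_WEIGHTS = {
    "production_internet_facing": 1.0,
    "production_internal": 0.7,
    "internal_tooling": 0.4,
    "build_or_test_only": 0.2,
}

@dataclass
class Finding:
    cve_id: str
    cvss_base: float          # 0.0 - 10.0 base score
    deployment_context: str   # one of CONTEXT_WEIGHTS
    reachable: bool           # is the vulnerable code path actually invoked?

def contextual_priority(finding: Finding) -> float:
    score = finding.cvss_base * CONTEXT_WEIGHTS[finding.deployment_context]
    if not finding.reachable:
        score *= 0.5  # unreachable code is lower priority, but not zero
    return round(score, 2)

# The same CVE ranks very differently depending on where the dependency lives.
findings = [
    Finding("CVE-2021-44228", 10.0, "internal_tooling", reachable=True),
    Finding("CVE-2021-44228", 10.0, "production_internet_facing", reachable=True),
]
for f in sorted(findings, key=contextual_priority, reverse=True):
    print(f.cve_id, f.deployment_context, contextual_priority(f))
```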

A risk’s severity rating should be dynamic and subject to continuous refinement. As new threats emerge and exploitation becomes reality, severity ratings should reflect those changes. This can be achieved by combining multiple severity signals and ensuring your tooling and processes are designed to produce outcomes, not just lists.

How to Improve Third-Party Risk Management:

Third-party risks are here to stay, and we need to change the way we approach them. Following the steps below will lead to a more robust risk prioritization strategy and faster vulnerability response.

  1. Smarter Vulnerability Severities: Increase prioritization accuracy by leveraging multiple risk frameworks. Include dynamic measures such as EPSS (Exploit Prediction Scoring System) for exploit likelihood and binary indicators such as CISA's KEV catalog for known active exploitation (see the first sketch after this list).
  2. Reduce Unnecessary and Low-Priority Alerts: Alert fatigue is not just a sales trope. Excessive risk alerts and critical security notifications buried in a list of code-quality feedback reduce attention to what is important and sometimes result in all alerts being ignored. Adding to the exhaustion, many scanning tools fail to identify which risks can and cannot be mitigated today. Ensure your solution includes mitigation analysis to limit manual investigation for developers and improve resolution outcomes.
  3. Integrate Critical Guidance into Developer Workflows: The dashboard age has made aggregation the top priority, making it easy to visualize enterprise-level data. It has also forced some processes into dashboards and reports, prioritizing risk visibility over security outcomes. Security alerts are best seen before they hit your dashboard: identify them and send them to developers while new features are under development to improve time-to-remediation and let developers pivot before features are fully baked.
  4. Create Processes for Ongoing and Existing Risk Mitigation: Third-party vulnerabilities become harder to mitigate as time goes on. Automate preventive policies to stop new third-party vulnerabilities from being introduced while they can still be resolved quickly, and simultaneously manage existing third-party risks once the bleeding has stopped.
  5. Maintain Proper Visibility into Current Dependencies: New risks often appear in old code. When a new CVE is published, you will want a quick and reliable way to determine whether your code is at risk and where the affected package(s) are being used as dependencies (see the SBOM sketch after this list).
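To make step 1 concrete, here is a minimal sketch that enriches a list of CVE IDs with an exploit-probability score and a known-exploited flag. It assumes the public FIRST EPSS API and CISA's Known Exploited Vulnerabilities (KEV) JSON feed, with the URLs and response shapes as published at the time of writing; the example CVE IDs are placeholders.

```python
# A minimal sketch, assuming the FIRST EPSS API and the CISA KEV JSON feed:
# enrich CVE IDs with an exploit-probability score and a known-exploited flag.
import json
import urllib.request

EPSS_API = "https://api.first.org/data/v1/epss?cve={cve}"
KEV_FEED = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def fetch_json(url: str) -> dict:
    with urllib.request.urlopen(url) as response:
        return json.load(response)

def epss_score(cve_id: str) -> float:
    """Probability (0-1) that the CVE will be exploited, per the EPSS model."""
    data = fetch_json(EPSS_API.format(cve=cve_id)).get("data", [])
    return float(data[0]["epss"]) if data else 0.0

def kev_cve_ids() -> set[str]:
    """CVE IDs with confirmed exploitation in the wild, per the KEV catalog."""
    feed = fetch_json(KEV_FEED)
    return {vuln["cveID"] for vuln in feed.get("vulnerabilities", [])}

if __name__ == "__main__":
    cves = ["CVE-2021-44228", "CVE-2023-4863"]  # placeholder CVE list
    kev = kev_cve_ids()
    for cve in cves:
        print(f"{cve}: EPSS={epss_score(cve):.3f}, actively_exploited={cve in kev}")
```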
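And for step 5, a sketch of the "where is this package used" lookup. It assumes you already generate one CycloneDX SBOM (JSON) per repository into an sboms/ directory; the directory layout, the *.cdx.json naming, and the log4j-core example are all assumptions for illustration.

```python
# Illustrative sketch: given one CycloneDX SBOM (JSON) per repository, report
# which repositories include an affected package when a new CVE lands.
import json
from pathlib import Path

def components(sbom_path: Path) -> dict[str, str]:
    """Return {package name: version} for every component listed in the SBOM."""
    sbom = json.loads(sbom_path.read_text())
    return {c["name"]: c.get("version", "unknown") for c in sbom.get("components", [])}

def find_usage(sbom_dir: Path, affected_package: str) -> list[tuple[str, str]]:
    hits = []
    for sbom_path in sbom_dir.glob("*.cdx.json"):  # assumption: one SBOM per repository
        versions = components(sbom_path)
        if affected_package in versions:
            hits.append((sbom_path.stem, versions[affected_package]))
    return hits

if __name__ == "__main__":
    for repo, version in find_usage(Path("sboms"), "log4j-core"):
        print(f"{repo} depends on log4j-core {version}")
```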

Closing Thoughts on SCA & Vulnerability Management

Modern security solutions such as Application Security Posture Management (ASPM) products have made it commonplace to aggregate and prioritize findings, including vulnerabilities detected in Software Composition Analysis (SCA) scans. But few tools understand the importance of deep context and timing in these findings, focusing instead on traditional, aging risk assessment methodologies. The goal is not just to sort a list of existing third-party risks but to provide actionable steps to reduce them while preventing new risks from entering with each code commit.

Forward-thinking security organizations will pair active strategies, such as real-time detection and pipelineless integrations, with context-rich alerts on CVEs and package reputation, full coverage of source code, and reporting that provides mitigation assistance. When properly executed, this lets teams avoid unforeseen third-party risk and accurately assess the risk in existing, aging dependencies and in ongoing development.

