By Rahul Raghavan, Co-Founder and Chief Evangelist, we45
We are at a point today where the maturity of our AppSec testing capabilities is at an all-time high. These days, product engineering personas look to identify differences between “pen testing” and “pen-testing”, and rightfully so. While the actual differences in this example might be debatable, it’s important for us to take a closer look at the motivation behind the quest for such specificity.
Around 2010, just 11 short years ago, VAPT (Vulnerability Assessment and Penetration Testing) was the preferred term for anything to do with a technical assessment, whether application security, a network audit, or a sweep scan. Technology groups have come a long way since then, thanks to the need for, and the adoption of, a focused approach to assessing each information asset.
In the past few years, security assessments have progressed mainly due to changing objectives and demands of product engineering at varying stages of maturity. I’d like to talk about some of the most significant progressions among those and explore them in some detail.
The 3-Step AppSec Assessment Plan
Application security assessments are no longer just a matter of tools and platforms; they are about depth and scale. With the changing landscape of application architecture toward microservices, SPAs, and serverless deployments, we see that there’s only so much that security tools can bring to the table by themselves. Depth (of assessments) now has more to do with tailor-made test scenarios that are often beyond the capabilities of tools.
However, this isn’t to say that security platforms offer nothing of value. An effective application security assessment lies in compartmentalizing test scenarios into those that are tool-dependent (and hence automatable at some point in time) and those that depend mostly on the skills of a pen tester (typically logic flaws or configuration-driven flaws).
If organizations want to derive maximum value from assessment services, they need to engage in a 3-step assessment plan with their vendor. Here’s how it goes:
1. A point-in-time assessment performed on the alpha release or an MVP release of the application.
2. An assessment performed ahead of a compliance audit or certification such as PCI-DSS, HIPAA, or SOC 1/SOC 2.
3. A pilot program of sorts that helps the organization ascertain the right fit between its engineering/security processes and the vendor’s.
As a starting point, one-off penetration tests are a great way for an organization (the customer) to understand both the skills of the vendor and also introspect on the resources—tech and skill—required internally to remediate vulnerabilities effectively.
“The merit of a sound security assessment depends as much on the remediation advisory as on the vulnerability itself. To this extent, have more than one remediation strategy defined in your reports, for not every standard remediation will work contextually with an application.”
Such assessments usually start with the vendor’s engineers drawing up detailed abuser-driven threat models that in turn map to associated test cases. This is a great way to establish transparency with the customer in terms of the coverage and context of the penetration test.
The threat modeling exercise also paves the way for the vendor to understand “blind spots” in the application’s workflows. The test cases are then categorized into the tool-dependent and tester-dependent buckets described above, minimizing the dependency on any one person for subsequent assessment iterations.
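To make the categorization concrete, here is a minimal sketch, with entirely hypothetical abuser stories and test cases, of how threat-model-derived test cases might be tagged as tool-automatable versus tester-dependent so that later iterations know which checks can run unattended:

```python
# Hypothetical sketch: categorizing threat-model-derived test cases
# into tool-automatable vs. tester-dependent buckets. The abuser
# stories and cases below are illustrative assumptions, not output
# of any specific threat modeling tool.
from dataclasses import dataclass

@dataclass
class TestCase:
    abuser_story: str   # the threat-model entry this case maps to
    description: str
    automatable: bool   # True -> can be scripted against a tool

CATALOG = [
    TestCase("As an attacker, I read other users' invoices",
             "Tamper with invoice IDs to probe for IDOR", False),
    TestCase("As an attacker, I inject script into the search box",
             "Submit canned XSS payloads to every search field", True),
    TestCase("As an attacker, I brute-force the login endpoint",
             "Replay credential lists against the login form", True),
]

def split_catalog(catalog):
    """Partition test cases into automatable and manual buckets."""
    automatable = [t for t in catalog if t.automatable]
    manual = [t for t in catalog if not t.automatable]
    return automatable, manual

automatable, manual = split_catalog(CATALOG)
print(f"{len(automatable)} automatable, {len(manual)} manual")
```

Once split this way, the automatable bucket becomes the seed for the custom security automation described next, while the manual bucket defines the tester's scope for each iteration.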
Custom Security Automation (CSA)
An extension of the standard penetration test, custom security automation focuses on the single most important aspect of penetration testing: effective remediation.
Development teams often do not have the same tools or skills that penetration testers do, which often results in difficulty reproducing vulnerabilities and subsequently remediating them. Ineffective remediation also results in issues resurfacing or regressing along release schedules. CSA tackles this problem with a two-pronged approach: automation and training.
Product teams that are used to the drill of frequent assessment iterations each year are looking to scale this activity, and scale translates to automation. They are often trying to achieve one or more of the following:
- Improve developer awareness and turnaround time for remediation.
- Reduce the recurrence of similar or common vulnerabilities regressing across product release cycles.
- Reduce back-and-forth between development and security teams during the validation phase.
However, what often goes unnoticed is the availability of reusable technology components that can aid this objective. One such component is the scripting and automation technology used by Quality Assurance (QA) teams.
With CSA, security engineers can use components such as Selenium and Cucumber to reproduce both pure logic and tool-enabled vulnerabilities. These scripts can then be delivered along with the assessment reports, allowing customers to use them to locate and understand each vulnerability.
“Scripts can be used by teams as part of their existing QA runs as regression scripts. So not only do they now have a way to help development teams find and fix the problem better, but they can also truly verify whether a bug is fixed.”
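As a sketch of the idea, the check below replays a previously reported reflected-XSS finding as a regression test. In a real delivery this logic would sit inside an existing Selenium or Cucumber suite and fetch the page live; here the response body is passed in directly so the check stays tool-agnostic, and the payload and page markup are illustrative assumptions:

```python
# Hypothetical sketch: a QA-style regression check that a previously
# reported reflected-XSS finding stays fixed. The payload and page
# markup below are assumptions for illustration; in practice the
# response body would come from a Selenium-driven browser session.
import html

PAYLOAD = '<script>alert("regress")</script>'

def payload_reflected_unescaped(response_body: str,
                                payload: str = PAYLOAD) -> bool:
    """Return True if the raw payload appears in the response body
    without HTML-escaping, i.e. the old vulnerability has regressed."""
    return payload in response_body

# A fixed page echoes the input escaped; a vulnerable one echoes it raw.
fixed_page = f"<p>You searched for: {html.escape(PAYLOAD)}</p>"
vulnerable_page = f"<p>You searched for: {PAYLOAD}</p>"

assert not payload_reflected_unescaped(fixed_page)   # fix still holds
assert payload_reflected_unescaped(vulnerable_page)  # regression caught
```

Because the assertion is expressed in the same terms a QA suite already uses (drive the workflow, inspect the response), development teams can run it on every release without needing a pen tester in the loop.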
CSA can also bundle training with the assessment contract, thereby setting a very strong context for the assessment itself. Developer security training can be conducted either before or after the assessment, with interesting outcomes:
- Pre-assessment training is usually agnostic of the application or context but helps drive strong fundamentals of secure-code principles home to developers. This is extremely useful if teams include newer members who don’t really know what to expect from a security report. Such training has been shown to improve remediation turnaround time, albeit by a small margin.
- Post-assessment training brings in the much-needed context of the application in scope. The examples are drawn directly from the actual vulnerabilities found (when applicable). This not only increases how well the training sticks with development teams, but also markedly increases how proactively security is made part of subsequent code commits.
Application Security as Code (ASaC)
Scaling application security for mature teams and organizations usually comes down to two distinct problem statements:
- Fewer applications that have complex workflows and numerous release cycles (typical of B2C platforms, retail or internet applications).
- Enterprise applications that don’t necessarily change much, but the sheer number to be assessed even once a year is mind-boggling.
In both these scenarios, there is enormous pressure on security engineering teams to meet the following objectives with a fixed number of skilled resources across increasing assessment iterations:
- Ascertain effective coverage of application threat scenarios by segregating security responsibilities and delegating part of the accountability to development teams.
- Establish effective integration points between security and engineering technology components (tools, scanners, bug trackers, dashboards, performance indicators).
- Achieve security engineering objectives across teams through a hub-and-spoke model, with SPOCs (single points of contact) in each team.
- Develop full stack assessment capabilities that scale with DevOps operationally.
At its most granular level, automation basically boils down to “code.” That’s the message we need to send to security engineering teams: the drive to “get code” has to start at home!
AppSec as Code (ASaC) is a mixture of service and solution enhancements that look at scaling primary penetration testing tasks using code and the power of integrations.
For example, even a plain and simple penetration test can be made to scale exponentially by breaking down the individual phases of reconnaissance, discovery, mapping, and exploitation into their granular test cases.
This is where the 3-step AppSec assessment approach comes in handy once again. By ascertaining which tools best fit a specific test case (or group of test cases), combining them with outcome-based assertions (such as finding an issue directly from the tool, or by parsing its results for a value), and stitching them together, you can increase the reusability of application-agnostic test cases.
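The outcome-based assertion half of that idea can be sketched as follows. The report structure below mimics a ZAP-style JSON export, but the field names and alert entries are assumptions for illustration; the point is that the tool's raw output is parsed into a single pass/fail outcome that a pipeline can act on:

```python
# Hypothetical sketch: turning a scanner's report into an
# outcome-based assertion. The report shape below imitates a
# ZAP-style JSON export; field names and alerts are assumptions.
import json

SAMPLE_REPORT = json.dumps({
    "alerts": [
        {"name": "X-Content-Type-Options header missing", "risk": "Low"},
        {"name": "Cookie without HttpOnly flag", "risk": "Medium"},
    ]
})

def high_risk_alerts(report_json: str):
    """Parse the report and return the names of High-risk alerts."""
    report = json.loads(report_json)
    return [a["name"] for a in report["alerts"] if a["risk"] == "High"]

def gate(report_json: str) -> bool:
    """Outcome-based assertion: True means the build may proceed."""
    return not high_risk_alerts(report_json)

print("gate passed" if gate(SAMPLE_REPORT) else "gate failed")
```

Stitched after a scanner run in CI, a gate like this is one reusable "test case to tool" unit: swap the parser and the risk threshold, and the same pattern serves a different tool or phase.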
ASaC cuts across the following critical areas:
- Threat Modeling / Abuser Case Automation
- Test Case to Tool Automation
- Parameterized scanner automation and exploit scripting
- Vulnerability Correlation & Management
- Vulnerability to Threat Modeling mapping
“ASaC is not meant as a replacement for manual assessment strategies; it aims to augment them. The true value comes from the force-multiplier effect between tools and manual testing.”
It is imperative that security testing vendors realize and acknowledge the changing dimensions of product teams in terms of awareness, skill, and, subsequently, the maturity of their application security quotient. While for some a penetration test may simply be a bottleneck standing between a team and its customer conversations, it is also a critical cost center within engineering that needs a much-deserved overhaul.
About Rahul Raghavan
Rahul Raghavan is Co-Founder and Chief Evangelist at we45. The sheer pervasiveness of applications, their associated software engineering processes, and therefore the variance of the application security quotient across software teams is what drives Rahul’s primary role as an AppSec Advocate at we45. Having worked on both the building and breaking sides of product engineering, Rahul appreciates both the constraints and the opportunities of imbibing security within the software lifecycle. This understanding created a natural segue for we45’s custom security solution engineering and enhanced AppSec service delivery models for its global customers. As an active DevSecOps Marketer, Rahul works closely with the offices of CTOs and CIOs in setting up cross-functional skill-building and collaboration models between engineering, QA, and security teams to build and manage software security maturity frameworks. Rahul is a Certified Information Systems Auditor (CISA) and is a regular speaker at global conferences, seminars, and meetup groups on the following topic areas:
- Application Security Automation and DevSecOps
- AppSec Tooling
- Threat Modeling in Agile Engineering
- QA: Security Mapping
- Automation ROI Modelling
- AWS Security
- Secure Software Maturity Models