When you consider security within Digital Services, you could say that three main facets exist:
- Security Testing, Pentesting or IT Health Checks (ITHC) involve using tooling to try to find vulnerabilities in a system, and then seeing whether those vulnerabilities can be further exploited. Usually you’ll see one or more of these lined up towards the end of a project, often crammed in before go-live.
A lot of faith is placed in pentests or ITHCs, but they only assess security at a single point in time. It’s like taking your car for its roadworthiness test – sure, it gets checked, but it could develop a fault the moment it leaves the Test Centre. Pentests and ITHCs are fragile when change is involved – they take time to scope and execute in a world where digital services change frequently.
- Security Accreditation looks at assessing what security controls are in place, what risks exist within the service and what mitigations are planned to address those risks. You’ll end up with an accreditation pack that gets taken to a Senior Information Risk Owner (SIRO) for sign-off, i.e. the person who owns and accepts the risks.
Accreditation places a heavy focus on documentation, where design artefacts and risk analysis documents come together to provide a view of the system for some unconnected person to make an assessment on. Again, keeping documentation up to date is onerous when change is involved.
- Security Engineering is often referred to as RuggedOps, SecOps or DevSecOps, and it involves building security thinking into the agile development process, using tooling at development time or within the build pipeline. I very much like the term ‘Continuous Security’ as it fits well with Continuous Integration, Continuous Deployment and Continuous Delivery. We should all use this term.
Security engineering tries to shift security thinking into the development process as early as possible. A vulnerability caught early is a lot cheaper to fix in the long run. With Security Engineering, we can use approaches like abuser stories or security stories to try to understand how the system could be compromised. This gets the team thinking like an adversary, which is a good thing. Security gets checked continuously, from story creation through to deployment via a build pipeline. This is a much better approach than a series of scheduled ITHCs.
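As a flavour of what a continuous check in the build pipeline might look like, here is a minimal sketch of a security gate that fails the build when a scan reports serious findings. The report shape and severity names are assumptions for illustration, not the output of any particular tool:

```python
# Minimal pipeline security-gate sketch. The findings format and severity
# names below are illustrative assumptions, not a specific scanner's output.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings, fail_at="high"):
    """Return findings at or above the threshold; an empty list means the gate passes."""
    threshold = SEVERITY_RANK[fail_at]
    return [f for f in findings if SEVERITY_RANK[f["severity"]] >= threshold]

findings = [
    {"id": "DEP-101", "severity": "medium"},
    {"id": "DEP-102", "severity": "critical"},
]

blockers = gate(findings)
if blockers:
    # In a real pipeline you would fail the job here rather than print.
    print(f"Build blocked: {len(blockers)} finding(s) at or above 'high'")
```

Because the gate runs on every build, the check travels with the code as it changes – the opposite of a point-in-time ITHC.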
A lot of organisations only cover points one and two. Some do all of them. I’d like to tell you about an approach that we use within Kainos that I think works quite well. Let’s call it a Unified approach to Security.
A Digital Service is composed of more than just application code. Infrastructure is involved, often cloud-based, with all the bells and whistles that come with each cloud provider. Then of course you have the ‘stuff’ that needs to be done that sits outside the technology – people vetting, joining and leaving processes, secure service administration, governance, risk and compliance etc.
How do we consider all of this in a repeatable, standardised way? What do development teams need to care about? Or Ops? Or the Service owner? How can customer security teams be sure that security controls really have been implemented properly? How can we unify our approach?
This is where a simple set of standards comes into play, namely the OWASP Application Security Verification Standard (ASVS) and the NCSC’s 14 Cloud Security Principles. By combining these standards into an approach, we can start to define some structure that considers the Application, the Platform and the overall Service.
ASVS defines three levels of verification, as follows:
- ASVS Level 1 is for all software.
- ASVS Level 2 is for applications that contain sensitive data, which requires protection.
- ASVS Level 3 is for the most critical applications that perform high value transactions, contain sensitive medical data, or any application that requires the highest level of trust.
It depends on what you’re building, but Level 1 ASVS is pretty much the 101-type stuff you should be building into your app.
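The three level definitions above can be boiled down into a simple decision rule. This is a deliberately simplified sketch – picking a level for real needs a proper risk assessment, and the boolean inputs here are illustrative assumptions:

```python
def asvs_level(handles_sensitive_data: bool, high_trust: bool) -> int:
    """Simplified sketch of ASVS level selection based on the three
    level definitions; a real choice needs a proper risk assessment."""
    if high_trust:
        return 3  # most critical apps: high-value transactions, medical data
    if handles_sensitive_data:
        return 2  # sensitive data requiring protection
    return 1      # baseline for all software

# A plain citizen-facing info site vs. a health records service:
print(asvs_level(False, False))  # baseline Level 1
print(asvs_level(True, True))    # Level 3
```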
The diagram above shows three separate software projects. If we apply OWASP ASVS as a means of verifying security against these apps, we can see that there are 20 areas to consider, ranging from Architecture to Mobile to API. Some of these may not be applicable to your project, and that’s fine. In some cases, if the tech stack is the same across projects, you can reuse your approach to aligning with ASVS requirements. More on this later.
If you take this a step further and implement the OWASP ASVS for your software project in something like Confluence and JIRA, you can start to clearly visualise how you comply with it. Additionally, it moves you away from spreadsheets being sent around your organisation that no one looks at.
Say we implement the OWASP ASVS section V4 Access Control as a Confluence page, you’ll get something like the following:
This approach has a number of clear benefits to teams:
- Standard Guidance – Development teams have a guide to the areas that need to be considered when building secure software. You aren’t reliant on having a security savvy person within the team – anyone can read up on the standard and get an idea on what needs to be thought about. Knowledge is distributed.
- Visibility – Anyone, including customer security teams can come along and see what security stories have been implemented, and how they have been implemented. You link from the confluence page straight out to the implementation story or design document.
This is an important point, as a verification story could be covered manually by a tester or by an automated test in the build pipeline. It could be a design document explaining how a software library or framework covers the implementation, i.e. component X is used to sanitise user input, or errors are handled in a consistent way. The goal of course is to automate as much as possible, though this isn’t always achievable on day one of a project.
- Granularity – Security stories can be mixed into the sprint backlog. We all need to have a healthy balance between functional and non-functional stories per sprint. No excuses on this one as ASVS has been composed really well into overarching themes with sub-stories.
- Consistency and Reuse – In the example, we can see how three projects apply OWASP ASVS. Anyone moving from project to project can understand what’s been done. Verification approaches can be shared across projects (assuming a similar tech stack is used). Assets and scripts can be built up that can be reused for future projects. Pure Win.
- Security Documentation evolves – When building out an accreditation pack, security stories in Jira (or whatever tool you use) can be tagged as such. You can then report on implementation coverage pretty quickly.
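Reporting on that tagged coverage is straightforward once the stories carry labels. The sketch below works on a hypothetical export of security-tagged stories – the label and status names are assumptions, and in practice you would pull this from the Jira REST API rather than a hard-coded list:

```python
# Hypothetical export of security-tagged stories; label and status
# names are illustrative assumptions, not a real Jira schema.
stories = [
    {"key": "SEC-1", "labels": ["asvs-v4"], "status": "Done"},
    {"key": "SEC-2", "labels": ["asvs-v4"], "status": "In Progress"},
    {"key": "SEC-3", "labels": ["asvs-v2"], "status": "Done"},
]

def coverage(stories):
    """Fraction of tagged security stories that are implemented."""
    done = sum(1 for s in stories if s["status"] == "Done")
    return done / len(stories)

print(f"Implementation coverage: {coverage(stories):.0%}")  # → 67%
```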
As mentioned earlier, we don’t build apps to sit on a developer’s machine. To be of any value to an end user, the apps we build need to be available for use, which means they need to be deployed to a platform somewhere.
Given that there are SO MANY security features within a cloud platform, which should you use? Well, it depends on the threats you expect to face. Within the ASVS Architecture section, there’s a specific story that asks the following:
Verify that a threat model for the target application has been produced and covers off risks associated with Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of privilege (STRIDE).
This particular story is a Level 2 ASVS story, but I think it should be a standard consideration for any Digital Service, and therefore included in the Level 1 category. With the knowledge of a threat model, you can then start to identify what aspects of the platform security features you really need.
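A threat model doesn’t have to be a heavyweight document; even a simple structured record of threats against the STRIDE categories, each with its planned mitigations, is enough to drive platform decisions. A minimal sketch (the threats and mitigations listed are illustrative assumptions):

```python
from dataclasses import dataclass, field

STRIDE = ("Spoofing", "Tampering", "Repudiation", "Information Disclosure",
          "Denial of Service", "Elevation of Privilege")

@dataclass
class Threat:
    category: str
    description: str
    mitigations: list = field(default_factory=list)

    def __post_init__(self):
        # Keep every entry anchored to a STRIDE category.
        assert self.category in STRIDE, f"unknown STRIDE category: {self.category}"

# Illustrative entries, echoing the examples discussed in the text.
threats = [
    Threat("Denial of Service", "Volumetric DDoS against the public endpoint",
           ["Upgrade to standard-tier DDoS protection"]),
    Threat("Elevation of Privilege", "Rogue administrator misusing access",
           ["Just In Time access for service administration"]),
]

unmitigated = [t.description for t in threats if not t.mitigations]
```

Anything left in `unmitigated` is a prompt to either add a control or get the risk explicitly accepted.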
For example, if you expect to be DDoS’d, maybe it’s a better idea to move from the basic platform DDoS feature to a Standard Tier version. Maybe insider threats from rogue administrators are a big concern? That could indicate you should consider something like Just In Time access for service administration.
The point being made is that we can move towards an approach of clearly identifying why we need to spend extra ££ on additional platform security features – the threat exists therefore we need to mitigate it.
The 14 Cloud Security Principles from the NCSC should be used as the Service Security Standard. These principles encompass both the development and platform security controls, but also extend further to cover important aspects such as supply chain security, secure service administration and operational security.
In the same way as we put ASVS compliance in a Confluence page, a matrix model can be used to show compliance, partial compliance and non-compliance against the 14 NCSC principles. Note that Azure compliance has been previously mapped by Microsoft, and a similar mapping has been created for AWS.
Again, the goal here is visibility. People can see how the service is standing up against the cloud security principles, and what areas need to be worked on. You can revisit this page over time to see how the service security is improving, and perhaps even use the page itself as part of any SIRO accreditation pack as well.
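The three-state matrix is easy to keep as data behind the Confluence page. A sketch of that summary, using a handful of the 14 principles (the subset chosen and their assessed states here are illustrative assumptions):

```python
# Illustrative subset of the NCSC Cloud Security Principles; the
# assessed states below are made-up example data, not a real service.
PRINCIPLES = {
    1: "Data in transit protection",
    2: "Asset protection and resilience",
    10: "Identity and authentication",
    12: "Secure service administration",
}

status = {1: "compliant", 2: "partial", 10: "compliant", 12: "non-compliant"}

def summarise(status):
    """Group principle numbers by their compliance state."""
    out = {"compliant": [], "partial": [], "non-compliant": []}
    for principle, state in sorted(status.items()):
        out[state].append(principle)
    return out

summary = summarise(status)
print(summary)
```

Re-running this against each reassessment gives the over-time view of the service improving, and the `non-compliant` list is the work queue.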
We’ve used the Unified Security approach across a number of projects for our customers, and even within our own platforms as it provides the means to cover security for the app, the platform and the service.
Hopefully you’ll find something of use here to help improve security for your own services – please let us know!