If you’ve ever been involved in developing a software solution, especially one that processes or stores sensitive information, you’ll no doubt have been exposed to Risk Assessments. This article isn’t specifically about IT Project Risk management, which is a somewhat different thing (e.g. wee Jimmy gets hit by a bus, how will we cope?) — it’s more about the risks inherent in the technology side of things. For example, if you use a 3rd party product or library, what are the risks in using it? What are the risks in using a particular cloud hosting platform? That kind of thing.
This next bit is slightly dry, but bear with me 🙂
There are a number of Risk Assessment frameworks that can be used to guide you, such as the National Institute of Standards and Technology (NIST) Special Publication 800–30, which sits at a lovely 95 pages. NIST looks at risk across tiers, beginning at the Organisation (top tier), then delving into business processes and finally information systems in the lower tiers.
NIST Risk Management
Another framework is ISO27005, which belongs to a series of ISO standards such as:
ISO/IEC 27018 establishes commonly accepted control objectives, controls and guidelines for implementing measures to protect Personally Identifiable Information (PII) in accordance with the privacy principles in ISO/IEC 29100 for the public cloud computing environment.
If we look at ISO27005, which focuses on Information Security Risk management in general, you get a defined approach whereby you clarify what the assets, threats, existing controls, vulnerabilities and consequences are.
ISO27005 Risk Management
In the Risk Treatment section, you basically have four options:
Modification — The level of risk should be managed by introducing, removing or altering controls so that the residual risk can be reassessed as being acceptable. Basically, change things and reassess to see if the risk is still as bad as it was when you started.
Retention — The decision on retaining the risk without further action should be taken depending on risk evaluation. In other words, don’t do anything. In this case it’s important to ensure that you have clear confirmation from the customer that they accept the risk.
Avoidance — The activity or condition that gives rise to the particular risk should be avoided. Gotta smile at this one as it basically means that you shouldn’t do the risky thing! Sometimes that just isn’t an option.
Sharing — The risk should be shared with another party that can most effectively manage the particular risk depending on risk evaluation. In other words, offload it. You see this happen a lot, where a solution is passed over to be a managed service. The issue however is that if/when things fail, you still get tarred with the failure. I believe it only provides a false sense of security.
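If you want to track risks alongside your stories rather than in a dry document, the four treatment options above are easy to model as a small data structure. Here’s a minimal sketch in Python (the names and the sign-off rule are illustrative, not taken from ISO27005 itself):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Treatment(Enum):
    """The four ISO27005 risk treatment options."""
    MODIFICATION = "modification"  # change controls, then reassess
    RETENTION = "retention"        # accept the risk without further action
    AVOIDANCE = "avoidance"        # don't do the risky thing
    SHARING = "sharing"            # offload to another party

@dataclass
class Risk:
    description: str
    treatment: Treatment
    accepted_by: Optional[str] = None  # who signed off; needed for retention

    def is_resolved(self) -> bool:
        # A retained risk needs explicit customer sign-off, per the point above
        if self.treatment is Treatment.RETENTION:
            return self.accepted_by is not None
        return True

risk = Risk("Feedback form open to SQL injection", Treatment.RETENTION)
print(risk.is_resolved())  # False until a customer accepts it
risk.accepted_by = "customer@example.com"
print(risk.is_resolved())
```

The useful bit is the sign-off check: a retained risk without a named acceptor is flagged as unresolved, which enforces the “clear confirmation from the customer” point automatically.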
And that’s the real gist of it — you carry out a risk assessment, get a big list of risks and rinse and repeat. Usually this happens as a big task at some stage in the project, and the risks exist as a dry document somewhere. Could there be a better way?
A Suggested approach for Continuous Risk Assessment
In an agile project, we typically don’t have the formal transitions that you would have in a waterfall-based project; transitions that better suit an extensive risk assessment activity. Instead we have sprint refinements where we identify user stories and, hopefully, assess them on the value they have to a customer. What happens if we change this process to consider risk as something that affects the value of the story?
For example, User Story A details a new feature. That feature provides value to the customer, but at what risk? Say we describe it as
Adjusted Value = Value minus Risk
If the risk is low, then the value still remains high. However, if the risk is high, then effectively the value of that story drops as the risk brings a negative effect. Here’s an example.
You want to change contact details on an existing webpage. Fine, this is an easy story to run. The value is reasonable because the business have moved premises and they need people to contact them. The risk is minimal as it’s really only a text change in a configuration file or database, so the value remains as it is.
However, say you want a new feature that allows users to submit feedback to your site. You’ll have a contact form they can use to enter this information, and you want it saved to your database for future consolidation with other pieces of feedback. In this case, the value is reasonable because you’re getting good feedback; however, there is now the added risk of a SQL Injection attack, which reduces the value of the story.
You need to consider the threat of SQL Injection and define how you’ll mitigate it. Maybe you’ll add an additional input sanitisation story that needs to be played at the same time as your original story. Maybe you’ll offload the feedback collection to a 3rd party site? The point is, the security risk is being considered way earlier in the process.
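To make that mitigation concrete, here’s a minimal sketch of the standard defence, parameterised queries, using Python’s built-in sqlite3 module (the table and column names are invented for the example). The driver treats the user’s input purely as data, never as SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feedback (id INTEGER PRIMARY KEY, message TEXT)")

def save_feedback(conn: sqlite3.Connection, message: str) -> None:
    # The ? placeholder keeps user input out of the SQL text entirely,
    # so an injection payload is stored as a harmless string.
    conn.execute("INSERT INTO feedback (message) VALUES (?)", (message,))
    conn.commit()

save_feedback(conn, "Great site!")
save_feedback(conn, "'); DROP TABLE feedback; --")  # classic injection attempt

rows = conn.execute("SELECT message FROM feedback").fetchall()
print(rows)  # both messages stored verbatim; the table still exists
```

Contrast this with building the SQL string via concatenation, where that second message would terminate the statement and drop the table. Playing a story like this alongside the feedback feature is exactly the kind of mitigation pairing described above.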
It’s a pretty simple concept, but if you then export a list of your stories that have the risks considered against them, then you have a good chunk of a risk assessment already covered, with mitigation options applied as well.
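As a sketch of what that story-level bookkeeping might look like (the scoring scale and story details here are invented; use whatever your team already estimates with):

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    value: int                 # business value, e.g. 1-10
    risk: int = 0              # risk score, e.g. 0-10
    mitigations: list = field(default_factory=list)

    @property
    def adjusted_value(self) -> int:
        # Risk drags the effective value of the story down
        return self.value - self.risk

backlog = [
    Story("Update contact details", value=5, risk=0),
    Story("Feedback form", value=7, risk=4,
          mitigations=["Input sanitisation story", "Parameterised queries"]),
]

# Exporting the backlog doubles as the skeleton of a risk assessment
for story in sorted(backlog, key=lambda s: s.adjusted_value, reverse=True):
    print(f"{story.title}: value {story.value}, risk {story.risk}, "
          f"adjusted {story.adjusted_value}, mitigations: {story.mitigations}")
```

Note how the low-risk contact-details change now outranks the higher-value feedback form until the mitigation stories bring its risk down, which is the prioritisation effect you want.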
What about infrastructure then? Well, given that it’s good practice to develop your infrastructure as code, the same approach applies there too. Need a new EC2 instance? Sure; what other security stories need to be played to secure it? Contrived example, I know, but it hopefully covers the point.
There will of course be parts of the solution that can’t be covered this way, but the point is that we are considering risk as part of story refinement, and in combination with a build pipeline that applies security testing, you begin to turn the security process into more of a repeatable, lightweight thing.