Does secure coding kill your mojo?

You know that feeling when you confidently walk through airport security and the metal detector unexpectedly goes off? Suddenly all eyes are on you, as if you’re someone of particular suspicion – and it turns out you just have a coin or a key in your pocket!

This is the latest instalment of our ‘What the Sec is DevSecOps?’ series.

For developers, code analysis or coding standard enforcement can evoke similar feelings. You’re confident in what you’re doing, and then, just when you think you’re on the home stretch, you’re hit with a multitude of “yes, technically correct” security findings to trawl through.

With that one ‘beep’, momentum, confidence and even mojo are lost.

In previous posts, we discussed why it’s important for developers to have a solid understanding of application security fundamentals, and to build with security architecture building blocks that adhere to the secure design principles relevant to your application.

How can we make sure that our secure design, and our understanding of the threats to our application, isn’t undone, as in many of the largest and most damaging breaches of the 21st century, by a mistake or oversight in the way we write a single line of code?

Continuing our degustation of security practices that make for effective DevSecOps, this post will cover making this course of the culinary experience tastier. Before we dig in, let’s take a quick look at the two primary disciplines that come into play as soon as fingers hit the keyboard.

Software Composition Analysis (SCA)

The code that you borrow, build from and rely on needs to be up to scratch, free from vulnerabilities, and something you can use without the risk of having to publish your own intellectual property to the open source community. SCA provides an inventory of the third-party software being used, tells you whether exploitable vulnerabilities reside in the versions you depend on, and gives you insight into your attribution and licensing obligations.
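As a rough illustration of what that boils down to in practice (not the behaviour or output of any particular SCA product), here’s a minimal Python sketch that inventories a requirements.txt, flags a known-vulnerable version and notes licence obligations; the advisory and licence data are invented for the example.

```python
# Minimal SCA-style sketch: inventory dependencies from a requirements.txt,
# flag known-vulnerable versions and note licence obligations.
# The ADVISORIES and LICENCES data below are invented for illustration;
# a real SCA tool pulls this from curated vulnerability and licence feeds.

ADVISORIES = {
    ("examplelib", "1.2.0"): "CVE-XXXX-YYYY: remote code execution",
}
LICENCES = {
    "examplelib": "GPL-3.0 (copyleft: check your distribution obligations)",
}

def parse_requirements(path="requirements.txt"):
    """Yield (package, version) pairs from simple 'name==version' lines."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "==" in line:
                name, version = line.split("==", 1)
                yield name.lower(), version

def sca_report(path="requirements.txt"):
    for name, version in parse_requirements(path):
        vuln = ADVISORIES.get((name, version))
        licence = LICENCES.get(name, "licence unknown: review manually")
        print(f"{name}=={version} | {vuln or 'no known advisories'} | {licence}")

if __name__ == "__main__":
    sca_report()
```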

Secure Code Analysis or Static Application Security Testing (SAST)

Programming languages themselves have no secrets, hold no data, and have no concept of anything sensitive at the language level. While there are better and worse ways of doing things, the primary goal of compilers and interpreted languages is to make it easier to create effective, performant programs – but they lack the context to make the programs built with them inherently secure.

The onus is on us to realise that it’s really easy in any modern programming language to mistakenly introduce security defects that attackers can exploit (often without complicated attacks or inside knowledge), and to ensure that we apply the lessons we learnt in application security training.

That is easier said than done when you’re writing to the backlog, covering many modules, microservices or components, or adding to existing codebases you didn’t originally create, all while not wanting to be the one holding up the release train.

Analysing every line of your code for unintentional security defects such as lack of input validation, implicit trust assumptions, buffer overflows and data exposure is a powerful way to ensure that the core of your application is as robust and resilient as it can be.
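To make the “lack of input validation and implicit trust” class of defect concrete, here is the kind of one-line lapse a SAST tool is built to catch, shown next to its fix using Python’s built-in sqlite3 module (the table and data are purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_unsafe(name):
    # Defect: user input is concatenated straight into the SQL statement,
    # implicitly trusting it. Input like "x' OR '1'='1" rewrites the query's logic.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Fix: a parameterised query keeps data and code separate,
    # so the input can never be interpreted as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("x' OR '1'='1"))  # returns every row: the classic injection
print(find_user_safe("x' OR '1'='1"))    # returns nothing, as intended
```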

How do we keep up the momentum? To avoid that resistance (and, ultimately, killing anyone’s mojo), you need to think about the following approaches.

Timely and Automated

Both SCA and SAST can be introduced into developers’ code-writing tools (IDEs) – not in a way that grinds productivity or creativity to a halt, but helpfully, as “line completion guidance” on specific technical flaws that can be eliminated before progressing, or as alerts when you pull in a library you shouldn’t.

F1 cars have brakes to make them go faster – secure code checking is like a carbon-carbon brake: you’ll end up in production with confidence much quicker if you do it early and often.

They should also both run as integral parts of the Continuous Integration (CI) process and toolchain. Think of the build not progressing if it contains a technical flaw, or the use of a package that you have, with the developers’ agreement, identified as something that simply shouldn’t reside in your code.
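As a purely hypothetical sketch of that gate (the report structure, field names and banned-package list are assumptions, not any specific scanner’s output), a CI step could parse the scan report and refuse to let the build progress:

```python
# Hypothetical CI gate: read a scanner's JSON report and fail the build
# when agreed "blocker" conditions are met. The report structure and the
# banned-package list are illustrative, not any specific tool's format.
import json
import sys

BANNED_PACKAGES = {"examplelib"}          # agreed with the team: never in our code
BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}

def gate(report_path="scan-report.json"):
    with open(report_path) as f:
        report = json.load(f)

    blockers = []
    for finding in report.get("findings", []):
        severe = finding.get("severity", "").upper() in BLOCKING_SEVERITIES
        banned = finding.get("package") in BANNED_PACKAGES
        if severe or banned:
            blockers.append(finding)

    for b in blockers:
        print(f"BLOCKER: {b.get('rule')} in {b.get('file', b.get('package'))}")

    # A non-zero exit code stops the CI pipeline; zero lets the build progress.
    sys.exit(1 if blockers else 0)

if __name__ == "__main__":
    gate()
```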

They can also be applied to scan image and code repositories on a regular basis to maintain the hygiene of the very core of the service you are providing for your consumers.

This obviously can’t only be a human peer review / code review process. Healthy peer analysis, and a little friendly competition to produce secure code, fosters a great culture within DevOps teams, but finding the plethora of general defects, as well as the needle in the logic-flow haystack, is a job best suited to automation. You’re going to be looking at some tools to help you achieve these goals.

Targeted

Back to the airport. Wouldn’t it be great if we had a weapon detector rather than a metal detector?

Secure software tools are trying to achieve this outcome, but can’t guarantee it. What they can do is offer you options over the output you see (think most prevalent and risk-ranked, attack-specific, or tailored to a regulatory standard or application type), settings that let you choose whether to err on the side of caution or report only what is definitive, and, most importantly, the ability to ensure that only issues deemed incontrovertibly unacceptable break the build.

You should also use those filters to reflect the training, the threat model and the secure design elements specific to the application in question (the previous courses of this degustation), and steadily dial up the checking rather than throwing every check for every defect into the pot at once, as sketched below.
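One way to picture that steady dial-up (the phase names, rule categories and confidence thresholds here are assumptions for illustration, not any tool’s real configuration):

```python
# Illustrative "dial it up" policy: start by surfacing only the checks that map
# to your threat model at high confidence, then widen the net phase by phase.
# Phase names, categories and thresholds are invented for this sketch.
POLICY_PHASES = [
    {"name": "phase-1", "categories": {"injection", "secrets"}, "min_confidence": 0.9},
    {"name": "phase-2", "categories": {"injection", "secrets", "authz"}, "min_confidence": 0.7},
    # categories=None means every category is in scope
    {"name": "phase-3", "categories": None, "min_confidence": 0.5},
]

def filter_findings(findings, phase):
    """Keep only the findings the current phase has agreed to act on."""
    kept = []
    for f in findings:
        in_scope = phase["categories"] is None or f["category"] in phase["categories"]
        if in_scope and f["confidence"] >= phase["min_confidence"]:
            kept.append(f)
    return kept

findings = [
    {"category": "injection", "confidence": 0.95, "rule": "sql-injection"},
    {"category": "hardening", "confidence": 0.60, "rule": "verbose-errors"},
]
print(filter_findings(findings, POLICY_PHASES[0]))  # only the injection finding survives
```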

Finally, you should be able to inject tailored content into the remediation advice of the tools you use. That way it isn’t just a “widely accepted” alternative or remediation; it’s team-sanctioned “this is how we do it and why” advice.
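A minimal sketch of what injecting that team-sanctioned advice might look like, assuming hypothetical rule IDs and a findings list shaped like the earlier sketches (real tools expose their own mechanisms for custom rules or metadata):

```python
# Sketch of attaching team-sanctioned remediation advice to tool output.
# The rule IDs and advice text are placeholders for this illustration.
TEAM_ADVICE = {
    "sql-injection": "Use the query helpers in our shared data-access layer; "
                     "raw SQL strings are not accepted in review.",
    "hardcoded-secret": "Fetch credentials from the team vault at startup; "
                        "see the onboarding runbook for the approved pattern.",
}

def annotate(findings):
    """Attach 'this is how we do it and why' advice alongside the generic fix."""
    for f in findings:
        f["team_remediation"] = TEAM_ADVICE.get(f["rule"], "No team-specific guidance yet.")
    return findings
```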

Accepted and Helpful

Ideally, our security-trained DevOps engineers would not only appreciate the problem domain, but also have all the answers when they make mistakes or unknowingly use something they shouldn’t.

But the reality is that they don’t need to. With the right appreciation of the security problems you are checking for, and with tools providing specific guidance on each problem and its widely accepted remediations (in the relevant programming language) as they type, build and check in code, you don’t have to rely on that unfair expectation, or worry that your team spans varying levels of experience.

The better the advice and the more targeted the checks, the more you will foster acceptance within the group: the cultural shift towards treating security as part of everything the code is evaluated for.

Secure coding doesn’t have to kill anyone’s mojo. With the right tools and best practices in place, you can let your developers strut their stuff through the SDLC, with confidence in one hand and creativity in the other.

Matthew Flannery is CTO of Accelera Group.