SECURITY
TTM | ROI | Sellability | Agility | Reputation
The defence of a system against unwarranted tampering or data theft.
Security may not always receive the wide acceptance it deserves, but that's certainly changing, as it is given greater recognition and credence. Security is no longer an optional undertaking, but an essential one.
To understand what security is, we should first understand the types of value that a technology business produces. Value, in this sense, is typically realised through assets, which may be: people, data, external-facing software, internal software and tools, and firmware devices.
DATA ASSETS
Data is often described as a business's “crown jewels”, and for good reason: without data, businesses would be unable to offer customers a wide and diverse range of business services. Yet it's also important to realise that data has different forms (and uses), particularly in the context of Security.
Broadly speaking, data can be designated for one of the following uses:
- (Business) transactional data.
- To serve the internal business functions.
- To help to make better decisions.
(Business) transactional data is typically the information exposed to our customers (or users), and may include: identification, purchases, financial, trading, inventory, governmental policies, voting, legally-binding agreements and contracts, factual content, and communications (e.g. social media). You get the picture.
DIVULGING OF SENSITIVE INFORMATION
Regardless of the type of data, it's predominantly (but not exclusively) of a personal, sensitive, or secret nature, and therefore should never be divulged to unauthorised persons. Some regulatory bodies impose significant fines for policy breaches (e.g. GDPR) to encourage businesses to take preventative action.
The second form of data is (typically) internal to a business, and used to serve its operational functions (such as to produce the value in a Value Stream) or direction. This may include: roadmaps, strategy, system designs, documentation (e.g. to describe a business workflow), personnel information, change management (e.g. using JIRA tickets with the Agile methodology), tool configuration (e.g. CI/CD Deployment Pipelines), secrets, source code, data backups, and system run history (e.g. logs). This is by no means an exhaustive list.
INTERNAL INFORMATION
The internal data used within a business to function can be just as important as the business transactional datasets, if its loss, or unavailability, prevents us from producing value, or recovering from a continuity failure. Don't forget to back this up offsite.
The third form uses data to make better-informed decisions. I have already described some of them in the Observability chapter, however there's also the prospect of some “digital soothsaying” (I'm being facetious), by using Machine Learning to predict a customer's likely course of action, or to upsell to them.
DATA SHARING INTEGRATION STYLES
Be cognisant that some integration approaches between systems duplicate data, indicating that there are multiple locations that must be secured for the same data. Attackers may only need to gain access to one of those systems to get what they need.
TRANSACTIONAL SOFTWARE SERVICES
Typically, users don't directly interact with transactional datasets. Rather, they are allowed access to those datasets (or a runtime behaviour) through an intermediate layer of software services, such as an application or API. These services act as an interface (and often a domain boundary) to - for example - allow data to be moved onto different systems, or to be consumed by users in real-time, making them an attractive point for attack. Consequently, these software services are almost always secured, using authentication and authorisation controls (described later).
INTERNAL SOFTWARE (AND TOOLS)
Businesses also use software applications and tools to support their internal business functions, and thus, to produce value. Common tools include: Deployment Pipelines, ETLs (to move data around), email, telephony and messaging communications, utility scripts, data management scripts, PDF readers, wikis, password key stores, word processing and spreadsheets, and source code version control.
They too must be regularly managed, patched, and upgraded to ensure they remain sufficiently secure.
FIRMWARE
Not all software is consumed transactionally by users over the internet. For instance, some execute on embedded hardware, installed across a range of electronic devices, including: PCs, cars, ATMs, fridge freezers, washing machines, TVs, and phones. Today we refer to them as smart devices, and talk about the Internet of Things (IOT). Of course these devices must also be trustworthy.
PEOPLE
People are also assets of course. Ideas are generated by people, not machines. At least for now. Some business processes are so complex that only a small minority can perform them, or training hasn't been provided to allow multiple people to do so. Their lack of availability can place a business in harm's way.
DATA, SOFTWARE & DEPENDENCIES
To appreciate the scope of the security challenge, we should briefly turn our attention to what makes up a software service. Of course there are many different interpretations but a common microservices model has two distinct component tiers:
- A data store (for transactional data).
- A transactional software service (to expose the data to users).
So far, so good. But each component is also composed from other things and made accessible, either to other software, or to the user. A typical example of these dependencies is shown below (I'm viewing this from the most general, to the most specific).
16 | Utilities (Electricity, Water)
15 | Data center
14 | WAF, IDS, etc

Software (Web) Service:
13 | Network
12 | Infrastructure
11 | Virtualisation & Containerisation
10 | Operating System
9  | Runtime Platform (e.g. Java)
8  | Our code
7  | Third-Party Libraries

Data:
6 | Network
5 | Infrastructure (inc virtualisation)
4 | Operating System
3 | Runtime Platform (e.g. Java)
2 | Software
1 | Data
Wow, that's a lot of things for a "simple" web service. So what's going on? Well, both components (data and service) are either composed from other things, sit upon an enabler technology to run, or are wrapped by other layers that make them accessible.
Let's look at the software service. It depends upon the data tier (and thus all of the layers it's composed from), but it also relies upon others. The code we write (layer 8) depends upon third-party libraries and frameworks (layer 7). Our software is written in a certain language (e.g. Python, Java, Go), so requires that runtime platform (layer 9) to interpret commands. That runtime platform needs to run on an Operating System (layer 10), and probably (nowadays) runs on some form of virtualisation (layer 11). Next, we need infrastructure (hardware resources) for our service to run upon (layer 12). We then need to make it accessible to the network (layer 13). Our service could now be exposed, but we'd probably want to add some additional security controls (layer 14). Finally, our hardware typically runs from multiple physical data centers (layer 15), and that needs resources to allow it to continue functioning (layer 16). No doubt it could be broken down further, but I think I made my point. It's a complex landscape, and that's not even considering the subsidiary tooling (e.g. Deployment Pipelines) required to support software construction.
Unfortunately, there's more concerning news. Any of these layers is a potential target for attackers. Ok, the electricity or water network probably isn't a typical attack vector (one would hope), but there are plenty of others. Take layer 7, representing third-party libraries - a set of software libraries we use, rather than create ourselves. According to Synopsys, third-party libraries were found to make up an average of 78% of the applications they measured (across the industries), and 81% contained at least one vulnerability [1]. Yet these third-party libraries aren't always scanned for vulnerabilities before being used in an application. Such practices (self-)inflict problems by potentially exposing vulnerabilities to attackers without us even being aware of them. An attacker could use such a vulnerability to infiltrate our systems and business, make them unstable, or even exfiltrate (steal) our data.
POST-DEPLOYMENT VULNERABILITIES
Those third-party libraries can also become vulnerable post-deployment; i.e. after we first use them. This could be hours, or even years, later. The point is that software should be upgraded regularly (watch out for Upgrade Procrastination), and vulnerabilities should also be checked for in real production environments.
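As a sketch of the kind of check worth automating, the Python below flags dependencies pinned to a version with a known vulnerability. The package names and advisory data are entirely hypothetical - a real pipeline would query an advisory database (such as OSV) rather than a hard-coded dict:

```python
# Hypothetical advisory data: package -> set of versions known to be vulnerable.
KNOWN_VULNERABLE = {
    "acme-json": {"1.0.0", "1.0.1"},
    "acme-http": {"2.3.0"},
}

def vulnerable_dependencies(installed):
    """Return the names of installed packages pinned to a vulnerable version."""
    return [
        name
        for name, version in installed.items()
        if version in KNOWN_VULNERABLE.get(name, set())
    ]

# "acme-json" is pinned to a bad version; "acme-http" has already been upgraded.
installed = {"acme-json": "1.0.1", "acme-http": "2.4.0"}
print(vulnerable_dependencies(installed))
```

Running such a check both in the Deployment Pipeline and on a schedule against production catches the post-deployment case above, where a library becomes vulnerable long after we first shipped it.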
Of course, this was just one example of a security concern. I could make a similar assessment of operating system (OS) patching, or any other layer. Each layer carries some risk. We should be careful not to neglect them.
MOTIVATIONS
Ok, now that we understand the makeup of a typical software solution, and how a weakness in any layer can prompt an attack, let's turn our attention to what motivates an attacker.
An attacker is typically motivated by one (or more) of the following:
- Self interest. They want to prove to themselves, or others, that they can hack a system.
- To infiltrate, potentially with the intent of a future attack.
- To disrupt. They might disrupt the operations of a business or government, in order to diminish their reputation, for political or financial gain, or simply to create chaos.
- Exfiltration. To extract data, potentially with the intent to extort, diminish a reputation, undertake corporate espionage, or even to expose a business' shady practices.
INFILTRATE
Not all attackers necessarily wish to inflict damage on the victim (although one could argue this is a moot point). Some just want to prove to themselves - or to others - that they have the skills to hack a system, or indeed a person (through a technique known as Social Engineering). Unfortunately, infiltration is often the prelude either to others attacking it (publishing a successful attack often leads to mimicry), or to a future, broader attack. It's also illegal.
SOCIAL ENGINEERING
Social Engineering can be used both to infiltrate an organisation (through a person) but also to inflict damage, such as by enticing staff into opening malicious files from phishing emails or websites.
DISRUPTION
The aim here is to cause disruption to the entity (typically by using a technique such as a Denial-of-Service (DoS), Malware, Ransomware, or a DNS attack).
For example, they might decide to disrupt user (customer) activity by reducing, or limiting, their access to services, employing a Denial-of-Service (DoS) - flooding business services with illegitimate requests to deny legitimate ones access. They might cause a system outage (due to poor Scalability), or even place the system in a peculiar state vulnerable to exfiltration. We might use Throttling, IP Whitelisting, or an API Gateway here.
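To make the Throttling control a little more concrete, here's a minimal token-bucket sketch in Python. It's illustrative only - the rates and class name are invented, and real throttling usually lives in an API Gateway or load balancer rather than application code:

```python
import time

class TokenBucket:
    """Allow roughly `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to the time elapsed since the last call.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: reject (e.g. HTTP 429) rather than queue

bucket = TokenBucket(rate=5, capacity=10)  # 5 req/s, bursts of up to 10
results = [bucket.allow() for _ in range(12)]
print(results.count(True))  # only the burst capacity gets through immediately
```

Each client (keyed by IP, API key, etc.) would get its own bucket; once exhausted, further requests are rejected until it refills, blunting a flood of illegitimate traffic.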
If they go after our servers, they might be able to terminate running processes, encrypt the disks, or lock us out, in which case we've lost Control of our user services, and created a Business Continuity problem. Should we have made ourselves highly dependent upon that infrastructure, circumvented best practices, failed to sufficiently embed automation (e.g. have no means to provision new servers, and do so quickly), have no alternative servers, or have neglected to test our Disaster Recovery (DR) plan, then we have significantly limited our ability to recover, and are - to all intents and purposes - at the whims of our attackers.
Of course disruption need not be limited to what our customers (immediately) see. Internal operations can easily be disrupted too. A disruption to our Value Stream can have severe consequences - we've lost our ability to build new value or to change. The affected assets could be environments, source code, deployments, ticketing (e.g. JIRA), documentation (knowledge), or people.
EXFILTRATION
The successful extraction of information from companies and governments can be very significant, both for the business responsible for its loss, and for the customers affected by the breach. This data can be used to engineer attacks on those customers, spam them, you name it. In some cases the financial cost is in the hundreds of millions of dollars. Regulations such as GDPR take a very dim view of businesses unable to protect their (and their customers') information.
The reasons to exfiltrate information are varied, and may include: to sell on to others, to extort (e.g. "give us money or we'll expose your customers' data"), industrial espionage, or even in the name of public service.
CHALLENGES
Ok, so we've talked a bit about the constituent parts of a typical software application, and an attacker's motivations, but we haven't yet talked about how we might find ourselves in this position in the first place. The following list, whilst not exhaustive, highlights many of the key reasons [1]:
- Aging Systems and Evolvability issues.
- External Dependencies.
- Frankenstein Systems.
- Insecure Coding Practices.
- Over-privileged access.
- Methodology.
- Exposure.
- Architectural Style.
- Unconditional Acceptance of Input.
- Poor (deployment) automation.
- Cryptographic failures.
Let's look at some of them now.
ENTROPY, AGING SYSTEMS & POOR EVOLVABILITY
The speed at which the technology industry moves creates a problem. Entropy and senescence are common causes of security transgressions. It requires us to regularly undertake upgrades and modernisation programmes across much of our estate. For many businesses, this seems impractical, particularly to those suffering from Functional Myopicism. Consequently, systems and toolsets are left in a state of decay.
But is this really a big deal? Yes, I'm afraid so. Consider first that software vendors are - in the main - not here just to serve one customer (us). They have other customers, possibly thousands, and must stay in tune with them. This requires modernity, and the constant evolution of their offering, equating to a rapid release cycle that's often faster than our own. Managing many versions is complex, expensive, and causes them to exert effort on an evaporating ROI. They don't like it, and rightly so - their main focus is on the new. Realistically then, vendors cannot maintain many versions of their product.
As their customers (and potentially our competitors) upgrade, it lessens our own standing with that vendor - they have less appetite to support a dwindling number of customers who have chosen (it's still a choice) not to modernise. Vendors find fewer and fewer reasons to support their aging products, resulting in fewer changes, and (for our context) fewer security fixes, until support also ceases. The consequence to us - as a business - is an increased likelihood of vulnerabilities being discovered in our solution that are never fixed. Put simply, we expose our business to increased Security risk (and Evolvability concerns) by neglecting to undertake regular upgrades.
On a second point, we should also consider talent acquisition and retention. Most engineers live-and-breathe technology and modernity. We are unlikely to find vast pools of talent still working with aged technologies and approaches, or with a willingness to return to those “bygone days”. For businesses stuck in this position, it gets increasingly hard to make the modernity transition, thus leaving systems, tooling, and practices both increasingly vulnerable to attack, but also increasingly difficult to secure. We lose our Evolvability, Change Friction takes hold, Stakeholder Confidence dwindles, and we are eventually forced to find an (expensive) alternative, often in the form of a complete product rebuild. Is this good ROI?
FUNCTIONAL MYOPICISM
If you find yourself in this position, ask why. For instance, is it Functional Myopicism from upper management? Maybe it's a lack of understanding of the consequences? Perhaps it's a lack of automated testing? Maybe it's using Waterfall and Lengthy Release Cycles? The point is, Upgrade Procrastination is the consequence of a belief or situation, and that is the thing that needs to be remedied first.
EXTERNAL DEPENDENCIES
I've already dwelled upon this subject earlier, but to reiterate: we depend heavily upon others to produce (third-party) software that we employ in our solution (according to Synopsys, third-party libraries were found to make up an average of 78% of the applications they measured, across the industries [2]). This is a necessary approach - to ensure we're building the right type of value for our particular business problem - but it can also open the door to security vulnerabilities and attacks, by introducing instabilities or poor practices into our solutions.
FRANKENSTEIN'S MONSTER SYSTEMS
A Frankenstein's Monster System is a “system of systems” that is sewn together to solve a larger problem. They can create multiple problems, including Evolvability, Resilience, and Scalability issues, but in this context we're interested in the principle of the weakest link.
These types of systems gather features from multiple sources. Take this bit from here, that bit from there, and that from over there and combine them. Typically though, we don't get the relevant pieces of each system, but everything from all of them. Lock, stock, and barrel. Consequently, we're using multifarious technologies, over a wide age range, some of which may no longer be supported, creating a security hole which may be leveraged to open up the entire system of systems.
ARCHITECTURAL STYLE
There are valid arguments and counter-arguments for both the Centralised and Distributed architectures. For instance, on the downside, a centralised solution typically accesses a monolithic database, which can cause a poor Separation of Concerns (it's hard to apply the Principle of Least Privilege), thus leading to a larger Blast Radius should our data be exfiltrated. On the flip side, having it all contained in one place enables us to deploy all of our security controls around a single perimeter, so we can be reasonably confident we've not missed anything.
The Distributed Architecture typically promotes the opposite. There can be a good Separation of Concerns, particularly when we employ pure microservices (and potentially the use of Technology per Microservice), but being distributed, it also requires us to distribute our security controls, and that could mean we miss some.
MICROSERVICE PATCHING
Depending upon your outlook, software patching in microservices can be attractive. Being fiercely independent (and using technologies such as Containerisation), microservices enable us to patch software in a piecemeal fashion, permitting sections of a system to be patched, released, and regression tested individually. This offers a great deal of flexibility (particularly if there's a big release coming and the business accepts the risk [3]), but with the potential headache of a lot of redeployments if a critical patch is required.
One final note on the distributed architecture. Whilst the practice of Technology Choice per Microservice offers greater flexibility, it may also expose us to a much wider range of security vulnerabilities (at least to keep track of), as different vendors react at different speeds, potentially leaving some parts of an architecture exposed, whilst other parts are secure.
METHODOLOGY
The delivery methodology (e.g. Agile, Waterfall) we employ also influences Security. Take the Waterfall methodology. It's a sequential, batch-oriented model, with things done in a strict sequence, and any attempt to return to a previous stage is deemed a significant project management failure. If we're lucky, Security in this model has two look-ins - one at the start, allowing security requirements to be added, and one towards the end, post the functional testing. This allows little time to fix a serious problem - such as if a serious security failing is identified, or a control is missed - and the Cult of the Project means it may get released regardless.
SLOW FEEDBACK
The problem with Waterfall in relation to Security is that it takes a long time to receive feedback. This leads to projects being released that are known to contain security flaws, simply because the business has run out of time and they're pressured into releasing it.
The Agile methodology has different challenges. Requirements tend to be defined (and implemented) in a more dynamic fashion, and it can be hard to find security representatives when they're needed. This either leads to slowing down (and creating a mini-waterfall model), or to software being built without sufficient security consideration. Another common complaint about Agile is it can focus us too much on the immediate problem, thus losing sight of the big picture (the “can't see the forest for the trees” analogy).
The general acceptance of DevOps and (more significantly in this case) DevSecOps has helped here. By promoting diversity in teams, we have given a greater (and wider) voice to those who once sat on the periphery (e.g. security), something that's surely a positive step.
PROTOCOLS
Secure protocols are used to encrypt communications between two parties. Without them it's possible for others to “sniff” unsecured traffic, including passwords, something we probably don't want.
Some familiar examples I still see are either the lack of HTTPS in web applications and tooling (meaning the communication is open to others - the “S” being the secure version of the HTTP protocol), or the use of an old (vulnerable) protocol version beneath it (e.g. TLS 1.1).
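In most runtimes, enforcing a modern floor is a one-liner. As a sketch using Python's standard ssl module (shown for the client side; server contexts take the same setting, and no connection is actually made here):

```python
import ssl

# Build a client context with certificate verification on (the default),
# then set a floor that refuses TLS 1.1 and older.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version)
```

Recent library versions increasingly apply such a floor by default, but setting it explicitly documents the intent and protects against permissive defaults elsewhere.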
SECURITY HEADERS & COOKIES
There's also the use (or lack of) security headers and cookies that should be passed through with each request, but I won't delve any deeper into that here.
INSUFFICIENT AUTHENTICATION
Authentication (and authorisation) is a broad subject, and one I will only touch upon. Authentication relates to how someone (or something) proves they are who they claim to be.
The most common form of authentication is the single-factor “password” model. You supply a username and password, and the system accepts it, assuming they're correct. Whilst this has been used for many years, it has certain flaws:
- It uses only a single factor as proof. Consequently, it only requires that one factor to be stolen/captured for that account to be compromised. More and more, we're embedding modern authentication practices into our lives, which expect more than one factor (2FA or MFA).
- It was deemed simple enough that businesses rolled their own implementation. Some businesses are still dealing with the aftermath of this decision. This includes unencrypted passwords in a data store. Some businesses still have legacy systems that authenticate against cleartext passwords but are too fearful to change (I jest you not).
- As attacks became more advanced it forced the industry to adopt a more secure single-factor model. The solution was to lengthen passwords, mandate certain special characters, numbers, and mixed case, and introduce the (now infamous) password reset policy. Unsurprisingly, this led to an increased complexity and cognitive load on the user, an increased demand on forgotten password routes, the writing down of passwords, and password reuse. Users are a resourceful bunch! Unfortunately, the problem being fixed was a reaction based on an already flawed model.
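If we must operate a password model, the minimum bar is to store only salted, slow hashes - never the cleartext described above. A sketch using Python's standard library (the iteration count is illustrative; purpose-built algorithms such as bcrypt or Argon2 are generally preferred):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted PBKDF2-HMAC-SHA256 hash; store (salt, digest), never cleartext."""
    salt = salt or os.urandom(16)  # a unique salt per user defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("password123", salt, stored))                   # False
```

Even then, this only hardens a single factor; it complements, rather than replaces, 2FA/MFA.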
SERVICE-TO-SERVICE AUTHENTICATION
It’s not always desirable to allow unrestricted access to a software service. If anything can access it, how do you know if that thing is trustworthy? Thus, the need for service-to-service authentication. By authenticating at the service level, we can have confidence that the requester is known.
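One lightweight form of service-to-service authentication is a shared-secret HMAC over each request, sketched below. The secret and payloads are invented (real secrets belong in a vault, not in code), and in practice mutual TLS or signed tokens are common alternatives:

```python
import hashlib
import hmac

SHARED_SECRET = b"rotate-me-regularly"  # hypothetical; fetch from a secret store

def sign(body):
    """The calling service signs the request body with the shared secret."""
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

def verify(body, signature):
    """The receiving service recomputes the signature and compares in constant time."""
    return hmac.compare_digest(sign(body), signature)

body = b'{"action": "transfer", "amount": 100}'
sig = sign(body)
print(verify(body, sig))                    # True: a known caller
print(verify(b'{"amount": 1000000}', sig))  # False: a tampered or forged request
```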
INSECURE CODING PRACTICES
One of the more successful attacks is to use the internal software we build against us. It's a bit like a judo master using our own force against us.
In layer 8 (of the figure above), we build software to meet a business need. We may use a third-party platform and libraries to meet this goal, but we generally must contextualise our software for our business (and customer) needs. Lots of things could go wrong here, which is why the application layer represents one of the most common areas for attack. For instance, we might not sufficiently validate incoming data (or encode outgoing data), expose sensitive information (see the over-privileged section), provide over-privileged access, or fail to check that the current user session is permitted access to the account it has requested (i.e. access a different user's details). We might expose a repeatable pattern for how sensitive ids are created, which an attacker can simply increment, write code that causes our service to fall into an unknown state, store clear-text passwords in our code, or publish our (sensitive) code to a publicly-accessible area.
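Two of those flaws have simple remedies, sketched below in Python: identifiers drawn from a cryptographic source cannot be guessed by incrementing, and an explicit ownership check stops one logged-in user reading another's account. The in-memory "database" and function names are illustrative:

```python
import secrets

accounts = {}  # account_id -> owning user (stand-in for a real data store)

def create_account(owner):
    # Unguessable id, unlike sequential "1001", "1002", ... which invite enumeration.
    account_id = secrets.token_urlsafe(16)
    accounts[account_id] = owner
    return account_id

def read_account(session_user, account_id):
    # Authorisation check: the current session must own the requested account.
    if accounts.get(account_id) != session_user:
        raise PermissionError("not your account")
    return f"details for {account_id}"

alice_id = create_account("alice")
read_account("alice", alice_id)  # fine
# read_account("mallory", alice_id) would raise PermissionError
```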
OWASP
The Open Web Application Security Project (OWASP) is a nonprofit foundation that regularly publishes its “top ten” application security risks to promote security improvements [4].
It's a very useful resource that gives us a sense of risk, attack vectors, and possible mitigations and solutions.
OVER PRIVILEGED
Most software services and platforms employ some form of access model to decide whether to allow a user access to a specific feature. This is commonly implemented via Role-Based Access Control (RBAC) - where each user (or service) is allocated a role and (more importantly) privileges unique to it. In such a model, any attempt by that user to access a service outwith this allocation is disallowed.
PRINCIPLE OF LEAST PRIVILEGE
More precisely, we're discussing the Principle of Least Privilege - a principle to promote the allocation of appropriate privileges for our particular context. Think of it as Goldilocks Access - not too much, not too little, just enough access to do what's required.
You might wonder what the big deal is? Well, put simply, an overallocation of privileges creates risk. Fundamentally, it permits users to perform more actions than they should be entitled to. They might never use them, they might use them (and break something), or (the key reason) they may have their credentials stolen and used by an attacker. Think of it this way: were an attacker to gain access to your system, would you rather they had a low-privileged account, or a high one? Exactly.
LIMITING THE ATTACK SURFACE
The Attack Surface, or Blast Radius if you prefer, is important here. The use of expansive privileges increases the attackable area of a system (and business), which consequently increases the likelihood of the attacker gaining deeper and wider access to that system.
This problem is typical of a Monolithic architecture, where a single set of expansive access privileges reaches into every table in the database.
Ok, did I convince you? Surely though, it's such a fundamental concept that everyone already does it? I'm sorry to disappoint you. It might be conceptually simple, but it has tougher implementation and operational challenges. For instance, how should we handle those users who straddle multiple roles? What about a promoted staff member who needs greater access? What mechanism should we use to manage these permissions? How do we onboard, or disable, an account? And what about the plethora of tools that each use their own unique model - how will we integrate them to make it all seem seamless? Not so simple, is it?
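The core check itself, at least, is simple - the operational headaches sit around it. A minimal RBAC sketch, with hypothetical role and permission names, each role carrying just the privileges its holders need (the Goldilocks allocation described above):

```python
# role -> the set of actions that role is permitted to perform
ROLE_PERMISSIONS = {
    "support": {"read_customer"},
    "billing": {"read_customer", "refund"},
    "admin":   {"read_customer", "refund", "delete_customer"},
}

def is_allowed(role, action):
    """Deny by default: unknown roles and unlisted actions are refused."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("support", "read_customer"))    # True
print(is_allowed("support", "delete_customer"))  # False: outwith the allocation
```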
VALIDATION & DOCUMENT CONSUMPTION
Most businesses accept data from external sources. It's part of doing business. That data though could originate from anywhere, including attackers. Even an innocent party (a customer, agent, or even partner system) can be caught up in the machinations of another nefarious player, and inflict unintentional damage upon us. The safest option is to treat all data entering our systems as untrustworthy, until it's proven otherwise. It's important that we validate everything that enters our systems (and thus business), even from our partners.
PARTNER SYSTEMS SECURITY
Our partner systems can also be a source of misfortune for us, should they be vulnerable to attack, integrate with our systems, and we fail to validate those inputs.
Let's consider a law firm. Many are what I'd consider "document-heavy" - the business is heavily influenced by the production and consumption of (legal) documents (and of course their contents). They consume these legal documents (PDFs, Word documents) into their business from many sources (e.g. customers, other legal firms). The key point is, they can't be sure where those documents really originated.
Opening a malicious file could, for instance, result in that business suffering from a ransomware attack. All from one little file.
VIGILANCE
This is why staff are trained to be vigilant, to consider whether any emails may contain a malicious file before they open it.
That's just one example. Systems communicate in many formats (e.g. text, binary, XML, JSON) and integration styles (e.g. file transfer, APIs). Without validation, it's possible to inject code to execute on the server (e.g. a SQL injection attack), or even reflect it back to unsuspecting users to, for example, steal their session.
Data must be validated both for its accuracy (its type, length, format - which is also good for data integrity and reporting), and for its security.
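Both defences can be sketched together: validate the shape of the incoming value, then pass it to the database as a bound parameter so it can never execute as SQL. The table, column, and pattern below are illustrative:

```python
import re
import sqlite3

# Type, length and format check - good for integrity as well as security.
NAME_PATTERN = re.compile(r"^[A-Za-z' -]{1,60}$")

def find_customer(conn, lastname):
    if not NAME_PATTERN.match(lastname):
        raise ValueError("invalid lastname")
    # The placeholder keeps data as data: it is bound, never spliced into the SQL.
    return conn.execute(
        "SELECT firstname FROM customer WHERE lastname = ?", (lastname,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (firstname TEXT, lastname TEXT)")
conn.execute("INSERT INTO customer VALUES ('Jack', 'Smith')")
print(find_customer(conn, "Smith"))  # [('Jack',)]
```

A crafted input such as "x'; DROP TABLE customer;--" now fails validation outright, and even if it somehow reached the query it would be treated as an (unmatched) value, not as SQL.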
POOR DEPLOYMENT AUTOMATION
Poor (slow) deployment practices don't fit precisely into the Security field (in the sense that security principles and controls do), but they do influence our ability to quickly react to a security issue, making them an important consideration.
Time is a critical element of security. The longer software remains vulnerable, the greater the likelihood it will be leveraged for malicious intent. Thus, the ability to quickly, repeatedly, and reliably deliver change makes a business more nimble, but also more secure. Should we get a patch for a critical vulnerability, we can release it quickly and thus, limit its impact.
INDECENT EXPOSURE
It's hard to attack a system that you know nothing about. Attackers therefore begin with discovery: analysing the systems (and business), understanding what's there, and looking for weaknesses across a wide estate of applications, APIs, access controls, networks, infrastructure, tools, platforms, programming languages, application and web servers, password managers, etc. They're looking for anything they can pinpoint to a specific technology or version, which can then be targeted with a more refined, tailored assault.
We don't want to make the attacker's life easy. That means curtailing the information we allow them access to. Consider the following case of indecent exposure. It shows an example of a failing API call (known technically as a stack trace), reflected back to the caller (our attacker).
"exception": "org.springframework.dao.DataIntegrityViolationException",
"trace": "org.springframework.dao.DataIntegrityViolationException: PreparedStatementCallback;
SQL [INSERT INTO customer (firstname, lastname, age) values(?,?,?)];
ERROR: null value in column \"lastname\" of relation \"customer\" violates not-null constraint\n
Detail: Failing row contains (681c62ab-f1ff-46aa-90e3-dcc350d81e65, Jack, null, 55). at
com.mckintek.customer.controllers.CustomerController.create(CustomerController.java:20) at
Caused by: org.postgresql.util.PSQLException:
ERROR: null value in column \"lastname\" of relation \"customer\" violates not-null constraint\n
Detail: Failing row contains (681c62ab-f1ff-46aa-90e3-dcc350d81e65, Jack, null, 55). at
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:356) at...
I've removed lots of unnecessary detail (and I mean lots), but it's still a treasure trove for an attacker, revealing that:
- It's a Java stack.
- It uses the Spring framework.
- It persists to a PostgreSQL database. There's also version information (v3) on its libraries, and the database table (and column) names interacted with.
- (Inferred) The API doesn't have sufficient input validation, relying instead on database constraints to manage it (oh dear).
This is but one example. Attackers may scan networks for open ports, look for unused HTTP verbs, or for default server and web server configurations; the list is endless. Even job posts can offer a wealth of information. “Ah, I see you use Java 11 and an Oracle 19 database? Excellent, now I know where to focus my attacks.” I've even heard stories of attackers interviewing for a job they have no intention of taking, simply to use the interview as a springboard for discovery.
To reiterate: preventing potential attackers from understanding, contextualising, and thus infiltrating our systems is important. By limiting what can be discovered, we reduce the attacker's likelihood of success.
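One way to deny attackers this treasure trove is to reflect only a generic message back to the caller, while keeping the full stack trace in our own logs. The sketch below is a minimal, hypothetical illustration in plain Java (the class and message wording are my own; a Spring application would typically achieve the same thing in a centralised exception handler), assuming a correlation id is enough for support staff to locate the full detail internally.

```java
import java.util.UUID;
import java.util.logging.Logger;

// Hypothetical sketch: map an internal failure to a safe, generic client
// response, keeping the stack trace (and its technology clues) server-side.
public class SafeErrorResponse {

    private static final Logger LOG = Logger.getLogger(SafeErrorResponse.class.getName());

    // Returns the message the caller should see. The full exception detail is
    // logged internally, keyed by a correlation id so support staff can still
    // find it - the attacker learns nothing about our stack.
    public static String handle(Exception e) {
        String correlationId = UUID.randomUUID().toString();
        LOG.severe("correlationId=" + correlationId + " : " + e);
        return "An unexpected error occurred. Reference: " + correlationId;
    }

    public static void main(String[] args) {
        try {
            // Simulated internal failure (e.g. the database constraint violation above).
            throw new IllegalStateException("null value in column \"lastname\"");
        } catch (Exception e) {
            // The caller sees a generic message and a reference id - no framework,
            // database, or schema detail leaks out.
            System.out.println(handle(e));
        }
    }
}
```

The correlation id is the key design choice here: it preserves supportability (we can still trace the failure) without sacrificing opacity to the outside world.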
DATA PROTECTION AND COMPLIANCE
Data protection is vital for systems that store personal information. Allowing private information to be exposed to others (without prior approval), or using it for purposes other than those stated (e.g. testing), may result in prosecution, or at the very least, reputational harm. You may already be familiar with PCI Compliance and the GDPR regulations.
Data protection isn't solely about preventing the exposure of sensitive information to external entities; it's also about protecting that data from persons internal to an organisation. As described in the section entitled “Over Privileged”, aim to give users Goldilocks Granularity to resources - not too little, not too much, just enough to do their jobs.
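Goldilocks Granularity can be sketched as deny-by-default, role-based permissions. The roles and permission names below are hypothetical, purely for illustration; a real system would typically lean on an IAM platform or security framework rather than a hand-rolled map.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the Principle of Least Privilege: each role is
// granted only the permissions its job requires; everything else is denied.
public class LeastPrivilege {

    // Illustrative role-to-permission grants (not a real product's policy).
    private static final Map<String, Set<String>> ROLE_PERMISSIONS = Map.of(
        "support-agent", Set.of("customer:read"),
        "billing-admin", Set.of("customer:read", "payment:read", "payment:refund")
    );

    // Deny by default: an unknown role, or one lacking the permission, gets nothing.
    public static boolean isAllowed(String role, String permission) {
        return ROLE_PERMISSIONS.getOrDefault(role, Set.of()).contains(permission);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("support-agent", "customer:read"));  // true
        System.out.println(isAllowed("support-agent", "payment:refund")); // false - not their job
    }
}
```

The important property is the default: access must be explicitly granted, never implicitly assumed, so an over-privileged insider (or a compromised account) can reach only what their job genuinely requires.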
PILLARS AFFECTED
TTM
Imagine that you identify a gap in the market, leverage it by building a new system, successfully market it, and then sell it to a range of interested parties. Great, eh?
But it so happens that, in the rush to market dominance, Security wasn't considered from the start, making the product insecure. The rush to meet an immediate TTM need has hindered the longer-term TTM; in this case, the system must be redesigned and rebuilt (Rework from The Seven Wastes).
TIMING
Timing is crucial in such scenarios. Engage security too late and we must undo a lot of good work. Do it too soon and we may end up Overprocessing when we don't yet know if the product will fly.
ROI
Rebuild costs can be expensive if Security isn't considered from the start (Secure By Design). An ROI calculation should also incorporate any financial penalties caused by a failure - in whatever form - in that system.
SELLABILITY
If poor security practices affect Reputation, then they also affect sales. Typical questions a prospective customer may ask include:
- Where is my data stored (particularly when using a public cloud)? e.g. some European customers are deeply concerned with their sensitive data being held outside Europe.
- What data do you encrypt? How, and when do you do so? How do you manage keys?
- Have there been any previous occurrences of security mismanagement? That would be a warning sign.
- What technologies (and versions of those technologies) are being used, and how regularly are they patched?
- Software lifecycle processes - how is the software built to ensure it is secure?
- Are regular Penetration Tests undertaken? Show us the results.
Consider the ease with which a product can be sold in normal conditions, versus after the business has suffered a public security failing. Wouldn't the latter be detrimental to future sales opportunities?
REPUTATION
Probably the easiest technical quality to sell as one that directly impacts Reputation is Security. Every other week, there seems to be another corporate heavyweight admitting it too has become the victim of a security breach. Not only do these breaches cause embarrassment, often they also impact that brand's customers - from inaccessibility to paid-for services, all the way up to identity theft.
We're also talking in big numbers. In some cases, a single security breach has:
- Cost a business hundreds of millions of dollars.
- Halved share prices.
- Caused the eye of the regulators to fall upon them (i.e. slowdown).
- Caused the untimely departure of key executives.
It's not something to be taken lightly, especially in light of new laws such as GDPR.
SUMMARY
The successful attacks noted in [5] make for grim reading. So why is Security sometimes overlooked? Firstly, being a non-functional quality, it may not receive the same business focus as functionality. This is to be expected; assuming we're not in the realms of Functional Myopicism, all architectural qualities must vie for attention against feature sets. Secondly, it requires a different skill (and knowledge) set from what many software engineers possess, so it isn't a natural fit with their day-to-day work. Finally, even after all the bad press, the ramifications of a breach continue to be underappreciated. I hope I've shown here that security is multifaceted, with many assets, and myriad methods of attack. And like technology, it's rapidly evolving.
Ideas suffer from Entropy and have a limited shelf life. Their realisations (i.e. software) have an even shorter one. They are quickly replaced with something better, implying that technology ages, becomes redundant, and dies. The longer an upgrade is ignored, the harder it becomes. Be aware of Upgrade Procrastination, and - if possible - view it from the Sustainability perspective.
We can't necessarily control what attackers think, nor their motivations, but we can act within our own Circle of Influence. There are three courses of action open to us (none of them mutually exclusive), which I term PLR (Prevention, Limitation, Recovery). You can:
- Prevent it. For instance, by making discovery difficult, we hope to frustrate and bore prospective attackers and send them off elsewhere.
- Limit its Blast Radius. When we are attacked, we can limit the damage done by using mechanisms like an Intrusion Detection System (IDS), audits, or the Principle of Least Privilege. The ideal outcome is that the attacker gets nothing, but it's better they get hold of only email addresses than (for example) email addresses, passwords, and payment information.
- Recover from it. When the first two approaches are unsuccessful, we must be prepared to recover, and to do so quickly. The “quickly” point is important: there's little point in being able to recover if it takes three months to do so. Areas to look at here include Impact Assessment, Disaster Recovery (DR) planning and testing, (tested) data backups, Deployment Pipelines, and Infrastructure-as-Code (IaC).
It's worth mentioning that some solutions to security problems also solve other problems. For instance, by regularly upgrading technologies, we not only keep our systems more secure, but also promote Evolvability (our systems survive for longer), Innovation (we can employ new ideas to solve problems in a more innovative fashion), and talent acquisition and retention. Many modern technologies and techniques also promote better Scalability and Resilience. All of this supports greater robustness and agility.
FURTHER CONSIDERATIONS
- [1] - The purpose of this section isn't to turn you into a security specialist, but solely to highlight some of the main concerns. I'd recommend buying a book on application security to learn more.
- [2] - How much of an application is typically made up of third-party libraries? “As our findings underscore, open source is everywhere, as is the need to properly manage its use. Open source is the foundation for every application we rely on today. Identifying, tracking, and managing open source is critical for effective software security.” - www.synopsys.com/content/dam/synopsys/sig-assets/reports/2020-ossra-report.pdf, https://www.synopsys.com/software-integrity/resources/analyst-reports/open-source-security-risk-analysis.html
- [3] - Whilst it's advisable to undertake remedial action quickly, you may decide to accept the risk for now, assuming that it has been correctly articulated and scored. In the end, that's your choice.
- [4] - https://owasp.org/www-project-top-ten/
- [5] - Examples of successful attacks. https://www.csis.org/programs/strategic-technologies-program/significant-cyber-incidents